| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
| 240546403 | pes2o/s2orc | v3-fos-license |
Reflection-type photoplethysmography pulse sensor based on an integrated optoelectronic chip with a ring structure
: Reflection-type photoplethysmography (PPG) pulse sensors are widely used in consumer markets to measure cardiovascular signals. Different from off-chip package solutions in which the light-emitting diode (LED) and photodetector (PD) are in separate chips, a GaN integrated optoelectronic chip with a novel ring structure is proposed to realize a PPG pulse sensor. The integrated optoelectronic chip consists of two multiple-quantum well (MQW) diodes. For higher sensitivities, the central and peripheral MQW diodes are suitable as the LED and PD, respectively. The results indicate that the integrated optoelectronic chip based on a blue LED epitaxial wafer is more suitable for the integrated PPG sensor based on device performance. Moreover, the amplitude of the PPG pulse signal collected from fingertips is higher than that from a wrist. The feasibility of the reflection-type PPG pulse sensor based on a GaN integrated optoelectronic chip is fully verified with the advantages of smaller sizes and lower costs.
Introduction
Cardiovascular signals such as blood flow changes or heart rates can provide valuable information for patient diagnosis in clinical applications, the daily health nursing of elders and people with heart problems, and continuous sports health monitoring [1][2][3]. Reflection-type photoplethysmography (PPG) pulse sensors [4][5][6][7][8] feature noninvasive measurement, low cost, fast response, simple structure, and convenient test methods, and are therefore preferred for obtaining cardiovascular signals in consumer markets. A basic PPG sensor only requires a light emitter to illuminate skin tissue and a photodetector (PD) to detect small variations in the light intensity associated with blood volume changes in the microvascular bed. Compared with the transmission-type PPG sensor that uses infrared light sources [9][10][11], the reflection-type sensor, which generally uses a green light source, can be placed on various parts of human skin to detect reflected light and is less susceptible to interference from thermal stress [11]. A shorter distance between the light emitter and PD helps improve the reflected light ratio and the resulting device performance. Thus, Chen et al. recently developed a PPG pulse sensor based on an integrated optoelectronic chip [12] in which the LED and PD are on a single chip and the sapphire substrate can directly contact the skin. The distance between the LED and PD can reach micron scales, which cannot be achieved with existing off-chip packages. Furthermore, much lower costs and more efficient packaging are expected for the integrated optoelectronic chip, as the LED and PD are realized with the same epitaxial layers and fabrication process. However, many issues need to be solved for the practical application of this PPG sensor based on an integrated optoelectronic chip. First, unlike off-chip packages in which the light emitter and PD are independently selected, the performances of the LED and PD in the integrated optoelectronic chip are simultaneously restricted by the parameters of the InGaN/GaN multi-quantum well (MQW) diodes [13][14][15][16]. Therefore, the green light usually adopted by off-chip package schemes may not be suitable for integrated PPG sensors, indicating that LED epitaxial wafers of suitable wavelengths should be identified. Second, novel structures for the integrated optoelectronic chip are anticipated to better meet the needs of reflection-type PPG sensors. Third, a complete PPG pulse sensor prototype based on an integrated optoelectronic chip, which has not been demonstrated to date, needs to be developed to validate the feasibility of practical applications. To solve the above issues, this paper builds a PPG pulse sensor system that includes an integrated optoelectronic chip with a novel ring structure, a weak-signal processing circuit, and signal analysis and display units. Moreover, integrated optoelectronic chips with blue and green emission spectra are compared to determine the suitable wavelength for the integrated reflection-type PPG sensor.
Design and fabrication
The integrated optoelectronic chips are implemented on 4-inch commercial GaN-on-sapphire LED epitaxial wafers. The epitaxial layers on the sapphire substrate are, from bottom to top, an AlN buffer layer, an unintentionally doped GaN layer, an n-doped GaN layer, an InGaN/GaN MQW layer, and a p-GaN layer. The fabrication processes of the integrated optoelectronic chip are similar to those of common discrete flip-chip LEDs [17]. The details are illustrated in Fig. 1 and described as follows. First, as shown in Fig. 1(a), a transparent indium tin oxide (ITO) current spreading layer is deposited via sputtering and treated using rapid thermal annealing in an N2 atmosphere for 7 min. Second, as shown in Fig. 1(b), the mesa regions are defined by photolithography and etched down to the n-doped GaN layer by inductively coupled plasma reactive ion etching (ICP-RIE). Third, as shown in Fig. 1(c), a deep ICP-RIE is further performed to etch through all epilayers for device isolation. Fourth, as shown in Fig. 1(d), Ni/Al/Ti/Al multi-layers are evaporated via electron beam evaporation (EBE) and lifted off to form the p-contact metal. Fifth, as shown in Fig. 1(e), a 1 µm-thick SiO2 layer is deposited by plasma-enhanced chemical vapor deposition (PECVD) and patterned to realize electrical isolation. Sixth, as shown in Fig. 1(f), Ni/Al/Ti/Pt/Ti/Pt/Au multi-layers are evaporated by EBE and lifted off to form the large-sized electrodes for flip-chip packaging. The sapphire substrate is thinned to 200 µm to improve the light extraction from the sapphire side. Figure 2 shows the final structure of the integrated optoelectronic chip, sized at 2.6 × 2.6 mm, after ultraviolet nanosecond laser dicing. The LED and PD have identical epitaxial layer structures (MQW diodes) but different shapes. In our design, the central MQW diode is adopted as the LED and the peripheral one as the PD. The two MQW diodes are separated by a deep isolation trench, and the patterned substrate can be observed in Fig. 2. The patterned sapphire substrate effectively reduces the dislocation density of the epitaxial layers and thereby improves the luminous efficiency of the MQW diodes [18].
Measurement results
The performances of two integrated optoelectronic chips based on blue and green LED epitaxial wafers are characterized to determine the suitable chip for the PPG pulse sensor system. For measurement convenience, the integrated optoelectronic chips are flip-chip bonded to a flexible printed circuit board and encapsulated in polydimethylsiloxane (PDMS) strips, as shown in the insets of Fig. 3. Figure 3(a) shows the IV characteristics of the PD based on the blue epitaxial wafer, in which different currents are injected into the adjacent LED. For the IV curves in the lower left of Fig. 3(a), the central MQW diode is used as the LED and the peripheral one as the PD. For the IV curves in the lower right of Fig. 3(a), the central MQW diode is used as the PD and the peripheral one as the LED. The contrast shows that the peripheral MQW diode is more suitable as the PD of the PPG sensor due to its higher output current. The reason for this phenomenon is that the light emitted from the central MQW diode is more easily collected by the large-sized peripheral MQW diode. The IV characteristics of the PD based on the green epitaxial wafer shown in Fig. 3(b) present the same contrast. Comparing the results in Figs. 3(a) and 3(b) indicates that the PD of the integrated optoelectronic chip based on the blue epitaxial wafer has a larger photocurrent. To explain this phenomenon, the electroluminescence (EL) spectra and response spectra (RS) of the MQW diodes based on blue and green LED epitaxial wafers are measured. As seen in Fig. 4, the overlap between the EL spectra and RS for the blue MQW diode is significantly larger than that for the green MQW diode. Therefore, the integrated optoelectronic chip based on the blue LED epitaxial wafer is more suitable for the integrated PPG sensor based on the device performance. The simple test platform shown in the inset of Fig. 5 is built to verify the PD's ability to detect reflected light emitted from the LED. The PD photocurrent in the integrated optoelectronic chip is caused primarily by the light emitted directly from the LED, and only a small part is caused by light from the reflector. The results shown in Fig. 5 indicate that the PD photocurrent caused by reflected light decreases significantly with the distance between the integrated optoelectronic chip and the reflector. Therefore, for practical applications of the reflection-type PPG pulse sensor, the distance between the chip and human skin should be as small as possible. The corresponding solutions are to place the sapphire substrate of the chip directly on the skin and to reduce the sapphire thickness. The test platform shown in Fig. 6 is built, and an external LED is used as the modulation light source to verify the PD's ability to detect external modulated optical signals that simulate heart pulses. The pseudo-random binary sequence (PRBS) data applied to the external light source are generated with a Keysight 33600A series waveform generator. The central MQW diode is used as the PD and the peripheral one as the LED in the integrated optoelectronic chip based on the blue LED epitaxial wafer. The voltages of the PD measured with an oscilloscope (1 MΩ input impedance) under incident PRBS data at 100, 200, and 500 bps are shown in Figs. 6(b), 6(c), and 6(d), respectively. In each case, the measured voltages of the PD with the central MQW LED in the on and off states are provided in the upper and lower parts, respectively.
The illumination of the central MQW LED only increases the DC bias and has little influence on the signal and noise amplitudes of the output voltage. Moreover, the PD response rate can reach 500 bps while higher rates would result in severe distortions. However, as the heart rate is usually several Hertz, the PD of the integrated optoelectronic chip guarantees sufficient resolution of heartbeat waveforms.
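For illustration only, the sketch below generates a pseudo-random binary sequence in software and maps it onto an on/off LED drive waveform of the kind used as a test stimulus above; the generator polynomial, bit rate and sampling rate are assumptions for illustration, not the settings used with the Keysight 33600A in the experiment.

```python
import numpy as np

def prbs(n_bits, seed=0b1010110011, taps=(10, 7)):
    """Generate a PRBS with a Fibonacci linear-feedback shift register.

    taps=(10, 7) corresponds to x^10 + x^7 + 1, a maximal-length PRBS-10
    generator (period 1023 bits)."""
    state = seed
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        bits[i] = state & 1
        fb = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
        state = (state >> 1) | (fb << (taps[0] - 1))
    return bits

# Map the bit stream onto an LED on/off drive waveform at 500 bit/s,
# oversampled at 10 kHz (both values are illustrative assumptions).
bit_rate, fs = 500, 10_000
bits = prbs(200)
drive = np.repeat(bits, fs // bit_rate)   # 1 = LED on, 0 = LED off
t = np.arange(drive.size) / fs
```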
The ring-structured integrated optoelectronic chips are used to detect a PPG pulse signal that contains a small AC and a large DC component. The AC or pulsatile component is due to the reflected light and is related to blood volume changes that cause variable attenuation of the incident light, as shown in Figs. 7(a) and 7(b). The DC or steady component is due to the light emitted directly from the LED and the light reflected and scattered from the arterial, venous, and tissue layers. Figures 7(c) and 7(d) demonstrate the PD current waveforms with the packaged green integrated optoelectronic chips placed on a fingertip and a wrist, respectively. The PD currents are measured using an Agilent Technologies B1500A semiconductor device analyzer with a 20 mA current applied to the LED. The results indicate that the amplitude of the PPG pulse signal collected from the fingertip is greater than that from the wrist. The results from the blue integrated optoelectronic chips shown in Figs. 7(e) and 7(f) exhibit the same trend. Furthermore, the amplitude of the PPG pulse signal based on the green integrated optoelectronic chip is significantly weaker than that of the blue chip. Therefore, the blue chip is selected to constitute the complete PPG pulse sensor prototype. A schematic diagram of the sensor system is shown in Fig. 7(g); it consists of an integrated optoelectronic chip with a novel ring structure, amplifier and filter circuits for weak-signal processing, and signal analysis and display units based on an Arduino board. The blue integrated optoelectronic chip and the weak-signal processing circuits are packaged on a finger-sized PCB. The processed PPG pulse signal collected from the fingertip is illustrated in Fig. 7(h), and the corresponding heartbeat can be obtained from the software algorithm. Therefore, the proposed PPG pulse sensor can realize real-time heart pulse monitoring.
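The heartbeat extraction algorithm itself is not detailed in the paper; the sketch below shows one common approach (band-pass filtering of the pulsatile component followed by peak detection), with the sampling rate, filter band and synthetic test signal all assumed for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_ppg(ppg, fs=100.0):
    """Estimate heart rate (bpm) from a raw PPG trace sampled at fs Hz."""
    # Band-pass 0.5-5 Hz: keeps the pulsatile (AC) component and rejects the
    # DC baseline drift and high-frequency noise.
    b, a = butter(2, [0.5 / (fs / 2), 5.0 / (fs / 2)], btype="band")
    ac = filtfilt(b, a, ppg)
    # Peaks at least 0.4 s apart, i.e. heart rates below ~150 bpm.
    peaks, _ = find_peaks(ac, distance=int(0.4 * fs), prominence=np.std(ac))
    if len(peaks) < 2:
        return None
    mean_interval = np.mean(np.diff(peaks)) / fs   # seconds per beat
    return 60.0 / mean_interval                    # beats per minute

# Synthetic 75-bpm pulse (1.25 Hz) with baseline drift and noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
ppg = 1.0 + 0.05 * np.sin(2 * np.pi * 1.25 * t) + 0.02 * t + 0.01 * np.random.randn(t.size)
print(heart_rate_from_ppg(ppg, fs))   # approximately 75
```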
Conclusions
In this work, ring-structured integrated optoelectronic chips used for reflection-type PPG pulse sensors were fully characterized. The suitable LED epitaxial wafer and the measurement position on human skin were determined. A reflector was used to simulate human skin, and the results indicate that the distance between the chip and human skin should be as small as possible. Furthermore, the response rate of the PD in the integrated optoelectronic chips can reach 500 bps, which is sufficient for the PPG pulse sensor. For higher sensitivities, larger chips are preferred at the expense of the response rate. The reflection-type PPG pulse sensor prototype based on a blue integrated optoelectronic chip successfully realized real-time heart pulse monitoring and is valuable for daily health nursing.
| 2021-10-21T15:19:55.072Z | 2021-09-15T00:00:00.000 |
{
"year": 2021,
"sha1": "20b3b8353d00e5fd59e16385e6a66aaff52c7cc6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/boe.437805",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d80338f431029652643318e2c62f5532c088f0ef",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
}
| 247875361 | pes2o/s2orc | v3-fos-license |
Introducing an interesting and novel strategy based on exploiting first-order advantage from spectrofluorimetric data for monitoring three toxic metals in living cells
In this work, we developed a novel analytical method based on coupling spectrofluorimetry with first-order multivariate calibration techniques for the simultaneous determination of lead (Pb), zinc (Zn) and cadmium (Cd) in HeLa cells. To achieve this goal, quenching of the emission of graphene quantum dots (GR) was individually investigated in the presence of Pb, Zn and Cd and then, according to the linear ranges obtained from the individual calibration graphs, a multivariate calibration model was developed based on modeling the quenching of the GR emission in the presence of mixtures of Pb, Zn and Cd. First-order multivariate calibration models were constructed by partial least squares (PLS), principal component regression (PCR), orthogonal signal correction-PLS (OSC-PLS), continuum power regression (CPR), robust continuum regression (RCR) and partial robust M-regression (PRM), and their performances were evaluated and statistically compared. Finally, OSC-PLS was chosen as the model with the best practical performance for analytical purposes.
Introduction
Nanomaterials have unusual and valuable properties compared with bulk materials and are therefore widely used for different purposes, especially for sensing [1][2][3][4][5][6][7][8][9][10]. Graphene is a two-dimensional carbon nanomaterial that is both flexible and robust, which makes it very useful for different applications [1]. Graphene exists in different forms such as graphene oxide, graphene quantum dots and graphene nanoplatelets [2,3]. Because of its good electrical, thermal and optical properties, graphene has great potential for developing transistors [2,4], chemical and electrochemical sensors [5] and biological sensors [6]. Graphene is also applied in surface coatings to inhibit corrosion [7,8] and to reduce wear and friction on sliding metal surfaces [9,10]. Graphene sheets with lateral dimensions of less than one hundred nanometers are called graphene quantum dots (GR); they have distinctive chemical and physical properties such as high stability, good solubility, low toxicity, photoluminescence and excellent biocompatibility.
Heavy metals occur naturally in the earth's crust, but their geochemical cycles and biochemical balance have been significantly affected by human activities. Heavy metals are often contaminants that can be hazardous to human health; therefore, monitoring them is important. Lead (Pb) and cadmium (Cd) are widely and naturally distributed toxic heavy metals. There are some reports on the determination of these metals together with zinc (Zn) [11]. Zn is one of the most abundant metals in the human body and a vital element for growth; more than 300 enzymes in the human body contain zinc ions in their active sites, and Zn plays an important role in the synthesis of DNA, RNA and protein, as well as in cell division. Therefore, the determination of these three metal ions is of considerable importance. Heavy metals are usually determined by atomic absorption spectroscopy (AAS), inductively coupled plasma atomic emission spectroscopy, X-ray fluorescence spectroscopy and mass spectrometry, which require expensive instruments that are not available in most laboratories; therefore, developing new analytical methods that are fast, low-cost and accessible is worthwhile.
HeLa is an immortal cell line and the most commonly used human cell line in scientific research; it is durable and prolific, which makes it extremely suitable for research. Therefore, in this study we used HeLa cells as the model system for developing a novel analytical method for the simultaneous determination of Pb, Cd and Zn. Chemometrics combines chemical data with mathematical and statistical methods to extract useful information, which helps chemists better interpret their observations. Chemometricians have applied these methods to many kinds of instrumental data [12][13][14][15][16][17][18][19][20][21][22][23]. In this project, we couple first-order chemometric multivariate calibration techniques with spectrofluorimetric data to develop a novel analytical method for the simultaneous determination of Pb, Cd and Zn in HeLa cells. To achieve this goal, GR was taken up by HeLa cells; Pb, Cd and Zn were then individually taken up, and the fluorescence quenching of the GR in the presence of each metal was recorded to obtain individual calibration graphs. Then, a mixture design was used for multivariate calibration of the quenching of the GR in the presence of Pb, Cd and Zn simultaneously. The spectrofluorimetric responses of the mixtures were modeled by partial least squares (PLS), principal component regression (PCR), orthogonal signal correction-PLS (OSC-PLS), continuum power regression (CPR), robust continuum regression (RCR) and partial robust M-regression (PRM) to build multivariate calibration models; finally, their performances were compared and the best multivariate calibration model was chosen for practical purposes. A schematic representation of the steps described above is shown in Scheme 1.
Chemicals
Trypsin-EDTA, Dulbecco's modified Eagle's medium (DMEM/F-12 (1:1)), fetal bovine serum (FBS, 10%), penicillin-streptomycin (PEN-STREP), zinc nitrate hexahydrate, cadmium nitrate tetrahydrate and Pb(NO3)2 were purchased from Sigma. Commercial Pb, Cd and Zn standards (1 g L−1) were obtained from Merck. Graphene quantum dots (blue luminescent) were purchased from Sigma-Aldrich. The other chemicals needed for this project were available in our laboratory and had been purchased from Sigma or Merck. Doubly distilled water was used wherever water was needed. A phosphate buffer solution (PBS, 0.01 M) was prepared from Na2HPO4 and its pH was adjusted to 7.4 using H3PO4 and NaOH.
Instruments and software
Spectrofluorimetric data were recorded with a Cary Varian spectrofluorimeter equipped with a quartz cell (1 cm path length). First-order multivariate calibration algorithms including PLS, PCR, OSC-PLS, CPR, RCR and PRM, as well as smoothing of the data and the elliptical joint confidence region (EJCR), were run in MATLAB (Version 7.5) using a series of m-files, with the help of the PLS-toolbox or TOMCAT. The HeLa cells were obtained from the cell bank of Kermanshah University of Medical Sciences. The flask was then transferred into a culture room equipped with a deep-freezer (−80 °C), a Memmert incubator, a JTLV CZS hood and a Motic microscope for cell culturing. pH adjustments were performed with a Jenway 3510 pH meter. The performance of the developed methodology was compared with the results of an Agilent atomic absorption spectrometer (AAS) as the reference method. Operating conditions for the AAS were: PMT voltage (450 V), slit width (0.40 nm), lamp current (9.0 mA), sample volume (20 µl), purging gas (argon), sample injection replicates (2) and measurement mode (peak height). All the calculations needed for data processing were performed on a Dell XPS laptop.

Scheme 1. Graphical representation of the steps of the project described in this article.
Procedure
The HeLa cells were dispersed in DMEM + FBS (10%) + PEN-STREP (1%), seeded on five confocal dishes and then incubated in a humidified atmosphere (5% CO2 + 95% air) at 37 °C for one day (24 h). For GR uptake, 100 ng mL−1 GR was added to the different culture dishes and incubated for different times; the cells were then washed with PBS (0.01 M, pH 7.4) and kept in the PBS.
For the simultaneous determination of Pb, Cd and Zn in HeLa cells, the seeded cells were allowed to grow for one day (24 h), and the culture medium of each dish was replaced with 1 mL DMEM containing 1300 ng mL−1 GR; for GR uptake, the dishes were incubated for 2 h. Afterwards, the excess GR was removed by washing the dishes three times with PBS. Then, 1 mL DMEM containing different concentrations of Pb, Cd and Zn (for all three metals: 700-1600 ng mL−1, in 100 ng mL−1 steps) was added to the dishes. The cells were further incubated for 2 h, washed three times with PBS and kept in the PBS. Spectrofluorimetric monitoring of Pb, Cd and Zn was performed with excitation at 405 nm. For background correction of the data, control cells that had not been incubated with GR (containing no GR) were prepared. The procedure was continued by digesting the treated and control cells with trypsin, after which the cells were kept in the PBS. Afterwards, the cells were counted, broken by ultrasonication and centrifuged. Finally, the cell supernatants were measured spectrofluorimetrically.
Theoretical details in brief
In this work, we develop a novel spectrofluorimetric method assisted by chemometric methods that enables the simultaneous determination of Pb, Cd and Zn in living cells. Data treatment and the development of the multivariate calibration models must be performed very carefully to achieve the final goal. Prior to data modeling, all the spectrofluorimetric data were treated with the correction procedure described in [24], and only data that passed this correction step were used in the subsequent steps. Background correction was performed by subtracting the emission of the control cells from the emission of all samples, and the corrected emissions were used for developing the multivariate calibration models. The performance of the calibration models was compared using the root mean square error of prediction (RMSEP) and the relative error of prediction (REP):

$$\mathrm{RMSEP}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{\mathrm{pred},i}-y_{\mathrm{act},i}\right)^{2}},\qquad \mathrm{REP}=\frac{100\,\mathrm{RMSEP}}{y_{\mathrm{mean}}}$$

where y_act and y_pred are the nominal and predicted concentrations, respectively, y_mean is the mean of the nominal concentrations, and n is the number of samples in the validation set. The precision and accuracy of the developed calibration models were also compared according to the ellipses of the EJCR. The univariate calibrations and the multivariate calibration and validation sets were all prepared in the internal medium of the cells, which was extracted by digesting the cells with trypsin. This is an important advantage, as it provides the same medium for calibration and validation of the method and helps exploit the first-order advantage.
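A minimal sketch of the two figures of merit defined above, assuming `y_act` and `y_pred` hold the nominal and predicted concentrations of the validation samples:

```python
import numpy as np

def rmsep(y_act, y_pred):
    """Root mean square error of prediction."""
    y_act, y_pred = np.asarray(y_act, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_pred - y_act) ** 2))

def rep(y_act, y_pred):
    """Relative error of prediction, in percent of the mean nominal value."""
    return 100.0 * rmsep(y_act, y_pred) / np.mean(y_act)

# Example with made-up nominal vs. predicted Pb concentrations (ng/mL).
y_act = [700, 900, 1100, 1300, 1600]
y_pred = [715, 880, 1120, 1290, 1630]
print(rmsep(y_act, y_pred), rep(y_act, y_pred))
```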
Individual calibration graphs
Generally, developing a novel analytical method requires a calibration step by which an instrumental signal is related to the concentration of the analyte of interest. Therefore, the first step of this project was to calibrate the spectrofluorimetric response of the GR against the concentrations of Pb, Cd and Zn. This was achieved by recording the spectrofluorimetric responses of the GR in the presence of Pb, Cd and Zn individually. Building the individual calibration curves required several steps, which are described in this section.
HeLa cells that had taken up 1300 ng mL−1 GR were used to take up different concentrations of Pb, Cd and Zn from 700 to 1600 ng mL−1. Images of the control cells, the cells that took up GR, the cells that took up GR and Pb, the cells that took up GR and Zn, the cells that took up GR and Cd, and the cells containing all three metals are shown in Fig. 1A-F, respectively. Obvious differences among the images confirm the successful uptake of GR and the metals by the HeLa cells.
After the microscopic observation of the cells, the broken cells were monitored spectrofluorimetrically. Prior to selecting the optimum GR concentration for the best emission, the GR concentration was varied and its emission was recorded (Fig. 2A). The variation of the GR emission with GR concentration is shown in Fig. 2B; the curve increases and then levels off, which led us to choose 1300 ng mL−1 as the optimum GR concentration. Afterwards, the broken cells containing GR and Pb, Cd or Zn were monitored individually to build the individual calibration graphs shown in Fig. 2C-H. The calibration graphs provided the linear ranges of the emission quenching for each metal.

Table 1. Concentrations (ng/mL) of the metals in the calibration set.
Multivariate calibrations
For multivariate calibration of the GR emission against the concentrations of Pb, Cd and Zn, a central composite design was developed based on the linear ranges obtained from the individual calibration graphs. The composition of the calibration set is shown in Table 1. All cells in the calibration set contained GR at its optimum concentration of 1300 ng mL−1, and each run had different concentrations of Pb, Cd and Zn chosen according to the linear ranges obtained from the individual calibration graphs. Images of the cells in the calibration set are shown in Fig. 3A-J. The work was continued by applying PLS, PCR, OSC-PLS, CPR, RCR and PRM to the spectrofluorimetric data recorded for the calibration set, which are shown in Fig. 4A. Wherever a number of latent variables (LVs) was required, it was determined by leave-one-out cross-validation (LOOCV). The parameters of the different algorithms were optimized as follows: PLS: number of LVs = 3; OSC-PLS: number of LVs = 3; CPR: number of LVs = 3 and power = 1; RCR: number of LVs = 3, percentage of data contamination (PDC) = 0.1 and delta parameter (δ) = 0.05; PRM: number of LVs = 3 and PDC = 0.12. After applying the algorithms, optimizing their parameters and constructing the multivariate calibration models, their performance was verified by applying them to a validation set of cells with different concentrations of Pb, Cd and Zn, whose composition is shown in Table 2. Images of the cells in the validation set are shown in Fig. 3K-T and their spectrofluorimetric responses in Fig. 4A.
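The OSC pre-processing and robust regression variants used here are not available off the shelf in common open-source libraries, but the core PLS calibration with LOOCV selection of the number of latent variables can be sketched as follows; the spectra `X` and concentrations `Y` below are random placeholders standing in for the real calibration data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: (n_samples, n_wavelengths) quenched emission spectra of calibration mixtures
# Y: (n_samples, 3) concentrations of Pb, Cd and Zn (ng/mL)
rng = np.random.default_rng(0)
X = rng.normal(size=(15, 200))                 # placeholder spectra
Y = rng.uniform(700, 1600, size=(15, 3))       # placeholder concentrations

best_lv, best_rmsecv = None, np.inf
for n_lv in range(1, 6):
    pls = PLSRegression(n_components=n_lv)
    y_cv = cross_val_predict(pls, X, Y, cv=LeaveOneOut())
    rmsecv = np.sqrt(np.mean((y_cv - Y) ** 2))   # cross-validated RMSE
    if rmsecv < best_rmsecv:
        best_lv, best_rmsecv = n_lv, rmsecv

model = PLSRegression(n_components=best_lv).fit(X, Y)
print(best_lv, best_rmsecv)
```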
The constructed multivariate calibration models were applied to the validation set to examine their performance, and the concentrations predicted by the different algorithms are collected in Table 3. The REPs and RMSEPs collected in Table 4 clearly show that OSC-PLS had the best performance among the tested algorithms, with the performance following the order OSC-PLS > PLS > PRM > RCR ~ PCR ~ CPR. For further comparison of the algorithms, their accuracy and precision were compared using the EJCR, and the results are shown in Fig. 5. The outputs of the EJCR are ellipses whose size reflects the precision of the method, and the ideal point falling within an ellipse confirms the accuracy of the method.
Table 2. Concentrations (ng/mL) of the metals in the validation set.
The ellipses related to the application of the different algorithms for the prediction of Pb, Zn and Cd in the validation set are shown in Fig. 5A, B and C, respectively. The blue, pink, green, yellow, black and red ellipses correspond to OSC-PLS, PLS, PRM, CPR, RCR and PCR, respectively, and the black point shows the ideal point. The yellow, green, black and red ellipses overlapped one another, and only the blue and pink ellipses were clearly distinct from the others. According to the EJCR results, the blue ellipse, which corresponds to OSC-PLS, confirmed the best performance, which motivated us to select it as the best model for the simultaneous determination of Pb, Zn and Cd.
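For reference, the EJCR test described above amounts to checking whether the ideal point (intercept 0, slope 1) of the predicted-versus-nominal regression lies inside the joint confidence ellipse of the fitted intercept and slope. A minimal sketch, assuming simple unweighted least squares, is given below.

```python
import numpy as np
from scipy.stats import f as f_dist

def ejcr_contains_ideal(y_act, y_pred, alpha=0.05):
    """True if (intercept, slope) = (0, 1) lies inside the elliptical joint
    confidence region of the regression of predicted on nominal values."""
    y_act, y_pred = np.asarray(y_act, float), np.asarray(y_pred, float)
    n = y_act.size
    X = np.column_stack([np.ones(n), y_act])            # design matrix [1, y_act]
    beta = np.linalg.lstsq(X, y_pred, rcond=None)[0]    # [intercept, slope]
    resid = y_pred - X @ beta
    s2 = resid @ resid / (n - 2)                        # residual variance
    d = beta - np.array([0.0, 1.0])                     # distance to ideal point
    lhs = d @ (X.T @ X) @ d
    rhs = 2 * s2 * f_dist.ppf(1 - alpha, 2, n - 2)
    return lhs <= rhs

# Example: a model whose predictions track the nominal values closely.
print(ejcr_contains_ideal([700, 900, 1100, 1300, 1600],
                          [712, 885, 1115, 1295, 1620]))
```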
To further verify the performance of the OSC-PLS-assisted spectrofluorimetric method, AAS was applied as the reference method to predict the concentrations of the validation set, and the results are shown in Table 5. The corresponding REPs and RMSEPs are also presented in Table 5, and as can be seen, the method showed good performance. For a graphical comparison of AAS and OSC-PLS using the EJCR, their results were fed to MATLAB, the EJCR analysis was run on them, and the results are shown in Fig. 6. As can be seen, AAS (black ellipse) showed better accuracy and precision than OSC-PLS (red ellipse); however, taking into account that OSC-PLS is a low-cost, simple and fast method in comparison with AAS, we suggest it for practical applications.
The intra-day precision of the assay was estimated by calculating the relative standard deviation (RSD) for the analysis of 800 ng mL − 1 Pb, Zn
Conclusion
In this work, a novel analytical methodology based on coupling spectrofluorimetry and chemometrics was developed for the simultaneous determination of Pb, Cd and Zn in HeLa cells. Among the tested chemometric algorithms, OSC-PLS showed the best performance for the simultaneous monitoring of Pb, Cd and Zn, and its performance was comparable with that of AAS as the reference method. The results of this work show that chemometrics has great potential for assisting instrumental techniques in developing accurate novel methods that perform considerably better than the instrumental techniques alone. As a new research field for our group, we will continue coupling chemometric methods with instrumental techniques for bioanalytical purposes, and this work should serve as a bridge connecting the world of chemometricians with the world of bioanalysts.
Statement
The main idea of this project belongs to Dr. Ali R. Jalalvand and the other authors contributed equally in this project.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
| 2022-04-03T15:10:11.298Z | 2022-04-01T00:00:00.000 |
{
"year": 2022,
"sha1": "85913502127a2d816b9ebf1f998945c15c055165",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.toxrep.2022.03.049",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3da8a1108cccfd2f24e683f5d46b32199b57cef2",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
}
| 246529873 | pes2o/s2orc | v3-fos-license |
DNA methylation in blood cells is associated with cortisol levels in offspring of mothers who had prenatal post‐traumatic stress disorder
Abstract Maternal stress during pregnancy is associated with differential DNA methylation in offspring and disrupted cortisol secretion. This study aimed to determine methylation signatures of cortisol levels in children, and whether associations differ based on maternal post‐traumatic stress disorder (PTSD). Blood epigenome‐wide methylation and fasting cortisol levels were measured in 118 offspring of mothers recruited from the Kosovo Rehabilitation Centre for Torture Victims. Mothers underwent a clinically administered assessment for PTSD based on the Diagnostic and Statistical Manual of Mental Disorders. Correlations between offspring methylation and cortisol levels were examined using epigenome‐wide analysis, adjusting for covariates. Subsequent analysis focussed on a priori selected genes involved in hypothalamic–pituitary–adrenal (HPA) axis stress signalling. Methylation at four sites was correlated with cortisol levels (cg15321696, r = −0.33, cg18105800, r = +0.33, cg00986889, r = −0.25, and cg15920527, r = −0.27). In adjusted multivariable regression, when stratifying based on prenatal PTSD status, significant associations were only found for children born to mothers with prenatal PTSD (p < 0.001). Several sites within HPA axis genes were also associated with cortisol levels in the maternal PTSD group specifically. There is evidence that methylation is associated with cortisol levels, particularly in offspring born to mothers with prenatal PTSD. However, larger studies need to be carried out to independently validate these findings.
| INTRODUCTION
Stress related disorders during pregnancy, such as high stress, depression and anxiety have been shown to affect foetal development and lead to a multitude of poor birth and later health outcomes (Glover, 2014). This includes low birthweight for gestational age, negative effects on brain and cognitive development, an increased likelihood of social and behavioural problems, and a higher risk of stress-related mood disorders in childhood which can persist into later life (Jarde et al., 2016). Maternal stress during pregnancy may disrupt the setting of offspring hypothalamic-pituitary-adrenal (HPA) axis signalling, resulting in aberrant cortisol secretion (Castelli et al., 2020). Epigenetic mechanisms, including DNA methylation, are likely to play an important role, and could help explain the lasting effects of early-life maternal stress on the offspring (J. Ryan et al., 2017).
Cortisol is a glucocorticoid hormone secreted from the zona fasciculata of the adrenal cortex when stimulated by adrenocorticotropin release from the pituitary gland (Lightman et al., 2020), and is primarily secreted in response to stress (Pulopulos et al., 2020).
Epigenetic mechanisms, such as DNA methylation, play a role in cortisol (HPA-axis) signalling. DNA methylation is involved in cortisol production (Kometani et al., 2017), and glucocorticoid receptor activity (Watkeys et al., 2018). Further, DNA methylation has been associated with cortisol levels (Wrigglesworth et al., 2019) and shown to mediate the association between childhood trauma and cortisol stress reactivity (Argentieri et al., 2017;Houtepen et al., 2016).
Post-traumatic stress disorder (PTSD) is characterised by a reexperiencing of traumatic events, associated with symptoms including intrusive thoughts and memories, active avoidance, negative changes to mood and cognition, and changes to reactivity and arousal (Miao et al., 2018). Post-traumatic stress disorder is thought to have negative intergenerational effects, passed on from mothers to offspring prenatally (Miao et al., 2018;von der Warth et al., 2020;Yehuda & Bierer, 2007), with biological mechanisms such as DNA methylation likely to be involved (J. Ryan et al., 2016). One of the primary biological characteristics of PTSD is disrupted cortisol secretion (Speer et al., 2019). Differences in cortisol, and cortisol signalling have also been observed in the offspring of mothers that have experienced prenatal PTSD (Bader et al., 2014;Liu et al., 2016), which may be associated with poor psychiatric outcomes. We have recently demonstrated differential blood DNA methylation profiles in offspring of mothers who had prenatal PTSD compared to those without (Hjort et al., 2021). This is also supported by previous studies showing that intergenerational effects are partly attributed to epigenetic processes (Perroud et al., 2014;Youssef et al., 2018). The mechanisms of this transgenerational effect of PTSD in pregnancy could be due to differential DNA methylation which is associated with cortisol levels in offspring.
The aim of this study was to identify DNA methylation signatures associated with fasting cortisol levels in children, at the epigenomewide level and then focus on specific candidate genes of the HPA stress axis. A secondary aim was to ascertain whether maternal PTSD during pregnancy modifies any observed associations.
| Study cohort
This study involved women recruited from the Kosovo Rehabilitation Centre for Torture Victims (KRCT) and their youngest offspring. Participant characteristics have been described in detail previously (Hjort et al., 2021). The KRCT recruited 130 women aged between 30 and 59 years, who had experienced torture and/or sexual violence during the Kosovo war. Participants had given birth to at least one child after the war, which was not related to sexual assault. All women were of Albanian ethnicity, born in Kosovo, and had a home address in Kosovo during the war in 1999. Clinical assessments and questionnaires were conducted during 2019 by psychologists and medical doctors at KRCT for all participants. Diagnosis of PTSD was based on the Diagnostic and Statistical Manual of Mental Disorders criteria's "Clinician-Administered PTSD Scale" (CAP-IV) (American Psychiatric Publishing Inc., 1994), which had been translated and validated in Albanian language (Turner et al., 2003). Sociodemographic and lifestyle data were also collected. These included age, educational attainment (none, primary, secondary, or higher), marital status (married, divorced, single, widowed), place of residence (city or village) and prenatal cigarette smoking.
| Ethics statement
This study was approved by the commission for the ethical issues within Kosovo doctor's chamber. The study was carried out in accordance with the Ministry of Health Central Ethics Committees in Kosovo, as per Kosovar Government guidelines, and with the Helsinki Declaration. All participants who agreed to take part provided informed consent. They were informed that they have the right to withdraw from the study at any time. Any participant suffering from adverse effects of trauma was referred to a psychologist or medical doctor at KRCT. The information provided by the study participants was treated throughout the process with confidentiality according to the Kosovar law and Declaration of Helsinki II on biomedical research and complied with general data protection regulation.
| Blood collection
Fasting blood samples were collected from 120 of the youngest offspring born to each woman, by lab technicians at the Tirana Laboratory, Pristina, Kosovo, in March and April 2019. After a 20-min rest period in a comfortable environment, a sample was collected from each child between 7:30 and 9:30 am in a 6 ml tube (SARSTEDT AG & Co.). Cortisol was measured in offspring blood samples using an electrochemical luminescence immunoassay and reported in International System of Units (nmol/L) (COBAS E411, Roche). The reference range for cortisol levels in the laboratory in Pristina (Kosovo) was used to identify low and high cortisol levels. A separate sample for DNA extraction was collected in a 6 ml EDTA plasma tube and stored at −20°C for 2-3 weeks, before being shipped to Denmark where it was stored at −80°C until processed (DNeasy DNA blood kit, Qiagen).
| DNA methylation profiling and bioinformatics
Epigenome-wide DNA methylation data was generated using the Illumina's Infinium HumanMethylationEPIC BeadChip (Illumina), processed by GenomeScan in Leiden, Netherlands. After removing one sibling from two sets of twin pairs, methylation data of 118 offspring were available for analyses.
Pre-processing of data was carried out using R version 4.0.3 (R Core Team, 2021), and the minfi package (Aryee et al., 2014). Probes at methylation sites (also known as cytosine-phosphate-guanine dinucleotides or CpGs) where array signals were not discernible from background noise (at P > 0.01) were removed from the data set using the 'detectionP' function of minfi. No samples required removal as after removing problematic probes, no sample was missing data, and all were uniformly bi-modally distributed. Child biological sex was determined and confirmed using the 'getSex' function of minfi. Data were normalized using the subset quantile normalisation method (Wu & Aryee, 2010). After removing sex chromosome probes, known cross-reactive probes (Pidsley et al., 2016), and probes containing a single nucleotide polymorphism at the methylation site (CpG) or within a single-base extension (SBE) (Supplementary Table 1), 625,431 CpGs were available for analysis.
CpG methylation signal intensities were then transformed into M-values for analysis (log2 of the methylated/unmethylated signal intensity ratio), and β-values for biological interpretation (proportion of methylation between 0 and 1 at each site). M-values are preferred for statistical analysis because of their distributional properties, which better reflect patterns of methylation across the epigenome (Du et al., 2010). Blood cell estimation was carried out using the 'estimateCellcounts2' function of the FlowSorted.Blood.EPIC package (Salas & Koestler, 2018). This function estimates the proportions of B cells (CD19+), T lymphocytes (CD4+ and CD8+), monocytes (CD14+), neutrophils and natural killer cells (CD56+) in blood. As neutrophils were the most prominent cell proportion (mean = 49.9%, SD = 0.08), this estimate was left out of the adjustment models.
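For reference, the standard logit-type relationship between β-values and M-values can be sketched as follows (illustrative Python rather than the R/minfi pipeline actually used):

```python
import numpy as np

def beta_to_m(beta, eps=1e-6):
    """M-value = log2(beta / (1 - beta)); eps guards against division by zero."""
    beta = np.clip(np.asarray(beta, float), eps, 1 - eps)
    return np.log2(beta / (1 - beta))

def m_to_beta(m):
    """Inverse transform: beta = 2^M / (2^M + 1)."""
    m = np.asarray(m, float)
    return 2.0 ** m / (2.0 ** m + 1)

print(beta_to_m([0.1, 0.5, 0.9]))    # approx [-3.17, 0.0, 3.17]
print(m_to_beta(beta_to_m(0.25)))    # 0.25
```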
| Statistical analysis: Epigenome wide association study
To identify differentially methylated CpGs associated with cortisol levels, two separate analyses were carried out: one to find associations between methylation and cortisol as a continuous measure (range 10.7-722.8 nmol/L), and another using categorical measures to observe whether methylation differs between groups of low (≤170 nmol/L) or high (≥550 nmol/L) cortisol compared to normal (>171 to <549 nmol/L). The cate package, which removes unwanted variation while controlling for known variables in modelling, was used to carry out high-dimensional factor analysis and confounder-adjusted multiple testing (Wang & Zhao, 2020). Models assessed continuous and categorical cortisol levels associated with differential methylation, and adjusted for the child's age and sex, the mother's age, level of education, marital status, living location, and pregnancy smoking status, as well as EPIC array chip number for batch effect, and estimated cell proportions (not including neutrophils). A small number of participants were missing data for some of the covariates, and these were imputed using the median value. This included maternal age (n = 1), living area (n = 1), maternal education (n = 3), maternal marital status (n = 2) and maternal smoking during pregnancy (n = 2). All p-values were adjusted for multiple testing using the Benjamini-Hochberg method (BH.Adj.P) (Chen et al., 2017).
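The cate package handles the latent-factor adjustment internally; a simplified per-CpG version of the same kind of model (ordinary least squares of M-values on cortisol plus covariates, followed by Benjamini-Hochberg correction) could be sketched as follows, with all variable names assumed and without the unwanted-variation step.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def ewas_ols(mvals, cortisol, covariates):
    """Regress each CpG's M-values on cortisol, adjusting for covariates.

    mvals:      DataFrame (samples x CpGs) of M-values
    cortisol:   Series of fasting cortisol (nmol/L), aligned to samples
    covariates: DataFrame (samples x covariates), e.g. age, sex, cell proportions
    """
    X = sm.add_constant(pd.concat([cortisol.rename("cortisol"), covariates], axis=1))
    rows = []
    for cpg in mvals.columns:
        fit = sm.OLS(mvals[cpg], X).fit()
        rows.append((cpg, fit.params["cortisol"], fit.pvalues["cortisol"]))
    res = pd.DataFrame(rows, columns=["cpg", "beta", "p"]).set_index("cpg")
    res["p_bh"] = multipletests(res["p"], method="fdr_bh")[1]
    return res.sort_values("p")

# Usage (hypothetical): results = ewas_ols(mvals_df, cortisol_series, covariates_df)
```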
For stratification analysis, we analysed β-values from CpGs with p < 0.15 after adjustment for multiple testing. These analyses were conducted using STATA version 14 (StataCorp, 2015).
| Candidate gene analysis
Key genes involved in HPA axis signalling investigated in this study included those encoding signalling molecules such as brain-derived neurotrophic factor (BDNF) (de Assis & Gasanov, 2019) and corticotropin releasing hormone (CRH) (Zhou & Fang, 2018), as well as glucocorticoid receptors and chaperones involved in receptor activity, nuclear receptor subfamily 3 group C members 1 and 2 (NR3C1/2) (Iftimovici et al., 2020; Plieger et al., 2018) and FK506-binding protein 51 (FKBP5/FKBP51) (Zannas et al., 2016), and the corticotropin receptors CRH receptor 1 and 2 (CRHR1/2) (Grimm et al., 2017; Sanabrais-Jiménez et al., 2019). Methylation data were extracted for each of these genes from the epigenome-wide association study (EWAS) data set. Genomic positions of each gene were selected using the Homo sapiens (human) genome assembly GRCh37 (hg19) reference in the University of California, Santa Cruz genome browser (Haeussler et al., 2019). Genomic regions of probe extraction included the gene body, as well as approximately 25% of the gene size up- and downstream. This was done to ensure capturing data from any nearby CpG islands (concentrated areas of CpGs, mostly present in gene promoter regions; Hughes et al., 2020) and to capture CpGs surrounding the gene. Correlations between continuous cortisol measures and CpG methylation were carried out using Pearson (normally distributed methylation) and Spearman (non-normally distributed methylation) methods, and adjusted for multiple comparisons using the Holm method (H.adj.p) (Aickin & Gensler, 1996) in R. STATA was then used for multivariate linear regression for CpGs found to be significantly correlated with cortisol levels, using the aforementioned variables, both on the whole sample population and stratified by prenatal PTSD status.
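A simplified sketch of the candidate-gene correlation step (Pearson or Spearman correlation of each CpG with cortisol, followed by Holm adjustment) might look like this; the normality check, variable names and data layout are assumptions, not the authors' exact code.

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr, shapiro
from statsmodels.stats.multitest import multipletests

def candidate_gene_correlations(betas, cortisol):
    """Correlate methylation beta-values at candidate CpGs with cortisol.

    Uses Pearson when the methylation values look normally distributed
    (Shapiro-Wilk p > 0.05), Spearman otherwise, then Holm-adjusts p-values.
    betas:    DataFrame (samples x candidate CpGs)
    cortisol: array-like of cortisol levels aligned to samples
    """
    rows = []
    for cpg in betas.columns:
        x = betas[cpg].values
        normal = shapiro(x).pvalue > 0.05
        r, p = pearsonr(x, cortisol) if normal else spearmanr(x, cortisol)
        rows.append((cpg, "pearson" if normal else "spearman", r, p))
    res = pd.DataFrame(rows, columns=["cpg", "test", "r", "p"]).set_index("cpg")
    res["p_holm"] = multipletests(res["p"], method="holm")[1]
    return res.sort_values("p")
```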
| Participant characteristics
Participant characteristics, stratified by cortisol level, can be seen in Table 1. Just over three quarters of offspring had a normal fasting cortisol level (n = 90, 76%). Offspring of mothers with prenatal PTSD had higher cortisol levels. Offspring cortisol levels were also associated with prenatal maternal smoking, living location and maternal marital status.
| Epigenome-wide analysis
In the epigenome-wide association analysis between methylation and cortisol levels, no CpGs reached the adjusted 5% significance level after correction for multiple testing and controlling for estimated blood cell proportions, child's age and sex, maternal age, education, marital status, living location and prenatal smoking status. Methylation at four CpGs was nominally associated with continuous cortisol levels after adjustment for covariates (p < 1 × 10−6), and these were the most significant sites associated with cortisol after correction for multiple testing (at BH adjusted p-value <0.15) (Table 2). The correlations between methylation and cortisol levels at these CpGs can be seen in Figure 1(a-d). The strongest effect sizes observed were for CpGs cg15321696 and cg18105800, where the correlation coefficients were −0.33 and +0.33, respectively. None of these sites were replicated in the analysis of categorical cortisol groups; however, six separate CpGs were nominally associated with categorical levels of cortisol after the aforementioned adjustments (Supplementary Table S2, Supplementary Figure S1).
| Stratification of multivariate linear regression by prenatal maternal post-traumatic stress disorder status
Eighty-five of the 118 mothers had PTSD during pregnancy. For the four CpGs found in epigenome wide association analysis, multivariate linear regression analysis was then stratified according to prenatal maternal PTSD (n = 85) or no-PTSD (n = 33). In offspring born to women with maternal PTSD, these CpGs were significantly associated with cortisol levels (Table 3). In contrast, none of these CpGs were associated with cortisol levels in offspring of women without maternal PTSD. Findings concerning the categorical cortisol analysis (Supplementary File, Table S2) were similar, in being only significant for offspring born to mothers with maternal PTSD.
| Candidate gene analysis
Seven HPA-axis related candidate genes were examined, comprising a total of 407 CpG sites. In multivariate linear regression adjusting for confounding factors, methylation at seven of these CpGs remained significantly associated with cortisol levels (Table 5). Stratifying these findings by prenatal PTSD status, five remained significantly associated with cortisol levels in offspring of mothers with PTSD (Figure 2a,b,d,f,g), but none were associated with cortisol levels in the non-maternal-PTSD group. Interestingly, three significantly correlated CpGs which were not significant overall in the adjusted multivariate linear regression were also found to be significant in the prenatal PTSD group (Table 5 and Figure 2c,e,h).
| DISCUSSION
In this cohort of survivors of sexual violence during the Kosovo war (1998)(1999), we have previously shown that maternal PTSD during pregnancy was associated with higher cortisol levels in the offspring, as well as differential methylation of HPA-axis stress-related genes (Hjort et al., 2021). We extend these findings in the current study, with the identification of sites across the epigenome where offspring DNA methylation was associated with cortisol levels. Furthermore, in stratified analyses, these associations were significant only in offspring born to mothers with PTSD during pregnancy. Together these findings suggest that PTSD during pregnancy plays a role in mediating aberrant cortisol signalling in offspring, in part regulated by DNA methylation.
There may be clinical utility in using DNA methylation markers of cortisol which reflect biological embedding of future mental health disease risk due to prenatal exposure (Aristizabal et al., 2020; Graham et al., 2019). DNA methylation measures could be used to assess the risk of cortisol dysregulation and subsequent mental health issues, regardless of knowledge of prenatal maternal mental health. They are also likely to reflect more stable changes in stress signalling and thus help explain long-lasting associations with health outcomes occurring many years later (Nemoda & Szyf, 2017). Larger studies are needed to determine the true utility of these biomarkers. However, the fact that our observations were only seen in the PTSD group supports the idea of transgenerational programming due to negative prenatal exposure. There have been few previous epigenome-wide association studies of cortisol levels. The first of these found 22,425 sites associated with cortisol stress reactivity in a sample of 85 participants (Houtepen et al., 2016). None passed adjustment for multiple testing, so the authors focussed on three CpGs, cg27512205 (intronic region of KITLG), cg05608730 (upstream of C1QTNF2), and cg26179948 (intronic region of JAZF1-AS1), which were also associated with childhood trauma, all of which were negatively correlated with cortisol. Another small study (n = 22) previously found early post-conceptional maternal cortisol to be associated with multiple measures of methylation across the genome, with numbers varying across time points (between 2 and 1639 sites) (Barha et al., 2019). However, specific sites were not listed. Another more recent study of 318 participants found that one CpG, cg16290996 in the GAS5 gene, was negatively correlated with morning cortisol levels (Lohoff et al., 2020). None of these CpGs were associated with cortisol levels in our study. Instead, we found that DNA methylation within three separate gene regions, as well as upstream of a separate gene, was most strongly associated with cortisol. To our knowledge, methylation of these genes has not previously been examined in relation to cortisol; however, differential methylation has been found in relation to diseases like Alzheimer's (Lambert et al., 2013; Yan et al., 2016), and for prenatal exposures like smoking (Breton et al., 2014; Küpers et al., 2015; Markunas et al., 2014; Richmond et al., 2018; Rzehak et al., 2016; S. Rauschert et al., 2019).
A positive correlation was observed between methylation at cg18105800 within an exon of the F-Box And Leucine Rich Repeat Protein 2 gene, which encodes a subunit of a ubiquitin protein ligase complex, and cortisol (Matsushima et al., 2019). This is a novel finding.
Methylation at several genes involved in HPA-axis regulation was associated with cortisol levels. After adjusting for multiple testing and covariates, seven CpGs were significantly correlated with cortisol levels, including cg27193031 and cg25328597 from BDNF, cg10106856 and cg15844800 from CRHR1, cg14939152 and cg17349736 from NR3C1, and cg07335874 from NR3C2. BDNF methylation has previously been studied in relation to prenatal exposures (Pilkay et al., 2020), bi-polar disorder (Duffy et al., 2019), and depressive symptoms (Braithwaite et al., 2015). However, none of these studies examined BDNF methylation in the context of cortisol levels. When looking at specific CpGs, a separate study found that differential methylation (in placental tissue) at cg27193031 in offspring was significantly associated with maternal war-related stress exposure in 24 mother/child dyads (Kertes et al., 2017). However, significance did not remain after adjustments. CpGs cg25328597 and cg04672351 appear to be novel observations. They have been reported (but not significant) in children in relation to maltreatment in childhood (Weder et al., 2014); however, no previous studies reported their association with cortisol, or prenatal PTSD, in childhood or intergenerational studies. NR3C1 encodes the glucocorticoid receptor. This receptor is a binding site for cortisol when it is released in response to acute and chronic stress (Gjerstad et al., 2018). One of the primary functions of the glucocorticoid receptor is to facilitate a negative feedback loop halting the stress response, a process which can be affected by sustained high levels of cortisol (Efstathopoulos et al., 2018).
There are many intergenerational NR3C1 DNA methylation studies of prenatal, perinatal, and early childhood exposure to maternal factors. These include but are not limited to, the effect of maternal care giving on infant methylation (Conradt et al., 2019), maternal support during stress (Bosmans et al., 2018), maternal psychosis (Palma-Gudiel et al., 2015), and harsh parenting (Lewis et al., 2021).
Interestingly, the harsh parenting study, which included 97 children, also looked at NR3C1 methylation and daily cortisol levels, and found that NR3C1 methylation could predict a steeper daily cortisol slope (Lewis et al., 2021). Offspring cg14939152 methylation has been studied in relation to maternal antenatal depression and anxiety (Bleker et al., 2019); however, no association was found. Only the one aforementioned study directly compared NR3C1 methylation to cortisol measures (Lewis et al., 2021), and no studies have explored the relationship between cortisol and NR3C1 methylation in response to maternal PTSD. In our study, after stratifying by prenatal PTSD status, all associations remained significant only in the PTSD group.
Thus, our findings suggest there may be a relationship between in utero exposure to adverse maternal mental health and methylation of these particular CpGs.
| Strengths and limitations
One of the strengths of this study was the collection of blood during a narrow two-hour window in the morning. Our sample size is relatively small (n = 118) for an EWAS study which measures over 850,000 methylation sites. We took a less conservative approach in reporting findings after adjusting for multiple testing. They should therefore be interpreted with caution due to the increased risk of type I statistical errors (false positives) (Chen et al., 2017), and thus require independent replication in another study. The small sample size of this study also meant groups formed by stratification of maternal PTSD status were quite small, especially the control group (n = 33 compared to n = 85 for PTSD). This could have influenced the power to detect significant associations in that group.
Finally, genetic factors could not be accounted for as maternal samples were not collected. Future studies should seek to include genotyping to assess genetic relatedness between mother and child.
Further, although we exclude methylation at known SNPs, it is pertinent to include relevant SNPs within epigenetic analysis to assess whether methylation patterns may be driven by genetic variation.
| CONCLUSION
To our knowledge, this is the first study to examine epigenome-wide methylation in association with fasting cortisol levels in offspring and according to maternal prenatal PTSD. The relationship between DNA methylation and cortisol levels is largely understudied and, considering methylation may have utility as a biologically embedded biomarker of future adverse mental health risk, this highlights an important unmet gap in research. This study found some evidence that differences in DNA methylation are associated with cortisol levels, and that this relationship is prominent in offspring whose mothers had prenatal PTSD. However, the findings need to be replicated in larger cohorts that allow for greater statistical power before we can be confident of the observations of this study.
ACKNOWLEDGEMENT
The authors greatly appreciate all the children and their mothers who participated in this study.
Real-time error detection in metal arc welding process using Artificial Neural Networks
Quality assurance in a production line demands reliable weld joints. Human-made errors are a major cause of faulty production. Promptly identifying errors in the weld while welding is in progress will decrease the post-inspection cost spent on the welding process. Electrical parameters generated during welding can characterize the process efficiently. Parameter values are collected using a high-speed data acquisition system. Time series analysis tasks such as filtering and pattern recognition are performed over the collected data. Filtering removes the unwanted noisy signal components, and the pattern recognition task segregates error patterns in the time series based upon similarity, which is performed by the Self-Organizing Map clustering algorithm. Welder quality is thus compared by detecting and counting the number of error patterns appearing in the welder's parametric time series. Moreover, the Self-Organizing Map algorithm provides a database in which patterns are segregated into two classes, either desirable or undesirable. The database thus generated is used to train classification algorithms, thereby automating the real-time error detection task. The Multi-Layer Perceptron and the Radial Basis Function network are the two classification algorithms used, and their performance has been compared based on metrics such as specificity, sensitivity, accuracy, and the time required in training.
INTRODUCTION
Manual arc welding, though carried out using specifically designed power sources, is a dynamic and stochastic process due to the random behavior of the electric arc and the metal transfer that takes place during welding. Fluctuations in voltage and current are so rapid and random that a high-speed data acquisition system is required to capture these variations. The data thus acquired can be analyzed to derive characteristic features of the welding process. The data collected is huge and contains features corresponding to the performance of the welding power source, the welding consumables, and the welder. As far as the power source and welding consumables are concerned, their qualities can be improved by using better technology and better composition respectively. But the process is greatly affected by human-made errors. The number of errors as well as the uniformity of the weld depends greatly upon the experience of the welder. Presently existing techniques declare the quality of a welder by inspecting the finished weld through various NDT techniques, which are time consuming and resource intensive. Amit Kumar et al. have presented a detailed study on the utilization of ANNs in welding technology [1] and suggest that ANNs can be used to greatly optimize welding techniques.
T. Polte and D. Rehfeldt [2] suggest a better alternative for declaring weld quality. Electrical parameters (current and voltage) obtained from the power supply terminals are of random character and vary over a wide range. Consequently, the statistical behavior of these parameters is used to characterize the process, which can thereafter be used to detect the type of errors in the process. A PDD transformation of the voltage time series is fed to the ANN to classify the type of errors. On the other hand, J. Mirapeix and P.B. Garcia-Allende [3] used spectroscopic analysis of the plasma spectra produced during welding to monitor the quality of the resulting weld seams. Plasma spectra captured during the welding process go through a PCA technique, thereby reducing the spectral dimensions, and are consequently fed to an ANN for the fault detection task. An ANN is also used by Hakan Ates [4] for prediction of welding parameters such as hardness, tensile strength, elongation, and impact strength. A. Sanchez Roca obtained a model using an ANN for estimating the stability of gas metal arc welding [5]. ANNs have also been applied to predict the weld bead geometry using features derived from infrared thermal video of a welding process [6].
A novel and less computation-intensive method of error detection is presented in this paper. Since the electrical parameters are a clear description of the process in real time, rather than using their PDD transformations we can directly use these waveforms to promptly recognize process errors. Artificial neural networks (ANNs) are employed to analyze the processed data. The Self-Organizing Map (SOM) [9] algorithm clusters the input voltage data based upon waveform similarity. The database thus prepared is used to rank welders as well as to train classification algorithms. The Multi-Layer Perceptron (MLP) and the Radial Basis Function (RBF) network are the two types of ANN classifiers used to perform the classification task on the database obtained from the SOM algorithm. A gradient descent algorithm trains the MLP model. In the RBF network, the transformation to a higher dimension is performed using the K-means clustering algorithm [8], after which the separation boundary is learned using gradient descent. Both ANN classifiers are able to perform the classification task fairly accurately, and their performance has been compared.
EXPERIMENTAL
The aim of the experiment is to acquire voltage and current values from the terminals of the power supply. In Figure 2.3, the third and fifth segments are the most unsteady, and these patterns correspond to errors in the weld at those particular times. Detecting these patterns with a data mining algorithm will help us to pinpoint errors in the weld in real time.
Every welder undergoes 3 trials; thus for 30 welders we have 90 voltage time series. After applying a proper filtering technique, every voltage time series is segmented into 17 segments, each of 100,000 data points (thus each welder has a total of 51 segments from his three voltage time series). This procedure gives us a total of 1530 segments, each of 100,000 data points. Thereafter, downsampling reduces the data points in each segment from 100,000 to just 50. As the signal has only low-frequency content, this degree of downsampling does not affect the overall shape of a segment.
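A minimal sketch of this filtering, segmentation, and downsampling pipeline is given below; the sampling rate, the filter design, and the synthetic trace are assumptions for illustration, as the paper does not specify them.

```python
# Sketch: low-pass filter a voltage trace, split it into 100,000-point
# segments, and downsample each segment to 50 points (synthetic signal).
import numpy as np
from scipy.signal import butter, filtfilt, resample

fs = 10_000                                   # assumed sampling rate (Hz)
voltage = np.random.randn(1_700_000)          # stand-in for one acquired trace

b, a = butter(4, 50 / (fs / 2), btype="low")  # assumed 50 Hz low-pass cutoff
filtered = filtfilt(b, a, voltage)

segments = filtered[: 17 * 100_000].reshape(17, 100_000)
downsampled = np.array([resample(s, 50) for s in segments])
print(downsampled.shape)                      # (17, 50)
```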
RESULTS AND DISCUSSIONS
The database of segments is unlabeled, i.e., it is not known which pattern belongs to which class; thus an unsupervised clustering algorithm is required to group the patterns based on similarity. Therefore, using the Self-Organizing Map unsupervised clustering algorithm, similar patterns are grouped together to form 9 different groups. Groups that cluster steady patterns are identified, and all the patterns clustered under them are marked as desirable; patterns under the other groups are marked as undesirable. As each welder has 51 patterns, the number of undesirable patterns is counted per welder. The welder with the most undesirable patterns is ranked lowest, and that is how the ranking is performed. The weights of clusters 4 and 5 turned out to have the least standard deviation; thus all the patterns under them are marked as desirable, while the patterns under the other clusters are marked as undesirable, and these are the error patterns. Now the pattern database is classified and is suitable for the supervised learning of a classification algorithm. This step will help in automating the task of error detection.
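The sketch below shows how such a 9-cluster SOM grouping could be implemented; the third-party minisom package, the 3x3 map, and the training settings are assumptions, since the paper does not name its implementation.

```python
# Sketch: cluster the 50-point segments with a 3x3 SOM (9 groups).
import numpy as np
from minisom import MiniSom  # assumed implementation, not named in the paper

patterns = np.random.randn(1530, 50)  # stand-in for the downsampled segments

som = MiniSom(3, 3, input_len=50, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(patterns, num_iteration=5000)

# Map each pattern to its best-matching unit, i.e., one of the 9 clusters.
labels = [som.winner(p) for p in patterns]
```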
The database is now modified by marking 1 against each pattern belonging to the desirable class and 0 against each pattern belonging to the undesirable class. This modified database is used for the supervised learning of the Multi-Layer Perceptron and Radial Basis Function type neural networks. 70% of the database is used for learning and 30% is used for testing. A comparative study of the performance of both networks on the same database is done.
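A minimal sketch of this supervised stage follows; the scikit-learn MLPClassifier, the hidden-layer sizes, and the placeholder labels are assumptions for illustration, not the exact configuration used in this work.

```python
# Sketch: train an MLP on the SOM-labelled database with a 70/30 split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.randn(1530, 50)            # placeholder segment database
y = np.random.randint(0, 2, size=1530)   # 1 = desirable, 0 = undesirable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"test accuracy: {mlp.score(X_te, y_te):.3f}")
```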
SIMULATION RESULTS
Training of the classification algorithms is done using 1021 entries, and the last 509 entries are used for testing purposes. The selection of the number of hidden nodes in the single-layer and double-layer MLPs was based on the study done by Wanas [7]. Table 6 suggests that the 2-layer MLP network gave the least % test error among the three types of networks. The % test error shown by the RBF network was indeed the highest, but the time required for training the RBF network was 5 times less than that required by the MLP network, as shown in Table 7. It is also seen that specificity, which is the fraction of true negative classifications, was highest for the RBF network, and sensitivity, which is the fraction of true positive classifications, was highest for the two-layer network. The comparison results between MLP and RBF are consistent with the study done by Santos et al. [10].
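For reference, the metrics compared here can be computed from a binary confusion matrix as sketched below; the predictions are placeholders, not the reported results.

```python
# Sketch: specificity, sensitivity, and accuracy from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.random.randint(0, 2, 509)  # placeholder test labels
y_pred = np.random.randint(0, 2, 509)  # placeholder predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # fraction of true positive classifications
specificity = tn / (tn + fp)   # fraction of true negative classifications
accuracy = (tp + tn) / (tp + tn + fp + fn)
```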
Choosing either of the two trained neural networks (based upon the accuracy needed), we can detect the number of error patterns in the incoming voltage time series in real time, thereby detecting errors in the weld in real time.
CONCLUSIONS
A technique is developed to detect the number of error patterns in a time series using Artificial Neural Networks. The Self-Organizing Map algorithm could successfully segregate the steady patterns from the patterns that are in error. A reference database is generated using the clustering performed by the SOM. Both the MLP and RBF networks can easily be trained with the reference database and thereby be used to classify an unknown pattern. The two-layer MLP gave better error performance than the RBF, but the training time required by the MLP was almost 5 times that required by the RBF.
Glioblastoma Multiforme Patient Survival Prediction
Glioblastoma Multiforme is a very aggressive type of brain tumor. Due to spatial and temporal intra-tissue inhomogeneity, location, and the extent of the cancer tissue, it is difficult to detect and dissect the tumor regions. In this paper, we propose survival prognosis models using four regressors operating on handcrafted image-based and radiomics features. We hypothesize that the radiomics shape features have the highest correlation with survival prediction. The proposed approaches were assessed on the Brain Tumor Segmentation (BraTS-2020) challenge dataset. The highest accuracy of the image features with the random forest regressor approach was 51.5% for the training and 51.7% for the validation dataset. The gradient boosting regressor with shape features gave an accuracy of 91.5% and 62.1% on the training and validation datasets respectively. This is better than the BraTS 2020 survival prediction challenge winners on the training and validation datasets. Our work shows that handcrafted features exhibit a strong correlation with survival prediction. The consensus-based regressor with gradient boosting and radiomics shape features is the best combination for survival prediction.
Introduction
Glioblastoma multiforme (GBM) is the commonest type of primary malignant brain tumor. In the case of adults, glioblastoma makes up 60% of all brain tumors [1]. The World Health Organization (WHO) classified GBM as a grade IV type of cancer due to its invasive and diffusive nature. Patients suffering from GBM have a poor prognosis, with a median survival rate of about ten months [1]. This is due to its aggressive nature, highly heterogeneous appearance, location, shape, and unpredictable response to therapy [2].
Magnetic Resonance Imaging (MRI) has been widely utilized to examine tumors due to its non-hazardousness, high contrast, and superior resolution. Generally, manual segmentation of a tumor in MRI is time consuming and prone to subjective error. In this regard, an automated segmentation method would be of enormous help to oncologists and clinicians. It can help in early diagnosis as well as in therapeutic strategy planning. In recent years, deep learning-based segmentation approaches have outperformed traditional state-of-the-art methods [3,4]. Segmentation delineates the brain tumor into Whole Tumor (WT), Enhancing Tumor (ET), and Tumor Core (TC). Handcrafted features extracted from these segments are used to classify the survival days of the patients.
There are many segmentation models available. Recently, Jiang et al. [5], in the BraTS 2019 challenge, proposed a two-stage asymmetric cascaded U-Net [2] structure. Each model is made up of a larger encoder, in order to be able to extract more complex semantic features, and a smaller decoder part for generating a segmentation map with a size identical to the input. Zhao et al. [3] proposed multiple methods to generate robust segmentation results. They grouped these into data processing, model devising, and optimization modules. Multiple methods are assimilated into each of these modules to enhance segmentation results. McKinley et al. [4] proposed a DenseNet-based U-Net architecture. Dilated convolutions were used to increase the receptive field while retaining spatial information. The model was trained by combining label uncertainty loss, binary cross-entropy, and focal loss. Dice scores on the BraTS-2019 validation dataset were 0.91 (WT), 0.83 (TC), 0.77 (ET), and on the BraTS-2019 test dataset were 0.89 (WT), 0.83 (TC), 0.81 (ET). Therefore, researchers seem to be favouring the U-Net based architecture for segmentation.
Once the tumor is segmented, features are extracted for overall survival prediction. Agravat et al. [6] used a dense-layer U-Net trained on the focal loss for segmentation. Next, age, statistical features, and radiomic features were used to train a Random Forest Regressor (RFR) for survival prediction, and the obtained accuracy on the test dataset was 0.58. Wang et al. [7] used U-Net and U-Net ensembles with attention gates trained on soft dice scores and cross-entropy for segmentation. For survival prediction, they proposed the following prognosis models: i) a baseline model where only the age feature was used to train a linear regressor; ii) a radiomic model where morphological and texture features were extracted from segmentation results; iii) a tumor invasiveness model, where the relative invasiveness coefficient (RIC) and the age feature train a support vector regressor. The tumor invasiveness model was found best for survival prediction. The accuracy for survival prediction was 0.59 and 0.56 for the BraTS-2019 validation and test datasets respectively. Feng et al. [8] used an ensemble of U-Net models. The models were trained on patches containing brain pixels. The main advantage of using an ensemble method is that the network parameters need not be fine-tuned. Further, for OS prediction, volume and surface area features were extracted for each Region of Interest (ROI), along with age, to train a linear regression model. The training and testing set accuracy was reported as 0.31 and 0.55 respectively on the BraTS-2019 datasets. Wang et al. [9] utilized a 3D U-Net-based model, and the training occurred in two phases using patching methods. The first phase included both brain and background pixels, whereas the second included only brain pixels. The dice score coefficient loss function was utilized to train the 3D U-Net model. Further, for survival prediction, volume, surface area, and age were used to train an ANN model. The training, validation, and testing accuracies of the models were 0.515, 0.448, and 0.551 respectively. Islam et al. [10] proposed a 3D U-Net architecture for segmentation, where attention blocks have been integrated with the decoder modules. For survival prediction, various geometric, fractal, and histogram-based features were extracted to train multiple regressor models, i.e., support vector machine (SVM), multi-layer perceptron (MLP), random forest regressor (RFR), and eXtreme gradient boosting (XGBOOST). The validation accuracies were: 0.329 for SVM, 0.414 for MLP, 0.356 for RFR, and 0.429 for XGBOOST.
The proposed paper aims to establish the correlation between handcrafted features and overall survival prediction. Unlike the existing state-of-the-art methods used for survival prediction [6], [7], [8], [9], the paper uses four predictors and two feature sets to establish their correlation with overall survival prediction of High Grade Glioma (HGG) patients. Shape features and gradient boosting regressors achieve better survival prediction accuracy than state-of-the-art methods. It establishes that shape features have a strong correlation with survival prediction. The organization of the remainder of the paper is as follows: The Brain Tumor Segmentation (BraTS) dataset is described in Section 2, survival prediction methods with four predictors and two feature sets are in Section 3, Section 4 contains results and discussions and finally the conclusion of the paper is in Section 5.
BraTS dataset
Due to different standards and differences in datasets, objectively evaluating brain tumor segmentation methods and predicting overall survival is a challenge. Nevertheless, for the comparison of different tumor segmentation and survival prediction techniques, BraTS (the brain tumor segmentation challenge) [11,12,13] has become a popular platform. Since 2018, three tasks have been included in this platform. The first task is segmenting the brain tumor. The second task is predicting overall survival (OS), and the third is estimating the uncertainty of the predicted tumor sub-regions. Tumor segmentation involves delineating the tumor into three sub-regions, namely the whole tumor, the tumor core, and the enhancing tumor. Specificity and sensitivity metrics as well as the Dice score and Hausdorff distance are used for evaluating performance. The overall survival prediction task classifies survival days into the following categories: long-term survivors (>15 months), intermediate survivors (between 10 and 15 months), and short survivors (<10 months). Samples with resection status GTR (gross total resection) are used to rate the performance of OS prediction. An accuracy metric is used for performance evaluation, whereas mean and median squared error are used for post-analysis [14].
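As a concrete illustration of this class definition, the sketch below maps survival days to the three OS categories; the day thresholds are approximate conversions of 10 and 15 months and are our assumption.

```python
# Sketch: mapping survival days onto the three BraTS OS classes.
def os_class(survival_days: float) -> str:
    if survival_days < 300:        # < ~10 months (approximate threshold)
        return "short-survivor"
    if survival_days <= 450:       # ~10-15 months (approximate threshold)
        return "intermediate-survivor"
    return "long-term survivor"    # > ~15 months

print(os_class(410))               # intermediate-survivor
```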
The BraTS 2020 training dataset includes 369 volumetric samples of high-grade glioma (HGG) and low-grade glioma (LGG) cases. It includes metadata for 236 samples, such as age, survival days, and resection status for survival days prediction (Gross-total Resection (GTR) = 119, Sub-total Resection (STR) = 10, and NA = 107). The validation dataset includes 125 sample images and metadata (age, survival days, and resection status), with 29 images having a GTR resection status. Each subject includes four preoperative MRI scans (T1-weighted, T1-CE, T2-weighted, and FLAIR) and manually annotated ground truth results. The ground truth annotations include Necrotic and Non-Enhancing tumor core NCR/NET (label 1), Edema (label 2), Active Tumor (label 4), and 0 for everything else. The dataset has been pre-processed, i.e., all the scans are co-registered to the same anatomical structure, skull stripped, and resampled to an isotropic resolution of 1 × 1 × 1 mm³. The width, height, and depth of each sample are 240, 240, and 155 respectively.
Survival Prediction Methodology
We use the 3D U-Net model for brain tumor segmentation proposed by Isensee et al. [15]. This was the highest-ranking and a simple model in BraTS 2017. Like the U-Net [2], this model [15] comprises a contracting path to extract more feature information with increasing network depth. It has an expansion path to generate a segmentation mask with precise localization information, and skip connections for better feature reconstruction at every stage of the expansion path. In our work we have used bias field correction, normalization, clipping of maximum/minimum intensity to remove outliers, rescaling to [0, 1], and setting non-brain pixels to 0. The model was trained on a patch size of 128×128×128, randomly generated from all the input MRI modalities. The obtained dice scores on the BraTS 2020 validation dataset are 0.880 (WT), 0.858 (TC), 0.759 (ET). The segmentation of tumor tissue of a validation sample is shown in Figure 1. The figure shows a visual comparison of an input FLAIR image and a predicted image. The segmented parts are then used for survival prediction with prognosis methods based on 1) image-based features and 2) radiomics-based features, using the following four predictors.
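A minimal sketch of the clipping, rescaling, and masking steps is given below (bias field correction is omitted); the percentile thresholds and the stand-in brain mask are assumptions.

```python
# Sketch: clip intensity outliers, rescale to [0, 1], zero non-brain voxels.
import numpy as np

def preprocess(volume: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    lo, hi = np.percentile(volume[brain_mask], [1, 99])  # assumed percentiles
    vol = np.clip(volume, lo, hi)
    vol = (vol - lo) / (hi - lo + 1e-8)  # rescale to [0, 1]
    vol[~brain_mask] = 0.0               # non-brain pixels set to 0
    return vol

volume = np.random.rand(240, 240, 155)   # BraTS sample dimensions
mask = volume > 0.05                     # stand-in brain mask
out = preprocess(volume, mask)
```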
Predictors and Parameter Tuning
We have used four predictors with parameter tuning. These are (1) an Artificial Neural Network (ANN) [9,10], (2) a Linear Regressor (LR) [7,8], (3) a Gradient Boosting Regressor (GBR) [10], and (4) a Random Forest Regressor (RFR) [6,15,10]. All of these predictors were used by the top-performing models in recent BraTS challenges. These predictors deal well with small datasets and overfitting problems. The image-based prognosis method uses only seven features, making it less vulnerable to overfitting. We retain default parameters for the ANN and LR, while parameters for the GBR and RFR are tuned using a grid search. We tuned the number of estimators, the depth of the tree, the sample split, and the learning rate for the GBR. In the case of the RFR, the number of estimators and the depth of the tree were tuned. The predictors with radiomics features were also tuned.
For radiomics features, it turned out that an ANN with five hidden layers was better than one with 2 or 3 hidden layers. Further, we tuned the epochs, learning rate, number of neurons, and the optimizer for the ANN. In the LR model, a search was also performed over the penalty term and the number of iterations, with feature parameters regularized using LASSO and a ridge regressor. We tuned the number of estimators, maximum depth, and learning rate for the GBR. In the RFR model, we tuned the number of estimators, the maximum depth of the tree, the minimum sample split, the minimum samples in a leaf node, and the maximum features parameters. Since the random forest and gradient boosting regressors work on ensemble-based learning, they are robust, efficient, and less prone to overfitting.
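The grid search described here could look like the following sketch for the GBR; the parameter grids and the placeholder data are illustrative, not the exact values searched in this work.

```python
# Sketch: grid search over GBR hyperparameters with cross-validation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X = np.random.randn(236, 7)                 # e.g., the seven image features
y = np.random.uniform(30, 1500, size=236)   # placeholder survival days

grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [2, 3, 4],
    "min_samples_split": [2, 5],
    "learning_rate": [0.01, 0.1],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0), grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```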
Prognosis using Features
Image-based features [8,9]
Shape features extracted from the segmentation were used in OS prediction. These features were the volume of the WT, TC, and ET, the surface area of the WT, TC, and ET, and age. Since tumor size is a decisive predicting factor for various cancer types, we extracted the volume and surface area of the WT, TC, and ET. The features were extracted from the segmentation maps and input images without any library dependency. Training with fewer features has the advantage that it limits the dimensionality of the feature space; hence, the model did not overfit. However, we found saturation in the performance due to high bias in the model.
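A sketch of extracting these volume and surface-area features from a BraTS-style label map follows; the use of scikit-image marching cubes is our assumption for illustration, since the paper states these features were computed without library dependencies.

```python
# Sketch: volume and surface area per tumor region from a label map.
import numpy as np
from skimage.measure import marching_cubes, mesh_surface_area

def shape_features(seg: np.ndarray) -> dict:
    # Labels follow the BraTS convention: 1 = NCR/NET, 2 = edema, 4 = ET.
    regions = {
        "WT": np.isin(seg, [1, 2, 4]),   # whole tumor
        "TC": np.isin(seg, [1, 4]),      # tumor core
        "ET": seg == 4,                  # enhancing tumor
    }
    feats = {}
    for name, mask in regions.items():   # assumes each region is non-empty
        feats[f"vol_{name}"] = int(mask.sum())  # voxel count (1 mm^3 voxels)
        verts, faces, _, _ = marching_cubes(mask.astype(float), level=0.5)
        feats[f"surf_{name}"] = mesh_surface_area(verts, faces)
    return feats
```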
Radiomics based features [16]
Radiomics-based feature extraction is widely used for disease diagnosis, classification, and survival prediction, e.g., in lung cancer [17], breast cancer [18], and Alzheimer's disease [19]. Along with the size of the tumor, exploring the correlation of other features with survival prediction is crucial to increase the performance of the predictor models. Radiomics features address this problem, allowing the extraction of various statistical, shape, intensity, and texture features from radiographic scans. Radiomics also allows feature extraction from many imaging techniques.
Radiomics features are typically multi-collinear and redundant [20]; hence the correlation between these features needs to be validated for specific real-world problems. We performed feature selection through recursive feature elimination (RFE) [21] to remove weaker features and avoid the curse of dimensionality. RFE is an example of backward feature elimination. With a given estimator, it recursively selects principal features from the feature set, refitting the model until the desired number of selected features is reached. Out of 107 features, we selected the 20 best-ranking features.
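The RFE step could be sketched as follows with scikit-learn; the estimator choice and the placeholder data are assumptions.

```python
# Sketch: recursively eliminate weak features down to the best 20 of 107.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

X = np.random.randn(236, 107)               # placeholder radiomics features
y = np.random.uniform(30, 1500, size=236)   # placeholder survival days

selector = RFE(RandomForestRegressor(random_state=0), n_features_to_select=20)
selector.fit(X, y)
selected = np.flatnonzero(selector.support_)  # indices of the 20 kept features
```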
In summary, the four predictors (ANN, RFR, LR, and GBR) are applied to: i) the seven image-based features, ii) the 107 radiomics features, iii) the 20 principal radiomics features, and iv) only the shape radiomics features. The literature [6,15] also suggests the dominance of shape features, so we additionally used all predictors with only shape features for survival prediction. We trained the models with all resection statuses (i.e., GTR, STR, and NA) given with the dataset to increase the database size and reduce overfitting.
Results and Discussions
Image-based feature prediction is derived from the BraTS 2019 dataset, and the BraTS 2020 dataset was used for radiomics-based feature extraction. The results are shown in Tables 1 to 4. We did not participate in the BraTS 2020 challenge and do not have access to the test dataset; therefore, results are reported on the training and validation datasets.
Image-based feature prediction
We observe that the ensemble-based models, i.e., GBR and RFR, show a better performance on the training and validation datasets. Their consistency in training and validation accuracy suggests that the models do not overfit.
Radiomics feature-based prediction
As mentioned, we extracted 107 radiomic features from the segmentation results of the BraTS 2020 images and fed them as input to four regressor models: ANN, LR, GBR, and RFR. It was observed that the RFR gave the best results, which are shown in Table 2. The other regressors performed poorly compared to the RFR, and even fine-tuning of the parameters did not improve the performance. The possible reasons are the redundant nature of radiomics [20], excessive complexity due to too many features, and too few training samples. Radiomics features are shallow, low-order image features, unable to fully describe distinct image characteristics [22]. Also, when the number of observations is small relative to the number of extracted features, survival prediction is an ill-posed problem [20]. It can be observed from Table 2 that the large feature set is unable to yield state-of-the-art accuracy results. Therefore, we reduced the feature set by applying recursive feature elimination to find the 20 most dominant features. The dominant features obtained using RFE are: age, amount of edema, elongation, maximum 2D diameter slice, sphericity, surface-volume ratio, minimum and maximum intensity, interquartile range, skewness, kurtosis, root mean absolute deviation, cluster prominence, cluster shade, inverse variance, coarseness, and dependence variance. We then applied the four regressors on the dominant feature set, and the performance is noted in Table 3. We observe that the linear regressor with regularisation outperforms all other regression models, with the highest accuracy on the validation dataset. The LR also provides similar accuracy for the training and validation datasets, and its Spearman-R is the highest. In contrast, the RFR achieves the lowest mean square error (MSE) on the validation dataset.
Radiomic shape features based prediction
Reviewing the correlation between radiomics features and survival prediction, we found that radiomic shape features play a crucial role in survival prediction [6,15]. Shape features show significant statistical differences across ROIs [23]. Hence, shape features can capture tumor characteristics related to genetic anomalies and profoundly impact survival prediction. We formulate the hypothesis that shape features profoundly impact survival prediction. In order to validate the hypothesis, we trained predictor models with the following shape features: the amount of necrosis, edema, and enhancing tumor, the extent of the tumor, the coordinates of the tumor, elongation, flatness, axis lengths, 2D diameter row, 2D diameter column, 2D diameter slice, maximum 3D diameter, mesh volume, sphericity, surface area, surface-volume ratio, the centroid of necrosis, and age information. The performance of each predictor model is noted in Table 4. We observe that the GBR and RFR perform better. Specifically, the gradient boosting regressor outperforms all other regression models. In contrast, the LR with regularization achieves the lowest mean square error (MSE) on the validation dataset.
Discussions
It has been observed that classical machine learning techniques performed better than the deep learning neural network-based models for survival prediction. Radiomics-based approaches are well suited for survival prediction. Traditional regression algorithms have better interpretability than deep learning-based algorithms; they have fewer learnable parameters than CNNs and perform better with smaller sample data. A large sample dataset for training is crucial for direct regression from image modalities using CNNs.
The predictors trained on the 107 radiomics features underperformed. The predictors modelled on the 20 principal features improved the performance. Further, to improve performance, we experimented with training predictors on shape features and found a strong correlation with survival prediction. Shape features trained on the consensus model obtained state-of-the-art survival prediction accuracy. It was observed that the gradient boosting regressor model performed better than other classical algorithms because it is an additive ensemble model: with each tree built, the model becomes more expressive. The proposed GBR model is compared with the survival prediction challenge winners of BraTS 2020, and the prediction accuracy for the state-of-the-art methods was obtained from the unranked leaderboard. A performance comparison of the GBR model with the top-ranking models is noted in Table 5. It can be observed that shape-based features with the gradient boosting regressor outperform the best-ranking methods on the validation dataset.
Conclusion
Predicting oncological outcomes is always very tricky due to multiple challenges from clinical and engineering perspectives. In this work, we have evaluated two feature sets over four predictors. We proposed image-based and radiomics-based prognosis approaches for survival prediction. The image-based prognosis models performed well, but their performance saturates beyond a certain point because, with few features, the models could not learn complexity. Similar observations were also made for the 107 radiomics features / 20 principal features and regressor combinations. All of the above combinations exhibited correlation with survival prediction. However, we recommend shape-based features with the gradient boosting regressor as the best combination for survival prediction. Comparing models, it was found that ensemble-based learning models are more useful for survival prediction because of their robustness, whereas the ANN converges quickly compared to classical models but, due to the lack of ample training samples, overfits easily. With the availability of a large dataset and more clinical non-imaging information such as gender and treatment, survival prediction can be made robust. It can further be applied to clinical practice.
Low-frequency theta oscillations in the human hippocampus during real-world and virtual navigation
Low-frequency oscillations (LFO) in the range of 7–9 Hz, or theta rhythm, have been recorded in rodents ambulating in the real world. However, intra-hippocampal EEG recordings during virtual navigation in humans have consistently reported LFO that appear to predominate around 3–4 Hz. Here we report clear evidence of 7–9 Hz rhythmicity in raw intra-hippocampal EEG traces during real as well as virtual movement. Oscillations typically occur at a lower frequency in virtual than in real-world navigation. This study highlights the possibility that human and rodent hippocampal EEG activity are not as different as previously reported, and that this difference may arise, in part, from the lack of actual movement in previous human navigation studies, which were virtual.
Reviewer #1 (Remarks to the Author)
Movement-related theta rhythm in the hippocampus is a robust and dominant feature of the local field potential of experimental animals such as rats and mice. Attempts to understand the computational and information-processing significance of this rhythm have led to numerous fundamental discoveries and insights into hippocampal function (in particular) and neural system dynamics (in general). However, it has been difficult to relate these findings to human hippocampus due to a longstanding controversy about whether humans (and primates in general) show the same type of theta under the same conditions as rodents. This brief communication provides compelling evidence that human hippocampus can display movement related theta, of the same frequency and under the same conditions, as rodents. Although there are some concerns with the data, allowances must be made due to the difficulty in obtaining these data from freely moving patients. I find the overall pattern of results to be convincing and important. These results will help bridge the gap between human and rodent work, assisting the effort to apply the principles learned from rodent work (concentrating on spatial maps and place/grid cells) to understanding the role of the human hippocampus in declarative memory.
1)
Lines 70-72. Where do the numbers 83% and 10% come from? I do not understand how these numbers are calculated from the data in Figure 3A. Please clarify here and throughout (line 76, 88).
2)
Lines 91-92. It is debatable which of the real world conditions most closely matches the virtual navigation condition. The authors should justify this better. The comparison they make has the feel of cherry-picking, in that they chose the real-world condition to compare against the virtual condition based on the pattern of data after the fact and the barely significant result. This is my greatest concern about the paper: that the analysis of Figure 3B was not a principled one but instead was a post hoc decision.
3)
Why does Figure 3A not show results from Stop > RW Recall?
Reviewer #2 (Remarks to the Author)
Summary
In their manuscript, Bohbot et al. present results from a human electrophysiology study of real and virtual navigation. Whereas numerous studies, including many by one of the corresponding authors, have explored the electrophysiological correlates of virtual navigation in humans with recordings from depth electrodes in the hippocampus and other medial temporal lobe regions, the authors claim this is the first to study both virtual and real navigation (where the participant actually is in motion around a room performing a cognitive task) in humans. They found that real-world navigation does induce oscillations (or increased amplitudes) in the hippocampus that are similar to those observed in rats performing locomotion. Interestingly, these oscillations occurred at a higher spectral frequency than the increase in power observed during virtual navigation.
These results are important for a number of reasons. First, it is important to show that real-world navigation induces oscillations in humans in a similar way to rodents, helping to unify the two domains. Second, just looking within humans, it is interesting to see the increased frequency (around 7-9 Hz instead of 1-4 Hz) for real-world vs. virtual navigation. The results for virtual reality replicate many previous results in the literature, which themselves have spawned much debate as to why the frequencies observed in human virtual navigation are lower than the frequencies observed in rodent navigation. This study highlights the possibility that the difference in hippocampal activity between humans and rodents is not as large as previously reported and this difference is likely due to the lack of actual movement in human virtual navigation. Finally, the real-world navigation task is innovative and is directly analogous to a Morris water maze task, which has been used quite often in rodent studies of memory-guided navigation.
That said, as I outline below, there are a number of points that require clarification, including both the narrative and the statistical approaches.
Major Points
I think the issues and importance of the results that I outline in the summary above should be better spelled out in the manuscript itself. The intro as written is fragmented (it reads like a bullet list of semi-related one-line summaries with citations) and does not properly motivate the study or focus the reader on the importance of the results.
-Are the significant regions in grey presented within-subjects in Figure 2 corrected for multiple comparisons? I can see from the supplemental methods that the main analyses are corrected for the three frequency bands, but I don't see how the grey regions were determined.
-Why are counts of significant electrodes the proper way to assess significance? On a related note, how do you calculate the number of electrodes that would be predicted by chance (1 electrode)? Also, please motivate the use of the Chi Square and Fisher's exact tests.
-Were the statistical comparisons performed once across all subjects or within subject and combined across subjects?
-Since the part of the story is about Real vs Virtual Navigation and all participants performed both tasks, shouldn't the comparisons be made within subject and combined across subject, not just as a sum of electrodes? Is the issue that there are only five participants? I think my confusion is arising from a lack of clarity in the methods.
Minor Points
Can you better characterize the movement-related artifacts in the data during the real-world recordings and how you removed them?
Reviewer #3 (Remarks to the Author)
The paper by Bohbot et al. has investigated oscillatory activity in the hippocampus of freely moving patients. As I understand it, the main finding is that clear 7-9 Hz theta activity emerges in at least some patients when they are actively exploring. A lower-frequency effect at 1-4 Hz was observed when (some) patients were navigating in a virtual maze or doing memory-related tasks but not moving. This is a highly important finding since it resolves a debate on the frequency of human hippocampal theta. Previous reports have placed human hippocampal theta in the 1-4 Hz range; however, this finding now demonstrates that real-world navigation-related theta in humans can be close to the exploration-related theta in rats. I do however lack a clear summary in the text of which findings can be considered robust (while the opening of the paper is quite clear about the 7-9 Hz theta, the concluding paragraphs in line 99 and line 119 only mention the 1-12 Hz band). I feel that these exciting findings suffer from a somewhat unclear presentation. Nevertheless, I anticipate that this study will spark future MTL recordings in humans during real-world explorations.
Major concerns:
-The paper was a difficult read despite a conceptually simple study. The presentation does feel somewhat 'cherry picked' and I would encourage the authors to do a more complete data presentation. I'm aware of the practical complications in acquiring these data and the differences between subjects due to the placement of the electrodes etc. I also realize that the purpose of the paper is to provide proof-of-principle for real-world navigation-related high-frequency theta.
-I realize that there was quite some variability between subjects in terms of frequency in relation to task. I propose to be explicit on this and articulate the need to collect more data in patients during real-world navigation to get a complete picture.
-In Figure 2, different conditions are presented for different subjects. Why not show all the conditions per subject?
-The main text should make clear how many patients were used and how many contacts were in the hippocampus. How was the hippocampus identified, given that the electrodes also pass through rhinal cortex?
-Why use a logarithmic scale in the spectra of Fig 2? It makes the 7-9 Hz theta peak look narrower, but I would prefer a linear scale, as more typically used for low-frequency spectra.
- Figure 3A: the table is hard to understand. Explain it in the caption.
Reviewer #1 (Remarks to the Author)
Movement-related theta rhythm in the hippocampus is a robust and dominant feature of the local field potential of experimental animals such as rats and mice. Attempts to understand the computational and information-processing significance of this rhythm have led to numerous fundamental discoveries and insights into hippocampal function (in particular) and neural system dynamics (in general). However, it has been difficult to relate these findings to human hippocampus due to a longstanding controversy about whether humans (and primates in general) show the same type of theta under the same conditions as rodents. This brief communication provides compelling evidence that human hippocampus can display movement related theta, of the same frequency and under the same conditions, as rodents. Although there are some concerns with the data, allowances must be made due to the difficulty in obtaining these data from freely moving patients. I find the overall pattern of results to be convincing and important. These results will help bridge the gap between human and rodent work, assisting the effort to apply the principles learned from rodent work (concentrating on spatial maps and place/grid cells) to understanding the role of the human hippocampus in declarative memory.
R1.1) Lines 70-72.
Where do the numbers 83% and 10% come from? I do not understand how these numbers are calculated from the data in Figure 3A. Please clarify here and throughout (line 76, 88).
We thank the reviewer for his/her positive and constructive comments on our manuscript. We appreciate the opportunity to clarify our approach and methods. The significance counts (in this case, the numerators for the percentages above) came from the total number of unique electrodes within the 1-12 Hz frequency band that showed movement > stopping effects and stopping > movement effects in two-sample one-tailed t-tests (to provide directionality), corrected at p < .01. The total number of hippocampal electrodes across all patients was 30 (in this case, the denominator for the percentages). Thus, in the case of 83%, we had 25 significant electrode contacts, and in the case of 10%, 3 significant electrode contacts. We have clarified this issue throughout the manuscript.
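To make the tallying procedure concrete, a sketch of a per-contact one-tailed contrast at p < .01 follows; the simulated power values and sample sizes are illustrative only, not the recorded data.

```python
# Sketch: one-tailed movement > stopping t-test per contact, tallied over 30.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_sig = 0
for _ in range(30):                                # 30 hippocampal contacts
    move = rng.normal(1.2, 0.5, 100)               # power during movement
    stop = rng.normal(1.0, 0.5, 100)               # power during stopping
    t, p_two = ttest_ind(move, stop)
    p_one = p_two / 2 if t > 0 else 1 - p_two / 2  # one-tailed: movement > stop
    n_sig += p_one < 0.01
print(f"{n_sig}/30 contacts significant ({100 * n_sig / 30:.0f}%)")
```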
R1.2) Lines 91-92. It is debatable which of the real world conditions most closely matches the virtual navigation condition.
The authors should justify this better. The comparison they make has the feel of cherry-picking, in that they chose the real-world condition to compare against the virtual condition based on the pattern of data after the fact and the barely significant result. This is my greatest concern about the paper that the analysis of Figure 3B was not a principled one but instead was a post hoc decision.
We appreciate this concern and agree that we need to provide better justification for the rationale behind this comparison. The real world search condition was selected because it best represented the period during which patients had to search for the sensor to learn its location in the environment. During the real world search, patients had to remember the areas of the environment that were previously visited in order to avoid these areas until the target was found. Similarly, in the virtual environment, patients search for target objects and, in a second phase, have to remember the areas of the environment that were previously visited in order to avoid these areas until the target is found. For this reason, we believe that the two conditions are the most closely matched and result in a valid comparison. This information was added to the Supplementary Methods.
R1.3) Why does Figure 3A not show results from Stop > RW Recall?
We thank the reviewer for this clarification. The Stop > real world (RW) Recall contrast was added to Figure 3A.
Reviewer #2
R2.1) "This study highlights the possibility that the difference in hippocampal activity between humans and rodents is not as large as previously reported and this difference is likely due to the lack of actual movement in human virtual navigation. Finally, the real-world navigation task is innovative and is directly analogous to a Morris water maze task, which has been used quite often in rodent studies of memory-guided navigation." "Are the significant regions in grey presented within-subjects in Figure 2 corrected for multiple comparisons? I can see from the supplemental methods that the main analyses are corrected for the three frequency bands, but I don't see how the grey regions were determined."
We appreciate the reviewer's constructive and positive comments on our manuscript. We also appreciate the chance to explain how we corrected for multiple comparisons in the power spectral density (PSD) plots. Briefly, we did this through a non-parametric permutation approach, consistent with past work.
R2.2) We appreciate the opportunity to address this important issue regarding our methodological approach in this study. Considering significance at each individual electrode (or cell) is generally considered the "gold" standard for statistical analyses in electrophysiology, particularly when analyzing the LFP and when sufficiently large numbers of samples are collected. This approach is also consistent with the vast majority of past approaches to large-sample LFP and cellular recording studies, particularly in humans but also in monkeys. This approach is preferable to using grand averages, which is more common in the scalp EEG literature, and which relies on a standardized electrode placement procedure. This is because intracranial electrodes are placed solely based on clinical determinations, and electrode contacts that happen to be closer to large sources will pick up stronger signal, which will then dominate in the grand average. Considering each electrode individually thus allowed us to better deal with variability in individual electrode placement, while at the same time being consistent with most past approaches to invasive recordings that employ sufficiently large samples for Chi square tests to be meaningful.
R2.3) Also, please motivate the use of the Chi Square and Fisher's exact tests."
We employed Chi Square goodness-of-fit tests to compare the distribution of electrodes across frequency bands to a null distribution in which the counts did not differ across frequency bands. This is a standard application of the Chi square test (e.g., Kreyszig 1993, Advanced Engineering Mathematics) and is consistent with numerous past ECoG and single neuron recording studies in humans. In most cases, we did not see a deviation from an even distribution of electrodes across frequency bands. However, these comparisons did not involve a direct comparison of electrode differences between conditions (e.g., real vs. virtual), which is why we employed Fisher's exact test. In contrast to the Chi square goodness of fit, Fisher's exact test is intended to detect associations among categorical data sets, typically in 2x2 or 2x3 categorical situations. In our case, we wished to test whether there was a greater tendency for real world movement to be associated with higher frequency bands and virtual movement to be associated with lower frequency bands. Because we had counts of significance for each of these categories (1-4 Hz, 4-8 Hz, 8-12 Hz), it was then natural to compare whether there was a differential association (i.e., a dissociation). Our statistical comparison indicated that this was in fact the case.
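A sketch of both tests on hypothetical band counts follows; the counts are illustrative and are not the values reported in the manuscript.

```python
# Sketch: chi-square goodness of fit and Fisher's exact test on band counts.
from scipy.stats import chisquare, fisher_exact

# Goodness of fit: are contacts evenly distributed over 1-4/4-8/8-12 Hz?
counts = [10, 8, 9]                          # hypothetical counts per band
print(chisquare(counts))                     # tests against a uniform split

# Fisher's exact (2x2): low-band vs high-band counts, real vs virtual.
table = [[3, 12],                            # real world: 1-4 Hz vs 8-12 Hz
         [11, 2]]                            # virtual:    1-4 Hz vs 8-12 Hz
print(fisher_exact(table))
```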
R2.4) Were the statistical comparisons performed once across all subjects or within subject and combined across subjects?
All statistical comparisons were performed within electrode within patient, consistent with the majority of approaches to human LFP (please see R2.2). These were then tabulated across electrodes and patients, which we now detail in Supplemental Table 2.
R2.5) Since the part of the story is about Real vs Virtual Navigation and all participants performed both tasks, shouldn't the comparisons be made within subject and combined across subject, not just as a sum of electrodes? Is the issue that there are only five participants? I think my confusion is arising from a lack of clarity in the methods.
We appreciate this concern. The finding that movement shows greater power than stopping periods, first demonstrated by Vanderwolf in the 1960s and widely replicated in both rats and humans, concerns the basic contrast of movement-related theta power being greater than that during still periods. In our paradigm, movement periods varied by condition (e.g., real world searching and virtual moving), as did stopping periods (e.g., stopping during real-world searching for targets vs. stopping in virtual reality). Because all movement > stop contrasts necessarily involved comparison first within a condition, we thus first performed these contrasts and then compared between the different experimental manipulations. The issue with directly comparing real world vs. virtual movement is that it would not take into consideration the corresponding stop periods, when we expect theta to be lower. We have tried to clarify these issues in the methods and provided a new table (Suppl. Table 2) with more detail on the number of electrode contacts for each patient, as requested by the reviewer. This makes it clearer which patients show which effects. Overall, the table makes it clear that the effects we originally reported at the group level are also present at the individual patient level.
R2.6) Can you better characterize the movement-related artifacts in the data during the real-world data and how you removed them?
Large-amplitude movement-related artifacts were removed manually with the Harmonie software after they were identified by visual inspection. The word "manually" was added to the supplementary methods in the sentence: "Movement artifacts were manually removed by visually inspecting raw EEG traces on the Harmonie system."
Reviewer #3 (Remarks to the Author)
R3.1) The paper by Bohbot et al. has investigated oscillatory activity in the hippocampus of freely moving patients. As I understand it, the main finding is that clear 7-9 Hz theta activity emerges in at least some patients when they are actively exploring. A lower-frequency effect at 1-4 Hz was observed when (some) patients were navigating in a virtual maze or doing memory-related tasks but not moving. This is a highly important finding since it resolves a debate on the frequency of human hippocampal theta. Previous reports have placed human hippocampal theta in the 1-4 Hz range; however, this finding now demonstrates that real-world navigation-related theta in humans can be close to the exploration-related theta in rats. I do however lack a clear summary in the text of which findings can be considered robust (while the opening of the paper is quite clear about the 7-9 Hz theta, the concluding paragraphs in line 99 and line 119 only mention the 1-12 Hz band). I feel that these exciting findings suffer from a somewhat unclear presentation. Nevertheless, I anticipate that this study will spark future MTL recordings in humans during real-world explorations.
-The paper was a difficult read despite a conceptually simple study. The presentation does feel somewhat 'cherry picked' and I would encourage the authors to do a more complete data presentation. I'm aware of the practical complications in acquiring these data and the differences between subjects due to the placement of the electrodes etc. I also realize that the purpose of the paper is to provide proof-of-principle for real-world navigation-related high-frequency theta.
We appreciate the reviewer's constructive comments on the manuscript. We have now provided a more complete data presentation in the Supplementary Methods and apologize for any unintended lack of clarity in our original presentation.
R3.2) I realize that there was quite some variability between subjects in terms of frequency in relation to task. I propose to be explicit on this and articulate the need to collect more data in patients during real-world navigation to get a complete picture.
We agree and have added the following to the manuscript: "Note, however, that we did observe some variability in frequency in relation to the task, with the real world condition showing significant contacts within the 1-4 Hz frequency band while the VR condition showed significant contacts within the 8-12 Hz band (Suppl. Table 2). Thus, our findings suggest that while real world vs. VR tend to elicit differences in the predominance of oscillations within higher vs. lower theta bands, both conditions result in changes across the 1-12 Hz band; larger data sets of freely ambulating patients would be ideal in order to better quantify this difference."
R3.3) In Figure 2, different conditions are presented for different subjects. Why not show all the conditions per subject?
A complete data presentation for all participants is now available in the Supplementary Methods.
R3.4) The main text should make clear how many patients were used and how many contacts were in the hippocampus.
This information was added to the main text in the last sentence of the second paragraph and in the first sentence of the third paragraph.
R3.5) How was the hippocampus identified given that the electrodes also go in rhinal cortex?
The target of the stereotaxic electrode placement is the hippocampus; however, this was not confirmed with MRI in any of the patients. Our stereotactic procedure has yielded correct target placement of the electrodes, based on post-implant electrode tracts, in other patients who received MRI after electrode removal. Patients in the current study, however, were tested before the MRI procedure became standard practice. As such, it is not possible to exclude with certainty the possibility that electrode contacts may have been in the entorhinal cortex.
R3.6) Why use a logarithmic scale in the spectra of Fig 2? It makes the 7-9 Hz theta peak look narrower, but I would prefer a linear scale, as more typically used for low-frequency spectra.
As can be seen in the plots below, both are valid options that look just as good. We favor the log scale.
R3.7) To reduce the problem of multiple comparisons and to recognize the canonical frequency bands investigated in other studies, we restricted our quantification of significant electrode contacts to these windows: delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz). We recognize that the highest peak in our spectra reflects a significant difference in the 7-12 Hz range, but we wanted to explain the data in terms of the canonical frequency bands in order to make assertions about these previously established frequency windows.
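For illustration, band power within these canonical windows can be quantified from a Welch power spectral density as sketched below; the sampling rate and the synthetic trace are assumptions, not the recorded EEG.

```python
# Sketch: Welch PSD and power in the canonical delta/theta/alpha bands.
import numpy as np
from scipy.signal import welch

fs = 200                                     # assumed EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)  # 8 Hz + noise

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
for name, lo, hi in [("delta", 1, 4), ("theta", 4, 8), ("alpha", 8, 12)]:
    band = psd[(f >= lo) & (f < hi)].sum()   # relative band power (a.u.)
    print(f"{name}: {band:.3f}")
```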
R3.8)
We appreciate the reviewer's attention to this issue, as our caption did not contain the detail and explanation needed to make it sufficiently clear. The caption of Figure 3A has been expanded with clear details on the breakdown of the table and of the plot in 3B. In addition, we have further specified the column labels in Figure 3A.
Reviewers' Comments:
Reviewer #1 (Remarks to the Author):
The revised manuscript has addressed my concerns only partially.
1) The description of Figure 3A at the bottom of p. 2 still is confusing, as the numbers in the figure do not apparently match the numbers in the text related to the number of unique contacts. It thus is very confusing to understand the statistical tests and how they apply to the data that are actually presented in the figure. I assume that the problem is that the numbers in the figure do not correspond to unique contacts, that is, a single contact can contribute to more than one number in the figure if it is significant in more than one comparison. The authors need to provide a table with the unique contacts to support the statistical tests or do a better job of explaining the data behind the tests in the main text.
2) Line 103 should say row 7, not row 6.
3) Line 105: are the numbers in the parentheses correct? Delta is 10 vs. 3 and alpha is 4 vs. 9 in Fig. 3A, no? If I am incorrect, then this just shows that the text is confusing in how it describes the figure. (I think I understand the figure precisely, just not how it is described in the text.)
4) For a future revision, please indicate exactly which lines in a revised manuscript contain revised text, to make the reviewers' job easier in evaluating the revisions.
5) I am still not convinced by the authors' arguments about which real world comparison is the best match for the VR task, but since they have made their rationale more explicit, we can let readers decide how much they buy it. The data are still convincing.
Reviewer #2 (Remarks to the Author):
After reading all the responses to reviewers, the revised main manuscript and supplementary information, I believe the authors have adequately addressed all the issues I raised (as well as those raised by the other reviewers) in my initial review. They have provided justification for all the statistical methods and added clarification to the manuscript where needed.
Reviewer #3 (Remarks to the Author):
The authors have adequately addressed the concerns of the referees.
R1.1) "The description of Figure 3A at the bottom of p. 2 still is confusing, as the numbers in the figure do not apparently match the numbers in the text related to the number of unique contacts. It thus is very confusing to understand the statistical tests and how they apply to the data that are actually presented in the figure. I assume that the problem is that the numbers in the figure do not correspond to unique contacts, that is, a single contact can contribute to more than one number in the figure if it is significant in more than one comparison. The authors need to provide a table with the unique contacts to support the statistical tests or do a better job of explaining the data behind the tests in the main text."
We apologize for this confusion. The reviewer is correct that unique contacts were used for statistical tests comparing effects across the 1-12 Hz band because a single electrode could be significant in multiple frequency bands. We have now added a column to the table in Figure 3A which contains the total number of unique electrode contacts across the 1-12Hz band for each contrast. We have added the following explanation to the Figure 3A caption and we have ensured that all references to unique electrode contacts vs. effects within specific frequency bands are now clear.
"Because some contacts showed significant effects in multiple frequency bands (e.g., 1-4 Hz and 4-8 Hz), the final column ("unique 1-12Hz") tabulates the total number of unique contacts showing effects across the 1-12Hz band for each comparison." We are grateful to the reviewer because this has helped clarify the numbers used in our analyses. All results remain unchanged.
R1.2) "Line 103 should say row 7, not row 6"
We thank the reviewer for catching this misstatement; the correction has been made.
R1.3) "Line 105: are the numbers in the parentheses correct? delta is 10 vs 3 and alpha is 4 vs. 9 in Fig. 3A, no? If I am incorrect, then this just shows that the text is confusing in how it describes the figure. (I think I understand the figure precisely, just not how it is described in the text.)"
We thank the reviewer for this comment. We have now revised this statement to be clearer: "Using a 2 x 3 Fisher's exact test to assay for associational differences, we found a significant crossover interaction (Figure 3A row 1 vs. row 7, p < .04, Figure 3B). Specifically, this effect appeared to be driven by a greater number of contacts showing increases in the delta band during virtual than real-world searching, compared with the alpha band (delta: 10 vs. 3 contacts; alpha: 4 vs. 9 contacts, Figure 3A)."
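For reference, a sketch of how such a 2 x 3 exact test can be computed; this is not our analysis code. Since scipy.stats.fisher_exact handles only 2 x 2 tables, the sketch enumerates all 2 x 3 tables with the observed margins directly, and the theta column is an invented placeholder, as only the delta and alpha counts are quoted above:

```python
# Exact test on a 2 x 3 contingency table of significant contacts per band.
# Only the delta (10 vs. 3) and alpha (4 vs. 9) counts come from Figure 3A;
# the theta counts are placeholders.
from math import comb
from itertools import product

def fisher_exact_2x3(table):
    """P-value: total probability, under the multivariate hypergeometric
    law with fixed margins, of tables at most as probable as the observed."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)

    def prob(t):
        num = 1
        for j in range(3):
            num *= comb(col[j], t[0][j])
        return num / comb(n, row[0])

    p_obs = prob(table)
    p_val = 0.0
    # Enumerate the first row; the second row is fixed by the margins.
    for a, b in product(range(min(row[0], col[0]) + 1),
                        range(min(row[0], col[1]) + 1)):
        c = row[0] - a - b
        if 0 <= c <= col[2]:
            t = [[a, b, c], [col[0] - a, col[1] - b, col[2] - c]]
            p = prob(t)
            if p <= p_obs * (1 + 1e-9):
                p_val += p
    return p_val

observed = [[10, 5, 4],   # virtual-world contacts: delta, theta, alpha
            [3, 5, 9]]    # real-world contacts (theta counts assumed)
print(f"p = {fisher_exact_2x3(observed):.4f}")
```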
Inclusion Rating by Statistics of Extreme Values and Its Application to Fatigue Strength Prediction and Quality Control of Materials
The inclusion rating method by statistics of extreme values (IRMSE), using the square root of the projected area of inclusions (√area) as the size parameter, enables one to discriminate between current super-clean steels. Moreover, IRMSE enables one to predict the size (√area_max) of the maximum inclusions contained in domains larger than the inspection domain. The statistical distribution of √area_max can be used for the quality control of materials and for the prediction of the scatter band of fatigue strength. Practical procedures for inclusion rating and for predicting the scatter band of fatigue strength are shown.
Introduction
With the increase in the cleanliness of steels, conventional inclusion rating methods are no longer as useful as before, because they cannot discriminate the cleanliness of new clean steels. Although the cleanliness of steels has markedly improved in the last two decades, the fatigue strength of recent clean high-strength steels cannot attain the ideal value expected from their high static strength. Nonmetallic inclusions are the predominant cause of lower fatigue strength even for such clean high-strength steels. Thus, in order to predict fatigue strength behavior and to evaluate quality, we need a new inclusion rating method relevant to recent super-clean steels. The inclusion rating method based on the statistics of extreme values [1] is most relevant for this purpose. In the following, we call this method the Inclusion Rating Method by Statistics of Extreme Values (IRMSE).
In this study, we first show that if we choose an appropriate size parameter for inclusions, the sizes of inclusions obey extreme value statistics. The appropriate size parameter is the square root of the projected area of the maximum inclusion contained in a standard inspection area or volume, √area_max. Second, we predict the size of the maximum inclusion which may be contained in an area or volume larger than the standard inspection area. Lastly, we use the size parameter √area_max to predict the scatter band of the fatigue strength of hard steels.
The merits of IRMSE, in comparison with conventional methods, are (1) to distinctly discriminate the cleanliness of recent super-clean steels, and (2) to predict the size of larger inclusions contained in a domain larger than the inspection domain. This method is useful for the quality control of materials and for the improvement of steel-making processes. It also enables one to predict the scatter of the fatigue strength of a large number of mass-produced products.
Nonmetallic Inclusions as a Fatigue Fracture Origin
Figure 1 shows an example of a nonmetallic inclusion which was observed at the fatigue origin of a bearing steel under a rotating bending fatigue test. If this inclusion had not existed in this specimen, the fatigue strength of the specimen would have been higher than the applied stress, σ = 1078 MPa. Since the size and location of nonmetallic inclusions scatter randomly, the fatigue strength of high-strength steels naturally scatters. Although there has been a firm opinion that the chemical composition and shape of nonmetallic inclusions substantially influence the fatigue limit, Murakami et al. [2-5] have shown the incorrectness of this conventional opinion through detailed experiments and analyses, and have reported distinct experimental evidence that the size of an inclusion (defined by √area) is the most crucial geometrical parameter. It is empirically known that the intrinsic fatigue strength of steels is determined by the hardness (HV) of the microstructure. For steels with HV < 400, the nonmetallic inclusions contained in current commercial steels are not detrimental, and we have the following empirical formula:

σ_w = 1.6 HV (1)

where σ_w is the fatigue limit (MPa) and HV is the Vickers hardness (kgf/mm²). However, for steels with HV > 400, the effect of inclusions reveals itself, and the intrinsic or ideal fatigue limit given by Eq. (1) cannot be attained. The fatigue strength depends on the size (√area) and location of the fatal inclusion and on the HV of the matrix. Murakami et al.'s [6-9] fatigue limit prediction equations are classified into three categories depending on the location of the fatal inclusion (see Fig. 2). Fatigue limit for a surface inclusion [Fig. 2(a)]:

σ_w = 1.43 (HV + 120)/(√area)^(1/6) (2)

Fatigue limit for an inclusion just in touch with the free surface [Fig. 2(b)]:

σ_w = 1.41 (HV + 120)/(√area)^(1/6) (3)

Fatigue limit for an interior inclusion [Fig. 2(c)]:

σ_w = 1.56 (HV + 120)/(√area)^(1/6) (4)

Since, for a constant value of √area, an inclusion is most detrimental when it exists just in touch with the free surface of a specimen, we can use Eq. (3) in combination with the maximum size √area_max obtained by IRMSE to predict the lower bound (σ_wl) of the scattered fatigue strength of many specimens or machine elements.
Inclusion Rating of Various High Strength Steels by Statistics of Extreme Values
(1) A section perpendicular to the maximum principal stress is cut from the specimen. After polishing with No. 2000 emery paper, the test surface is mirror-finished with a buff.
(2) A standard inspection area S0 (mm²) is fixed. Generally, it is advisable to take a microscope picture of an area approximately equivalent to S0. In the area S0, the inclusion of maximum size is selected. Then, the square root of the projected area, √area_max, of this selected inclusion is calculated. This operation is repeated n times (in n areas S0) (see Fig. 3).
(3) The values of √area_max are classified, starting from the smallest, and indexed as √area_max,j (with j = 1, ..., n). We then have the relation √area_max,1 ≤ √area_max,2 ≤ ... ≤ √area_max,n. The cumulative distribution function F_j and the reduced variates y_j are then calculated from the equations

F_j = j × 100/(n + 1) (5)

y_j = −ln[−ln(j/(n + 1))] (6)

The data are then plotted on extreme value probability paper. The point j has an abscissa coordinate of √area_max,j, while the ordinate axis represents either F_j or y_j. An example of the resulting plot is shown in Fig. 4.
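A minimal computational sketch of steps (2)-(3) and of the extrapolation to a larger domain, assuming n extreme values have already been measured; the sample values are illustrative, not measured data:

```python
# IRMSE plotting procedure, Eqs. (5) and (6), with Gumbel extrapolation.
# The eight sqrt(area_max) values are illustrative placeholders.
import numpy as np

sqrt_area_max = np.sort([12.0, 15.5, 18.2, 21.0, 24.8, 28.3, 33.1, 39.6])  # um
n = len(sqrt_area_max)
j = np.arange(1, n + 1)

F = j * 100.0 / (n + 1)              # Eq. (5): cumulative distribution, %
y = -np.log(-np.log(j / (n + 1)))    # Eq. (6): Gumbel reduced variate

# Least-squares fit of the extreme value line: sqrt(area_max) = a*y + b.
a, b = np.polyfit(y, sqrt_area_max, 1)

# Predict the maximum inclusion expected in a domain T times larger than
# the standard inspection domain S0 (return period T).
T = 1000.0
y_T = -np.log(-np.log((T - 1.0) / T))
print(f"predicted sqrt(area_max) for T = {T:.0f}: {a * y_T + b:.1f} um")
```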
Figure 4 shows the inclusion ratings by IRMSE for two kinds of super-clean bearing steels, SUJ2(N) and SUJ2(H). The total oxygen contained in these steels is 8 ppm for SUJ2(N) and 5 ppm for SUJ2(H). This kind of information enables one to discriminate quantitatively the difference among the cleanliness levels of the same kind of material produced by different companies or produced by one company at different periods. Thus, this information is useful for the quality control of materials and the improvement of the steel-making process. It is not a priori evident to what extent the extreme values √area_max of inclusions contained in various steels follow extreme value statistics. However, Murakami et al. [3,5-11] have shown many examples of measurements which obey the statistics of extreme value theory. Uemura and Murakami [12] carried out a three-dimensional numerical simulation to find the statistical distribution of the extreme values √area_max of inclusions distributed in a constant volume with a size (D) distribution of the type f(D) = (1/m) exp(−D/m), where m is the mean value, and they confirmed the validity of IRMSE (Fig. 5). In addition, they indicated the quantitative difference between two-dimensional and three-dimensional measurements, though the difference virtually vanishes with increasing inspection domain.
Application to Prediction of Scatter Band of Fatigue Strength
Figure 6 illustrates the shape and dimensions of a tension-compression fatigue specimen [13]. The material used is the tool steel SKH51. The chemical composition is shown in Table 1, and Table 2 shows the mechanical properties. Figure 7 shows the extreme value distribution of √area of the inclusions found at the fracture origins of 34 specimens. The data in Fig. 7 are the extreme values obtained by the fatigue test, not by the two-dimensional metallographic method described in Sec. 3. Figure 8 indicates the locations of these inclusions on the fracture surface. If a tension-compression fatigue test is not performed correctly, that is, if specimens are subject to a bending moment due to bad alignment or curving of the specimen axis, nonmetallic inclusions existing near the free surface are likely to appear as the fracture origin on the fracture surface [14]. In such a case, unusually low fatigue strength is likely to be obtained. Since the fatigue fracture origins shown in Fig. 8 are distributed randomly over the section of the specimen, these data may be considered valid for the statistical analysis. However, it should be noted that when surface inclusions became the fracture origins, the data were not plotted in Fig. 7, because such inclusions are more detrimental than an inclusion of the same size existing internally, and accordingly they may be a little smaller than the exact maximum inclusion.
In the case of the data of Fig. 7, the volume of the test part of one specimen (Fig. 6) corresponds to one inspection domain, and there are 34 extreme values in Fig. 7. Therefore, Fig. 7 can be used for predicting the expected maximum size of the inclusion which may be contained in more specimens than those used in the fatigue tests. For example, an inclusion having √area_max = 138 μm is expected to be contained in 100 specimens (N = 100). Combining this √area_max (= 138 μm) with Eq. (3), the lower bound (σ_wl) of the fatigue strength of 100 specimens can be predicted.
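As a worked sketch of this prediction (the hardness value below is an assumed placeholder; the actual hardness of SKH51 is given in Table 2):

```python
# Lower-bound prediction from Eq. (3), for an inclusion just in touch with
# the free surface. sqrt(area_max) = 138 um is the extreme value prediction
# for N = 100 specimens; HV = 700 is an assumed placeholder.
def sigma_w_lower(hv, sqrt_area_max_um):
    """Lower bound of the fatigue limit (MPa) from Eq. (3)."""
    return 1.41 * (hv + 120.0) / sqrt_area_max_um ** (1.0 / 6.0)

print(f"sigma_wl = {sigma_w_lower(hv=700.0, sqrt_area_max_um=138.0):.0f} MPa")
```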
Figure 9 compares the scatter observed in experiments with the predicted lower bound σ_wl of the scatter band. The prediction is in good agreement with the experiments. The prediction of the lower bound of fatigue strength explained above can be used for the quality control of machine elements which are mass-produced and cannot be tested individually.
The data shown in Fig. 7 offer reliable information on the inclusions expected to be contained in other specimens. However, obtaining the data shown in Fig. 7 requires the preparation of many precise specimens and time-consuming fatigue tests. To avoid this inconvenience, the author has proposed the alternative two-dimensional method explained in Sec. 3.
[Fig. 9. Comparison between the experimental results and the lower bound of fatigue strength predicted on the basis of Eq. (3) and the maximum size of inclusion (tool steel, SKH51).]
A sufficient number (N) of inspection domains (inspection areas) necessary to reliably predict √area_max for more specimens or larger areas should depend on the materials to be inspected and on the inspection area S0 observed by the image processor combined with an optical microscope. From the author's experience, it is recommended that N be larger than 40 for S0 = 0.031 mm².
Several Japanese industries have already put the method proposed in this study into practice [15].
Conclusions
(1) If we define the size of a nonmetallic inclusion contained in commercial steels by the square root of its projected area, √area, the maximum values √area_max in a definite inspection domain obey the statistics of extreme value theory.
(2) The inclusion rating method by the statistics of extreme values (IRMSE), based on √area_max, can be used as a new inclusion rating method. IRMSE enables one to discriminate distinctly between recent super-clean steels, whereas conventional inclusion rating methods are no longer valid for evaluating the cleanliness of new clean steels.
(3) IRMSE is useful not only for the relative evaluation of materials but also for the prediction of the expected maximum size of inclusions contained in a domain larger than the inspection domain. The value of √area_max can be used with the fatigue strength prediction equation to predict the scatter band of the fatigue strength of high-strength steels.
Fig. 1. A typical example of an inclusion observed at the center of a fatigue fracture origin [super-clean bearing steel, SUJ2(N)].
Fig. 3. The practical procedure for implementing the inclusion rating by statistics of extreme values; the details of this method are reported in [1].
Fig. 5. Numerical simulation of the inclusion rating by statistics of extreme values on materials with an inclusion size distribution of the type f(D) = (1/m) exp(−D/m).
Fig. 6. Shape and dimensions of the tension-compression fatigue specimen (tool steel, SKH51).
Fig. 7. Statistical distribution of the extreme values of √area, the maximum size of inclusion at the center of the fracture origin (tool steel, SKH51).
Fig. 8. Relationship between the size (√area) and location of inclusions at the center of the fracture origin (tool steel, SKH51).
Developing Eighth-Grade Students’ Computational Thinking with Critical Reflection
As computer science has become a vital power in facilitating the rapid and sustainable development of various fields, equipping everyone with computational thinking (CT) has been recognized as one of the core pillars supporting the sustainable development of individuals and our digital world. However, it remains challenging for secondary school students to assimilate CT. Recently, critical reflection has been proposed as a useful metacognitive strategy for regulating students' thinking to solve current and future problems. In this study, a quasi-experiment was conducted to investigate the role of critical reflection in advancing eighth-grade students' CT. The participants were 95 eighth-grade students, comprising an experimental group (n = 49) and a control group (n = 46). The students' CT was evaluated based on their learning performance in computational concepts, computational practices, and computational perspectives. The results showed that critical reflection, compared with traditional instruction from teachers, could significantly advance eighth-grade students' CT. Interestingly, the two groups showed significantly different learning performance in computational practices during the learning process. Furthermore, interaction with peers and instructors played an essential role in helping students engage as active agents in critical reflection. The results of this study emphasize the need to develop students' CT by practicing critical reflection in eighth-grade education.
Introduction
As Goal 4 of the Sustainable Development Goals highlights, providing everyone with quality education is an indispensable pillar to support the sustainable development of our world [1]. In line with the Sustainable Development Goals, computational thinking (CT) has been globally integrated into K-12 education to build a solid foundation for the future success of individual students and the sustainable development of the world [2,3], as CT has been recognized as an essential skill for everyone to solve problems effectively in our computer science-driven world [4][5][6][7].
Meanwhile, considerable efforts have been made to develop secondary school students' CT. For example, the Computer Science Teachers Association and the International Society for Technology in Education (CSTA & ISTE) suggested steps to develop K-12 students' CT, including identifying problems, organizing and analyzing data, representing the data, developing automated solutions, implementing and evaluating the optimal solution, and generalizing and transforming solutions [8]. Moreover, the three-dimensional framework of CT has been widely adopted to develop and evaluate students' CT along the dimensions of computational concepts, computational practices, and computational perspectives [9]. This integrated framework has attracted much attention in pedagogical research because it not only emphasizes the conceptual knowledge and practical skills of CT but also attaches importance to the social attribute of CT [10,11].
Literature Review
This study reviews four areas in the literature: (1) defining CT; (2) defining critical reflection; (3) integrating CT into secondary school education; and (4) learning CT through critical reflection.
Defining CT
CT has been widely regarded as an essential ability to solve problems by applying basic knowledge of computer science in technological societies [2,28]. Initially, CT was defined as using the fundamental concepts of computer science to solve problems, design systems, and understand human behaviors [5].
Meanwhile, some operational definitions of CT have been proposed. For example, the CSTA & ISTE put forward that CT is a particular set of problem-solving skills that includes identifying the problem, organizing and analyzing data, representing the data, developing automated solutions, implementing and evaluating the optimal solution, and generalizing and transforming solutions [8]. Adding to this, Brennan and Resnick proposed the three-dimensional framework of CT. This framework argues that students' CT should be developed and evaluated in three dimensions: (1) computational concepts (the concepts that students often use in programming, such as sequences, loops, and events); (2) computational practices (the practices that students develop when they engage with computational concepts, such as iterating, debugging, and abstracting); and (3) computational perspectives (the perspectives students form about themselves and about the world, such as recognizing that computation is a medium of creation, realizing the power of working with others, and feeling empowered to ask questions) [9]. This integrated framework has been used frequently to cultivate K-12 students' CT, because it not only emphasizes the conceptual knowledge and practical skills of CT but also attaches importance to the social attribute of CT [10,11].
To sum up, we draw three conclusions. First, CT is an important ability to solve problems. Second, generalizing solutions is an integral part of learning CT. Third, students need not only to learn computational concepts and computational practices but also to develop computational perspectives.
Defining Critical Reflection
In the context of thinking, critical reflection is defined as a process of thinking about the conditions and the effects of what a person is doing or has done [29]. It reveals the influence and function of thought and action on people [30]. To achieve critical reflection, students need to complete the following steps. First, students need to examine noticeable details of the problem-solving process. Then, they are required to judge the reasons for their decisions from different perspectives. Following this, they re-cast prior experiences and knowledge into other contexts. Finally, they should develop new plans for solving similar problems in the future [31][32][33]. Therefore, researchers believe that a statement can be measured as critical reflection only when it shifts from a description of events to a critical report that analyzes, integrates and reconstructs experiences, and ultimately produces a new perspective [34,35].
In the domain of education, the theory of experiential learning recognizes critical reflection as an essential metacognitive strategy for acquiring meaningful learning outcomes from specific experiences [20,21]. Researchers believe that metacognition refers to thinking about one's own thinking, which is a crucial component in controlling and regulating one's thinking, especially problem-solving [20,24,36]. If problem-solvers can be aware of their cognition and can use this awareness to control and regulate their problem-solving process, they will have a better chance of success [37].
Integrating CT into Secondary School Education
In the past years, CT has been integrated into secondary school education to build a solid foundation for students' future success and to support the sustainable development of our computer science-driven world [2,3,6]. Consequently, various learning strategies have been used to develop students' CT. In general, there are two types of learning strategies for developing CT with secondary school students.
The first and most popular is student-centered learning. Motivated by constructionism [16], researchers have used the design-based learning strategy to develop CT in eighth-grade students [17]. Students self-explored the applications of CT by engaging in multiple cycles of design, evaluation, and redesign to make computer games about science topics iteratively in a programming environment. The results showed that design-based learning activities enable students to master computational concepts and computational practices. Along similar lines, a pedagogical strategy based on agile software engineering methods has been adopted to develop secondary school students' CT [14]. This strategy allows students to explore, iterate, and experience computational practices in different settings. The results showed that students' CT was enhanced by exploring these multidisciplinary activities. Moreover, collaborative learning strategies have been integrated into student-centered learning activities. A three-year research project claimed that co-designing mobile games about social change could advance secondary school students' understanding of computational concepts and computational practices [18].
The second is teacher-directed learning. For instance, Saritepeci conducted a study that explored the effect of completing programming tasks on CT development with ninth-grade students [15]. The teacher introduced computational concepts and sample problem-solving activities to the students. Then, the teacher directed the students to collaborate on computational practices. The results showed that the students' CT had been significantly improved, while interacting with others had a positive effect on developing these students' computational perspectives about themselves and their relationships with others. Moreover, teacher-directed learning has also been used to develop interdisciplinary CT in ninth-grade students [38]. In this pedagogical process, the teacher demonstrated examples, guided the students to discuss, and provided the students with a series of pre-designed problems. As a result, the students' CT and programming skills were improved.
Overall, gaining computational experiences through self-practice or observation has been a main learning activity in cultivating CT with secondary school students. These cognitive experiences not only help students understand computational concepts and computational practices, but also shape students' computational perspectives [9].
Meanwhile, many studies have been conducted to explore possible means of assessing students' CT. In general, there are four ways of assessing CT. The first and most popular way is works analysis. For example, a visualization tool named Scrape (http://happyanalyzing.com/, accessed on 9 October 2021) has been developed and used to automatically present the computational concepts used within Scratch projects [9]. Moreover, some researchers analyze works manually from the four aspects of content, creativity, artistry, and technology to evaluate students' learning performance in computational concepts and computational practices [11]. The second way is using traditional quizzes, such as multiple-choice items, to evaluate students' learning performance in computational concepts and computational practices [39]. The third way is conducting artifact-based interviews or self-reports to assess computational concepts, computational practices, and computational perspectives [9]. The fourth way is using scales. For instance, a five-point Likert scale was developed by Korkmaz et al. [40]. Computational thinkers use this scale to evaluate their creativity, algorithmic thinking, cooperation, critical thinking, and problem solving.
However, generalizing and transferring CT to other contexts remains a challenge for secondary school students [9,18]. In addition, more attention should be paid to the ways of developing K-12 students' computational perspectives [8,19].
Learning CT through Critical Reflection
In recent years, some researchers have discussed the relationship between CT and critical reflection. They argued that CT is exactly a particular set of problem-solving abilities, while critical reflection could improve students' problem-solving abilities [5,[22][23][24]. Moreover, generalizing solutions and forming computational perspectives are essential parts of CT, while critical reflection features prominently in promoting the generalization of solutions and the formation of new perspectives [8,25].
In view of these arguments, critical reflection has been employed as a metacognitive strategy for developing CT-related knowledge in higher education. For instance, university students working in programming and multimedia systems development were required to blog about their critical reflections on their learning process. The students were prompted to examine what problems they have encountered, how they solved the problem, why the solution worked or not, and what they should do next time. The results showed that these activities could help the participants develop computational practices, such as finding problems and making revisions [27]. Moreover, Kwon and Jonassen's study showed that critical reflection had positive effects on helping university students understand computational concepts and complete computational practices in the domain of programming [23]. In their study, participants were asked to explain and critically judge their decisions. Similarly, Miller et al. integrated critical reflection into university students' CT learning activities [26]. The students were provided with critical reflection prompts to think about their CT learning activities at a more abstract level. The results confirmed that critical reflection enabled the students to transfer CT to more general problem-solving contexts.
However, whether and how critical reflection can improve CT with secondary school students remains to be explored. At present, only noncritical reflection has been integrated into the CT learning activities of secondary school students. These noncritical reflections stated the noticeable details of the problem-solving process but did not evaluate the solutions. For instance, Zhong et al. stated that the reflective card, which was designed for students to report their noticeable errors after they finished computational practices, was a useful tool for helping teachers discover students' learning barriers [11]. Similarly, reflective journals have been adopted to reveal secondary school students' perspectives on CT training activities and the problems they encountered [18]. In particular, the students were assigned to individually report the successes and difficulties they faced, as well as the activities they enjoyed and disliked. Although these noncritical reflections help teachers detect students' immediate and present learning difficulties, they have a trivial effect on activating and enhancing metacognition, which is essential for controlling and regulating learners' thinking [24,41].
In light of the above findings, this study integrated critical reflection into the learning process of secondary school students to help them gain an in-depth understanding of computational concepts and computational practices and to develop computational perspectives. Specifically, the research questions of this study are as follows:
1. Do students who engage in critical reflection have better learning performance in CT than those who do not?
2. What are the participants' perceptions of engaging in critical reflection?
Method
We conducted a quasi-experiment in a secondary school. The participants' learning performance and perceptions of the learning activity were documented and analyzed.
Participants
The participants were 95 eighth-grade students aged 13 to 15 at a secondary school in China. Two classes were randomly selected from 14 eighth-grade classes and randomly assigned to an experimental group and a control group. Before the experiment, all of the participants had basic computer literacy, but no programming or CT learning experience, and no knowledge of computational concepts or computational practices. Excluding those participants who were absent from some lessons or tests because of sick leave and so on, there were 43 effective participants in the experimental group and 41 effective participants in the control group.
In this study, the two classes were taught by the same teacher. The teacher had four years of teaching experience at the secondary school level and one year of programming experience. In particular, the teacher had two years of experience teaching CT in this secondary school. Before the experiment, we gave the teacher a training session in teaching CT and critical reflection. The training session was divided into two stages. In the first stage, the teacher learned about the resources provided. These resources included: (1) What are CT and critical reflection? (2) Why are CT and critical reflection necessary? (3) How to teach students CT and critical reflection? In the second stage, the teacher, along with a professor who studied CT and a professor who studied critical reflection, spent a week testing the training effect. The teacher was required to provide a critical reflection-integrated CT lesson plan for secondary school students. Then, the professors gave comments and suggestions. Following this, the lesson plan was discussed until the two professors and the teacher reached agreement. During the discussion, the teacher was asked about her design rationale. For example, what are the key steps in critical reflection? How are these steps arranged in this lesson plan? Finally, the teacher conducted a trial lesson based on the lesson plan and the professors provided feedback to enhance the teacher's learning outcomes.
Course Setting
In this study, the setting for all learning activities was an information and computer science course that lasted 13 weeks. This course was compulsory for all eighth-grade students and it was held weekly. Each lesson lasted 40 min during the school day. Each student was provided with a computer in the school's computer lab.
According to the syllabus for this course, students should master basic computational concepts and computational practices, and they should be able to apply CT to create a digital work and to solve problems. Table 1 shows the content that students needed to learn, which was designed based on the three-dimensional framework of CT [9]. More specifically, the programming tool selected was Small Basic (https://smallbasic-publicwebsite.azurewebsites.net, accessed on 9 October 2021). Thirteen examples of how to create a digital work using Small Basic were also provided to the students for reference (as listed in Table 2). In this study, the students in the control group followed the teacher's original teaching plan, which was designed as suggested in the literature: (1) computational experience was essential [3,10,12]; (2) collaborating with others was allowed [9,15]; and (3) iterative design was considered [17,18]. In each lesson, the teacher began by giving a representative example to illustrate new computational concepts, computational practices, and programming skills. Then, the teacher provided the students with supporting materials and learning objectives. After gaining a preliminary understanding of the new knowledge, the students were given time to practice with the sample and explore different applications of the new knowledge. The students could ask each other for help. Meanwhile, the teacher walked around to provide learning assistance and keep track of the students' problems and learning performance. The practices and explorations were used to help students gain a better understanding of conceptual knowledge and acquire practical skills [42]. Moreover, creating digital works and interacting with peers could help the students develop computational perspectives, such as regarding computation as creating and recognizing the power of creating with others [9]. Near the end of each lesson, the teacher gave a summary of the students' learning performance to enhance their learning outcomes. In addition, all students were required to submit their works and concise reflection reports at the end of each lesson. The three prompts helping the students in the control group complete their concise reflection were: Have you accomplished at least one work? What was the hardest part for you? Which class activity has helped you the most?
To answer the research questions of this study, we redesigned the learning activities for the experimental group. Instead of giving a summary directly and assigning students to write the concise reflection reports, the teacher guided the students in the experimental group to engage in critical reflection at the end of each lesson.
Due to the class time limit, the teacher randomly selected about two students in each lesson. Each selected student was required to share learning experiences and reflections by presenting an oral report. As previous research suggested, prompts and critical feedback from others can stimulate critical reflection [43,44]. Therefore, the teacher was required to prompt (see Appendix A for examples of prompts) the selected students to report the following progressive reflection topics: What was the most difficult problem or important issue? How was the problem solved? What computational concepts or computational practices were used in this solution? Why did the problem occur and why did the solution work? Answering these questions could enhance the students' ability to find problems and abstract, which are important computational practices. Previous research has shown that secondary school students struggle to complete critical reflection independently, and interacting with the entire class is the most effective strategy for helping them [43,45]. Thus, all the students were then prompted by the teacher to collaborate on critical reflection by discussing the following. First, they openly shared perspectives on the problem and solution so that different perspectives could be considered by everyone, and so that underlying assumptions could be challenged (e.g., whether the computational knowledge was appropriate to solve the problems, and whether there were other possible solutions and why). Second, in order to generalize the computational knowledge, the students were then prompted to think of the computational knowledge at a more abstract level (e.g., what similar problems could they solve with this solution). By linking to other experiences or existing knowledge structures, students could gain a broader perspective and expand understanding of computational knowledge. With the aim of guiding the students to use the solutions flexibly in appropriate situations, the teacher then prompted them to summarize the advantages and disadvantages of these solutions, and to think about how they would handle similar challenges in the future.
Procedure
Before the experiment, we obtained the participants' consent to data collection. In addition, the background questionnaire was used to collect the students' personal information and CT-related learning experience. Figure 1 shows the next steps of the experiment. First, every student was required to complete a test of computer operation and a scale for measuring computational perspectives individually. The reasons for this requirement are: (1) The background questionnaire results showed that there was no difference between the experimental group and the control group in terms of computational concepts and computational practices. Before the experiment, none of the students had knowledge about computational concepts or computational practices. (2) In the learning environment where programming was the primary learning approach, the ability to operate computers could influence the students' learning performance in computational practices. In this study, creating digital works by programming was the main learning activity and evaluation method, which was also an important and common way to learn and evaluate computational concepts and computational practices. Since the ability to operate computers could affect students' performance in programming, we could regard this ability as one factor that influences students' learning performance in computational practices. Thus, the test of computer operation was carried out. (3) Students might have computational perspectives, even if they have no knowledge about programming and CT. For example, they might have recognized the power of creating with others. Thus, the students were required to complete the scale for measuring computational perspectives. After the students had finished the pre-test and the pre-scale, the teacher introduced the learning objectives to all the students.
The experiment comprised 13 lessons. In each lesson, the teacher guided the students under the aforementioned course setting. Taking the third lesson as an example, the teacher used the task of painting regular polygons as an example to illustrate the computational concepts (i.e., loops and variables). The teacher taught the students about the usefulness and limitations of loop statements. She also re-coded the program of painting a regular triangle (i.e., the example from the first lesson) with a loop statement to enhance the students' understanding. Following this, the teacher provided the students with supporting materials and learning objectives. Then, all the students were given 25 min to participate in computational practices. During the whole time period, each student was required to create a digital work using their newly acquired computational concepts. Meanwhile, the students discussed with others and revised their works. To create a digital work, the students must understand the computational concepts and complete a series of computational practices. For instance, they should abstract the main points from their ideas first. Then, they had to experiment and test. Moreover, they needed to reuse previous works when necessary. As suggested by the literature, these practices could help students gain a better understanding of computational concepts and enable them to become more familiar with computational practices [17,18]. Furthermore, creating different digital works could help students develop a computational perspective regarding computational tools as a vehicle for creation [9]. In addition, working with classmates helped the students develop computational perspectives, such as recognizing the power of creating with others [15].
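For illustration, a rough Python-turtle analogue of the lesson's polygon example; the paper does not reproduce the original Small Basic code, so this sketch only mirrors the idea of replacing repeated drawing statements with a loop and a variable:

```python
# A rough analogue of the regular polygon lesson: one loop and one variable
# replace repeated forward/turn statements. Not the original Small Basic code.
import turtle

def draw_regular_polygon(sides, length):
    """Draw a regular polygon; the turn angle is derived from `sides`."""
    angle = 360.0 / sides
    for _ in range(sides):
        turtle.forward(length)
        turtle.left(angle)

draw_regular_polygon(sides=3, length=120)  # the regular triangle of lesson 1
turtle.done()
```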
At the end of each lesson, for the control group, the teacher gave a summary of the students' learning performance to enhance their learning outcomes. Following this, all the students completed a concise reflection report (as stated in Section 3.2) individually.
For the experimental group, the students engaged in critical reflection. Prompted by the teacher, the students reflected critically on their experiences and knowledge. Below is an example of critical reflection on the computational concept of parallelism from lesson 11:
Teacher: Just as student A [all student names are anonymous] stated, his assumption is that parallelism is an efficient problem-solving strategy, and this is why he and student B decided to debug his program separately. What do you think about this? For example, what do you think is good about this, or what might not go so well?
Student C: I think it is a good idea. Doing different things at the same time saves time.
Student D: Parallelism is a really good computational idea. However, I prefer to work together, because when we work together, we can discuss what the possible problems are. It is more targeted.
Teacher: Exactly. There are always many ways to solve a given problem. Does anyone have any other ideas?
Student E: It looks like they can discuss how to solve the bug first. I mean, they can split the work. For example, student A checks the first half and student B checks the second half. Then, they go through their share. Would that not be more effective?
Critical reflection on the solutions from different perspectives could help the students develop a comprehensive perspective on themselves and their initial works [46]. By recasting prior experiences and knowledge into other contexts, critical reflection could enable students to come up with more problem-solving methods [31,32]. Thus, the students were then prompted to link parallelism to other everyday scenarios.
Student F: On my way to school, I memorize English words while walking.
Student G: I went shopping with my mother in the supermarket yesterday. There were too many people in the checkout line. Then, I waited in line while my mother picked out things so we could save time.
To complete critical reflection, students need to not only generalize about the solution, but also consider its conditions and limitations [29]. Consequently, the teacher instructed the students to think about and summarize the conditions and limitations of parallelism.
Student H: Yes. I agree with you. But my mother may not agree, because she does not think it is safe. If I am standing in line alone, she is afraid I will be kidnapped by bad guys. But if she goes out [shopping] with my dad, they would do what you do.
Teacher: Parallelism is a good computational idea. You also recommended many cases. Just as student E suggested, parallelism works better when combined with other strategies. On the other hand, student H also pointed out that parallelism is not available in some cases. Now, we come to the conclusion that not all situations are suitable for parallelism, and similarly, we can combine this idea with other strategies to solve problems more efficiently. Right?
All students: Yes.
Finally, the students were required to develop new plans for solving similar problems in the future [34].
Teacher: So, student A, what do you think of your solution now?
Student A: Frankly, I did not think about parallelism that much. Student B and I just thought it worked. My classmates' suggestions are very enlightening.
Teacher: What would you do when you run into similar problems in the future?
Student A: I think I will notice the constraints first. Just like when we write programs, we use the statement of "if . . . else . . .". I also have to consider different situations when dealing with everyday problems.
Teacher: Exactly!
After the experiment, all the students individually completed the post-test and post-scale.
Instruments
To collect data regarding CT cultivation, the following instruments were used: The background questionnaire: This questionnaire had three fill-in-the-blank items aiming to collect the participants' demographic characteristics, two single-choice items and one fill-in-the-blank item aiming to collect the participants' programming experience, and three single-choice items aiming to collect the participants' CT learning experience.
Test of basic knowledge of computer operation: This test was developed by a professor and four experienced teachers who had at least four years of experience teaching information technology courses in secondary schools. In order to ensure the content validity of this test, the items were developed referring to the method mentioned in the literature [40,47]. Firstly, the four experienced teachers developed their own items individually. Secondly, the items were discussed and revised in terms of clarity, accuracy, and content relevance. Following this, the items were added to the item pool. Then, the professor and one of the four teachers selected items from the pool separately. After this, the selected items were compared and reselected until agreement was reached. Finally, all the selected items were examined by the authors and the teacher participating in this study. Overall, this test covered two themes: (1) Common operations, such as creating a new folder and copying text. In this study, these operations were commonly used by the participants who learned CT primarily by making programming works. (2) Skills to solve common problems in using a computer. Specifically, this test had 25 multiple-choice items, 10 yes-or-no items, and 20 fill-in-the-blank items to evaluate the students' basic knowledge of computer operation, with a perfect score of 100. The Cronbach's α value of the test was 0.82, implying the high reliability of the test. Sample items for this test are presented in Appendix B.
Test of computational concepts and computational practices: Inspired by the CT evaluation methods used by Grover [39] and the characteristics of Small Basic, we developed this test to evaluate the students' knowledge of computational concepts and computational practices. Moreover, the development of this test followed the method commonly used in prior studies to ensure the content validity [40,47]. The same procedure for ensuring content validity was used as previously mentioned. Specifically, the test comprised 25 multiple-choice items, 10 yes-or-no items, and 20 fill-in-the-blank items, with a perfect score of 100. These items examined students' understanding of computational concepts and computational practices both in Small Basic and their daily lives. The Cronbach's α value of the test was 0.83, implying the high reliability of the test. Appendix C lists some sample items for this test.
Scale of computational perspectives: This scale was developed based on the computational thinking scale proposed by Korkmaz et al. [40]. The scale comprised 29 items in five dimensions, including eight items for creativity, six items for algorithmic thinking, four items for cooperation, five items for critical thinking, and six items for problem solving. The items were scored on a five-point Likert scale. To validate the scale in this study, the items were translated from English into Chinese by three English language teachers who are bilingual and have at least four years of teaching experience in secondary schools. As suggested by Alharbi [47], the scale was firstly translated into Chinese by two of the three teachers. Then, the items in Chinese were translated back to English by the third one. Their translations were discussed and revised amongst themselves and two professors.
Finally, the items were pretested for appropriateness among 25 secondary school students (not involved in the experimental group or the control group in this study). Based on the collected feedback, some statements were slightly modified to resolve any linguistic ambiguities. In this study, the Cronbach's α value of the scale was 0.92, and the Cronbach's α values of the five dimensions were 0.73, 0.80, 0.87, 0.76, and 0.83, respectively. The Cronbach's α values imply the high reliability of this scale.
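For reference, a minimal sketch of the Cronbach's α computation used to check the reliability of the tests and the scale; the score matrix below is a random placeholder, not study data:

```python
# Cronbach's alpha for an (n_respondents x n_items) score matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(1)
demo = rng.integers(1, 6, size=(25, 29))  # placeholder: 25 students x 29 items
print(f"alpha = {cronbach_alpha(demo):.2f}")
```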
Works: The performability of a work and the types of computational concepts used are important indicators of students' CT [9,11]. Thus, each work in this study was assigned a grade according to the degree of the work's performability, accuracy, and creativity. Table 3 shows the grades with corresponding descriptions and examples. Guided by this rubric, all the works were examined and graded by the same teacher. Examples were taken from the students' works in lesson three (the computational concepts illustrated were loops and variables), except for the "Creative", "PC", and "None" grades, because no work received these grades. "PA" stands for integrating Performability and Accuracy. "PC" stands for integrating Performability and Creativity. "AC" stands for integrating Accuracy and Creativity. "PAC" stands for integrating Performability, Accuracy, and Creativity.
In addition, open-ended interviews with the teacher were conducted every two weeks. The interview questions mainly focused on her perceptions with regard to involving the students in critical reflection and suggestions for improving CT training activities. Each of the students was interviewed after the experiment to obtain the students' opinions about the overall learning activities, such as their perceptions of engaging in critical reflection.
Students' Learning Performance in CT
Before the experiment, the two groups showed no significant difference in CT proficiency. First, the background questionnaire results showed that there was no difference between the two groups in terms of computational concepts and computational practices before the experiment. Second, the average scores of the experimental group and the control group on the pre-test of computer operation were 72.14 and 73.54, respectively. The results of an independent-sample t-test (t = −0.59, p > 0.05) indicated that the two groups had an equivalent knowledge of computer operation before the experiment, implying that the two groups had an equivalent operational ability to accomplish the CT-focused programming tasks designed in this experiment. Third, the average scores of the experimental group and the control group on the pre-scale of computational perspectives were 96.47 and 97.54, respectively. The results of an independent-sample t-test (t = −0.32, p > 0.05) showed that there were no significant differences between the two groups in relation to computational perspectives before the experiment.
After the experiment, the learning performance in CT of the experimental group was significantly better than that of the control group. The detailed results are as follows.
In order to statistically control for differences that might be attributable to the learning performance of the students in computational concepts and computational practices, we adopted an analysis of covariance (ANCOVA) to analyze the post-test scores of the two groups, including the students' basic knowledge of computer operation as a covariate. First, the homogeneity test (F = 0.00, p > 0.05) was passed. Then, the ANCOVA was carried out. As shown in Table 4, the adjusted mean and standard error of the experimental group were 76.95 and 1.35, respectively. The results of the ANCOVA also showed that the experimental group had a significantly better learning achievement than the control group (F(1,81) = 4.04, p < 0.05, partial η² = 0.05). There was a desired effect in the real educational context between the experimental group and the control group, as indicated by a partial η² value of 0.05 [48]. Consequently, it could be concluded that engaging in critical reflection was more effective at helping students master computational concepts and computational practices than simply receiving a summary from the teacher. Moreover, we graded the works according to their performability, accuracy, and creativity. Each work was assigned to one of the eight grades (as shown in Table 3 in Section 3.4). Then, the percentages of each grade were calculated. For instance, the experimental group had a total of 43 works in the tenth week, among which five works were graded as PAC. Thus, the percentage of PAC works in the tenth week for the experimental group was 12%. Figure 2 shows the learning performance of the two groups in different weeks. Overall, most works were graded as PA, implying that most of the students could complete the sample practices. Interestingly, we found that the two groups performed significantly differently during the learning process. Firstly, the experimental group had more works graded as AC and PAC than the control group. That is, the experimental group was more adept at synthesizing different computational concepts, which indicated that they had a better generalization of these computational concepts. Secondly, the control group had more non-performable works than the experimental group, especially in the middle stage (i.e., the fifth week to the ninth week).
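For reference, a minimal sketch of this ANCOVA; the file name and column names are illustrative placeholders, and the same form applies to the computational perspectives analysis below, with the pre-scale as the covariate:

```python
# ANCOVA sketch: post-test scores by group with the pre-test as covariate.
# File and column names are placeholders, not the study's actual data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ct_scores.csv")  # columns: posttest, pretest, group

model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II ANOVA table
```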
The data on the students' computational perspectives were also analyzed. The assumption of homogeneity was tenable (F = 1.24, p > 0.05). Then, one-way ANCOVA (shown in Table 5) was conducted to preclude the effects of students' prior computational perspectives. The results indicated that students in the experimental group had a significantly better learning achievement than students in the control group (F(1,81) = 4.61, p < 0.05, partial η² = 0.05); again, a partial η² value of 0.05 indicates a desirable effect size in a real educational context [48]. This result implied that critical reflection significantly stimulated the formation of these eighth-grade students' computational perspectives. The interview results also confirmed the effectiveness of critical reflection in developing students' computational perspectives. In particular, 37 (86%) of the students in the experimental group stated that engaging in critical reflection with their classmates enabled them to recognize the power of cooperation. Moreover, 35 (81%) of the students in the experimental group expressed that critical reflection helped them to find shortcomings, which in turn gave them the confidence and ability to question and redesign computational works. By contrast, 30 (73%) of the students in the control group acknowledged that there was no discernible change in their perspectives on cooperation and computational works.
Participants' Perceptions of Critical Reflection
The teacher who was involved in the experiment was interviewed seven times during the study. All of the teacher's comments were recorded chronologically in a text document for content analysis. The teacher's interviews yielded 76 comments on critical reflection, while the students in the experimental group made 43 comments. The comments were analyzed according to the three-layer coding procedure [49]. First, the coders read all of the comments several times and created open codes. Then, they refined and clustered all the codes into primary themes. Finally, the themes were further checked and compared to obtain core themes.
While the teacher liked using critical reflection to enhance eighth-grade students' CT, she also pointed out the problems that needed attention. She indicated that critical reflection enabled her to discern students' learning difficulties and improve their learning initiative. For instance, she stated, "critical reflection is helpful for me to gain insight into the students' problems and their problem-solving strategies" and "The thing I appreciate most with critical reflection is keeping the students actively involved". On the other hand, if the students in the experimental group were not prompted by the teacher, they did not actively engage in critical reflection, especially in the first few weeks of this experiment. The teacher said that "the students (in the experimental group) were highly dependent on reflection prompts, possibly because they had not memorized the steps of critical reflection".
Critical reflection was also generally accepted by the students in the experimental group. Thirty-eight (88%) of the students in the experimental group noted that critical reflection helped them develop CT. Firstly, they stated that critical reflection helped them generalize computational concepts. One student stated that "our teacher organized us to engage in critical reflection and discussion, which helped us connect textbook knowledge with practice and apply knowledge to solve similar problems in different contexts". Secondly, they indicated that critical reflection had a positive effect on their computational practices. When talking about this, one student said that "peers' evaluations (presented in the critical reflection process) enabled me to realize my shortcomings, and this made me want to further modify my programs and even change my way of thinking". Thirdly, the interview results showed that critical reflection helped the students form computational perspectives. Some of the students said the following: (1) "although I was never the one who was selected to give an oral reflection report, I gained various problem-solving strategies from my classmates," which implies that the student realized the power of learning with others; (2) "I prefer to ask questions during our critical reflection," which implies that the student felt empowered to ask questions; and (3) "when we reflected, I got a lot of ideas, which made me think that we can create a lot of interesting works with Small Basic", which implies that the student recognized computation as a medium of creation.
Effect of Critical Reflection on Advancing Students' CT
In order to identify the effect of critical reflection on enhancing eighth-grade students' CT, a quasi-experiment was conducted. The learning activity of the control group ended with the teacher's summary, whereas the experimental group engaged in critical reflection to further explore their computational experiences. The results confirmed that critical reflection is more effective at advancing students' CT. This is consistent with the finding that critical reflection is more conducive to stimulating deeper learning than direct feedback from a teacher [50].
With respect to computational concepts and computational practices, the experimental group made significantly better learning achievements. These results were observed in both the post-test scores and the learning process. As the CT-oriented practices became increasingly complicated, the control group had more non-performable works than the experimental group, especially in the middle stage of the semester. On the other hand, the experimental group had more works graded as AC and PAC than the control group. In particular, the experimental group had nearly three times as many PAC works as the control group in the last stage. These creative works conveyed that the experimental group could better generalize these computational concepts and computational practices to sustainably use them to create new works and solve problems, while the control group's understanding of these computational concepts and computational practices was relatively superficial and narrow. The different formative learning performance of the two groups in this study was consistent with the conclusion of other studies: critical reflection can support the development of computational practices in the domain of programming and transfer CT to more general problem-solving contexts [23,26,27]. There are three possible reasons for these results. Firstly, critical reflection can heighten students' internal attention, which facilitates the efficient integration of knowledge and idea generation [51]. Secondly, critical reflection can benefit students' follow-up actions to solve similar problems [46]. In this study, the students in the experimental group explained that critical reflection helped them discover the powers and limitations of computational ideas and their way of thinking. Moreover, various suggestions raised during critical reflection were beneficial to testing and debugging in subsequent computational practices. The interview results also showed that many students in the control group were not efficient at debugging. When they encountered problems, the students in the control group felt frustrated and did not know what to do, while the students in the experimental group were more confident and more likely to seek help from others. As a consequence, the students in the experimental group could effectively complete their works within the time limit. Thirdly, critical reflection-integrated programming education encourages students to review what they have learned, which contributes to converting new knowledge into long-term memory [52].
In terms of computational perspectives, the experimental group also had significantly higher test scores than the control group. This indicated that critical reflection significantly facilitated the development of these eighth-grade students' computational perspectives. Prior research has shown that CT-oriented programming practices enable students to master computational concepts and practical skills, but that programming is insufficient to cause a shift in perspectives [18]. Nevertheless, critical reflection provides students with opportunities for re-examining specific computational experiences from different perspectives, as well as for building a bridge between specific computational experiences and overall experiences. Thus, many studies have used critical reflection to not only help the students develop a comprehensive perspective on themselves and their initial works, but also enable them to come up with alternative problem-solving methods [32,46]. Similarly, in this study, the students in the experimental group were prompted to evaluate and question specific problems and solutions from different perspectives, as well as to link CT to other scenarios. These peer-evaluations and generalizations stimulated the students' thinking and ideas, and ultimately benefited them by giving them a sense that they can use computational tools for creation [53]. When the experimental group engaged in critical reflection together, they received various suggestions from others. The interviews confirmed that critical reflection equipped the students with ideas for optimizing their original works and creating new applications for CT. This enhanced their appreciation of learning with others. On the other hand, peers' judgements challenged the students' original assumptions and habitual ways of solving problems. Every student had to put up with and try to accept different points of view. This helped the students correct the flaws in their habitual assumptions and assimilate new ideas, which enabled them to ascertain that they had achieved personal development and gave them a sense of accomplishment. Moreover, sharing and arguing with peers could reinforce the students' roles as learning facilitators and improve their academic self-efficacy [52]. As a result, the experimental group were more confident to ask questions.
Compared with the experimental group, the control group acted more as receivers of information after completing the computational practices. The ideas and perspectives they had access to were mainly from the teacher; they lacked the collision of different ideas that triggers metacognition of the CT learning process and alters perspectives.
Perceptions of Integrating Critical Reflection into CT Education
In this study, both the teacher and students recognized the role of critical reflection in developing the eighth-grade students' CT. The interview results showed that critical reflection was helpful for the eighth-grade students to generalize computational concepts, engage in computational practices, and form computational perspectives. While CT-oriented programming practices enabled students to apply computational concepts and acquire practical computational skills, critical reflection further helped the students develop various appropriate problem-solving strategies associated with computational concepts and computational practices [15,23,26,27]. When the students in the experimental group discussed with their peers to complete critical reflection on their works, they obtained different ideas for applying CT to solve problems from their peers and then generated new problem-solving strategies. This helped them form computational perspectives (i.e., realizing the power of learning with others, feeling empowered to ask questions, and recognizing computation as a medium of creation). Consistent with previous findings, the interview results confirmed that reflection reports could benefit the teacher by helping her to identify the learning difficulties faced by the students [11,18]. When the teacher carried out CT teaching activities following the conventional teaching approach, she could identify students' learning difficulties only through her own observations and the students' reports; limited by the teacher's energy and the students' initiative, such difficulties were hard to uncover. Besides, the critical reflection used in this study had the effect of improving the eighth-grade students' learning initiative, which is superior to the non-critical reflection used in other studies on developing secondary school students' CT. As explained by the students in the experimental group, when they participated in critical reflection, they were prompted by their teacher and given many helpful suggestions by their classmates, which made them more willing to express their ideas and more confident to improve their works.
Furthermore, this study found that interaction with others was crucial for secondary school students to engage in critical reflection. On the one hand, this study confirmed that interaction with peers can help young learners reflect critically and come up with more alternative plans [54,55]. The different opinions of their peers led the students to re-examine their own learning process and perspectives. On the other hand, it was notable that students' participation in critical reflection was particularly dependent on the teacher's prompting. If the teacher did not actively give prompts, the students had difficulty engaging in critical reflection and even lost focus, which was especially true in the first few weeks of the experiment. This was possibly because the eighth-grade students were still not well informed about the steps of critical reflection. These findings imply that completing critical reflection independently is still a challenge for secondary school students.
The findings of this study have two practical implications. First, this study verified the effectiveness of critical reflection at improving eighth-grade students' CT. Hence, we suggest that instructors should integrate critical reflection into the CT learning process for secondary school students. Second, this study revealed that interaction played a positive role in facilitating secondary school students' critical reflection. In particular, we noticed that these young students were in need of guidance from their teacher when conducting critical reflection. Thus, we suggest that instructors should design multiple interaction strategies and provide appropriate support to broaden and deepen the secondary school students' critical reflection on the application of CT.
Limitations and Future Research
In this study, there were more than 40 students in each class, but only one teacher. It was therefore difficult for the teacher to capture all of the students' real-time learning performance in a naturalistic classroom setting. Future research should apply behavior-monitoring tools and thinking visualization tools to capture and analyze students' CT performance. Moreover, future research should adopt more strategies to stimulate students to participate actively. For example, paired programming and peer assessment can be integrated into learning activities [10,11], insofar as students noted that their peers' suggestions led them to come up with various problem-solving strategies.
Conclusions
In present and future society, equipping everyone with CT is recognized as an important component of sustainable educational development goals. In order to develop students' computational thinking more effectively, this study proposed the use of critical reflection to promote secondary school students' understanding and application of CT. The quasi-experiment conducted in this study revealed that critical reflection effectively improved the learning achievement of eighth-grade students with regard to computational concepts, computational practices, and computational perspectives. Moreover, the two groups showed significantly different learning performance during the learning process. The interview results confirmed that critical reflection helped eighth-grade students learn from their experiences. In addition, we found that interaction with others plays an essential role in stimulating critical reflection in students. Finally, the students' ability to use computers played an important role in completing the computational practices in this study; for instance, students with poor computer skills were more likely to be unable to complete computational practices using Small Basic in the given amount of time.
Appendix B
Multiple-choice items
M2. In the Windows operating system, which of the following statements about filenames are correct? ( )
A. Chinese characters are allowed for file names
B. Multiple dot separators are allowed for file names
C. Spaces are allowed for file names
D. Any character is allowed for file names
Yes-or-no items
Y1. You can have two identical files in the same folder. ( )
Y2. The software will be shut down when its window is minimized. ( )
Fill-in-the-blank items
F1. If you find an extra typo in front of the cursor while typing, you can press the ( ) key to delete it.
F2. To enter the "*" above the numeric key "8", you must first hold down the ( ) key and then press the numeric key.
Since all the computers used in this experiment had the Windows operating system installed, the test items were also based on operations in the Windows operating system. "( )" stands for "Please write your answers in the brackets so that teachers can find your answers quickly and accurately".
Appendix C
Multiple-choice items
M2. Which of the following situations requires a conditional statement? ( )
A. If it doesn't rain tomorrow, we will go to the amusement park.
B. Only after you finish your homework can you play games.
C. We will be late if we do not leave now.
D. We should study hard.
Relevance to CT: This item examines students' understanding of a computational concept (Conditionals) in everyday problem solving.
Yes-or-no items
Y1. When writing a program with many repeated statements, you can use conditional statements to make the program concise. ( )
Relevance to CT: This item examines students' understanding of computational concepts (Conditionals and Loops) in programming.
Y2. We need to take different conditions into consideration when making plans, which is a parallel approach. ( )
Relevance to CT: This item examines students' understanding of computational concepts (Conditionals and Parallelism) in everyday problem solving.
Fill-in-the-blank items
Lily wrote a program to calculate "1 + 2 + 3 + . . . + 100" using Small Basic. As shown in the figure below, it did not work and showed an error prompt. (Figure: Lily's Small Basic program and the error prompt it produced.)
F1. From the figure above, we can see that the error was in line ( ).
F2. If we want to help Lily correct the mistake in the program, we should change line ( ) of the program to ( ).
Relevance to CT: These items examine students' understanding of computational practices (Debugging and testing) in programming.
"( )" stands for "Please write your answers in brackets so that teachers can find your answers quickly and accurately".
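As an aside on items F1 and F2 above, a minimal Python rendering of the kind of corrected summation loop the debugging items target is sketched below; the original figure of Lily's Small Basic program is not reproduced, so this is an illustrative equivalent rather than her actual code.

```python
# An illustrative Python equivalent of the corrected summation loop.
total = 0
for i in range(1, 101):   # accumulate 1 + 2 + ... + 100
    total = total + i
print(total)              # 5050, matching the closed form 100 * 101 / 2
```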
Discriminant Method of Sand Liquefaction
With the proposal of the 1.8 billion mu (a Chinese land-area unit) red line for cultivated land and the rise of land reclamation in China, more and more national defense and civil projects are being built on soil with poor geological conditions, such as the reclaimed reefs in the South China Sea. Liquefaction damage caused by earthquakes, such as water spraying, sand boiling, and surface cracks, has a great impact on such buildings and poses a threat to people's lives and property. Therefore, it is of great significance to study the liquefaction of sand in foundation soil. Since Academician Huang Wen-xi first proposed the use of indoor dynamic triaxial tests to study liquefaction, a series of achievements have been made on the liquefaction of foundation soil. Research methods for sand liquefaction have developed from simple single tests to combined numerical simulation and testing. Drawing on the literature from China and abroad, this paper reviews the discriminant methods for sand liquefaction, experimental studies of sand liquefaction, and post-liquefaction analysis.
Introduction
With the rapid development of the economy and the increasing frequency of human activities, considerable influence has been exerted on activities inside the earth. In recent years, some strongly destructive earthquakes have occurred all over the world, such as the magnitude 8.3 earthquake in San Francisco, USA, on April 18, 1906; the magnitude 7.9 earthquake in Kanto, Japan, on September 1, 1923; the magnitude 7.1 earthquake in Fukui, Japan, on June 28, 1948; and the magnitude 8.0 earthquake in Wenchuan, China, on May 12, 2008. These earthquakes brought great damage and inconvenience to people's production and life, resulting in huge economic losses and casualties. According to incomplete statistics, the high death tolls after earthquakes are mainly caused by the collapse of buildings. There are many reasons for the collapse of buildings, such as a low seismic design grade, a low strength grade of the building materials, or liquefaction of the foundation soil under earthquake loading, which damages the buildings. With the continuous development of modern science and technology and the application of computer big data, there is now technical support for predicting the occurrence of earthquakes. Therefore, seismic research on buildings, especially on foundation soil problems, is of great significance.
Huang Wenxi is a pioneering scholar in the study of the sand liquefaction mechanism in China [1]. As early as the late 1960s, Huang Wenxi proposed that the dynamic triaxial apparatus be used to study the liquefaction of soil. Wang Yajun [2] proposed a pore pressure failure model for marine sand in the Zhoushan sea area; the morphology of the sand samples was analyzed by XRD and SEM to study the influence of particle shape on soil liquefaction, and a pore pressure model for liquefaction failure of sand under cyclic excitation was established. Liu Hanlong [3] and others carried out a series of experimental studies on calcareous sand using the dynamic triaxial apparatus and analyzed the mechanical characteristics of liquefaction of calcareous sand in the South China Sea. Seed, Finn [4,5], and others have carried out a large number of experiments since the 1960s and put forward the widely used simplified method, which has made outstanding contributions to the study of sand liquefaction in the field of geotechnical engineering.
Liquefaction judgment
At present, there are many methods to discriminate sand liquefaction in China and abroad, which can be divided into traditional and non-traditional methods. Although the two classes of methods differ, each has its own advantages and disadvantages, mainly because of the complexity of the factors affecting sand liquefaction.
Traditional discrimination method
The traditional discrimination methods can be divided into field tests and laboratory tests.
The discrimination mechanism of the field test is as follows: in areas of macroscopic seismic liquefaction and non-liquefaction, the discrimination-index data measured by field tests are analyzed, statistically processed, and summarized to establish the relationship between the data and macroscopic seismic disaster data, and an empirical formula or liquefaction boundary is obtained to judge whether liquefaction occurs.
(1) Criterion of the critical blow count of the standard penetration test (SPT)

$$N < N_{cr} \quad (1)$$

where $N$ is the actual number of SPT blows and $N_{cr}$ is the critical number of SPT blows for liquefaction. According to the code for seismic design of buildings (GB 50011-2010), the critical SPT blow count for liquefaction is calculated as

$$N_{cr} = N_0 \, \beta \, \left[\ln(0.6 d_s + 1.5) - 0.1 d_w\right] \sqrt{3/\rho_c}$$

where $N_{cr}$ is the critical SPT blow count for liquefaction identification; $N_0$ is the benchmark SPT blow count for liquefaction identification; $d_s$ is the SPT depth in saturated soil (m); $d_w$ is the groundwater level depth (m); $\rho_c$ is the percentage of clay content, taken as 3% when it is less than 3% or for sandy soil; and $\beta$ is the adjustment coefficient, equal to 0.80 for the first design group, 0.95 for the second group, and 1.05 for the third group.
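As a worked illustration of the GB 50011-2010 criterion reconstructed above, a short Python sketch follows; the input values are hypothetical, and the code text itself should be consulted for any design use.

```python
# A worked sketch of the GB 50011-2010 critical SPT blow count; input
# values are hypothetical and serve only to illustrate the formula.
import math

def critical_spt_blow_count(n0, ds, dw, rho_c, beta):
    """N_cr = N0 * beta * [ln(0.6*ds + 1.5) - 0.1*dw] * sqrt(3 / rho_c)."""
    rho_c = max(rho_c, 3.0)  # clay content taken as 3% when below 3%
    return n0 * beta * (math.log(0.6 * ds + 1.5) - 0.1 * dw) * math.sqrt(3.0 / rho_c)

# Example: benchmark blow count 10, test depth 8 m, water table 2 m,
# 5% clay content, second design group (beta = 0.95).
print(round(critical_spt_blow_count(10, 8.0, 2.0, 5.0, 0.95), 1))  # ~12.1
```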
According to the code for geological exploration of water resources and hydropower engineering (GB 50487-2008), the critical SPT blow count for liquefaction is calculated with a similar formula, in which $\rho_c$ is the percentage of clay content, taken as 3% when it is less than 3%, and $N_0$ is the benchmark SPT blow count for liquefaction identification; when the standard penetration point is within 5 m below the ground, a depth of 5 m is used for the calculation.
According to the code for seismic design of buildings (GB 50011-2001), for liquefiable soil layers the liquefaction grade is determined from the liquefaction index $I_{lE}$ calculated by

$$I_{lE} = \sum_{i=1}^{n} \left(1 - \frac{N_i}{N_{cri}}\right) d_i W_i$$

where $I_{lE}$ is the liquefaction index; $N_i$ is the measured SPT blow count at point $i$ in the saturated soil; $N_{cri}$ is the critical SPT blow count corresponding to the depth of $N_i$; $n$ is the total number of SPT points in the saturated soil layer within 15 m depth of each borehole; $d_i$ is the thickness of the $i$-th soil layer (m); and $W_i$ is the weighting function value of the $i$-th layer.
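A minimal Python sketch of this summation follows; the layer data (N_i, N_cri, d_i, W_i) are hypothetical example values, not measurements from any cited site.

```python
# A minimal sketch of the liquefaction index summation; the layer tuples
# (Ni, Ncri, di, Wi) are hypothetical example values.
def liquefaction_index(layers):
    """layers: iterable of (Ni, Ncri, di, Wi) tuples, one per SPT point."""
    return sum((1.0 - ni / ncri) * di * wi
               for ni, ncri, di, wi in layers
               if ni < ncri)  # only points judged liquefiable contribute

# Two hypothetical liquefiable layers within 15 m depth.
print(liquefaction_index([(6, 12, 2.0, 10), (9, 12, 1.5, 8)]))  # 13.0
```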
The liquefaction grade is then classified according to the value of $I_{lE}$.
(2) Criterion of the static cone penetration test (CPT)
In this criterion, $p_{scr}$ and $q_{ccr}$ are the critical values of the CPT specific penetration resistance and cone tip resistance of saturated soil (MPa); $d_w$ is the groundwater depth (m); $d_u$ is the thickness of the overlying non-liquefiable soil layer (m); $\alpha_w$ is the correction factor for groundwater depth, $\alpha_w = 1.13$; and $\alpha_u$ is the correction factor for the thickness of the overlying non-liquefiable soil.
When the measured specific penetration resistance or cone tip resistance is less than the corresponding critical value, the soil should be judged as liquefiable; otherwise, the sand is judged not to liquefy.
Other field test methods include the shear wave velocity method, the Rayleigh wave velocity method, and the energy discrimination method. With these field methods, the phenomenon of sand liquefaction can be observed directly, multiple factors affecting sand liquefaction can be considered, and the sample-preparation disturbance of laboratory tests is avoided, so they have a certain practicality and reliability. However, there are also some shortcomings: (1) the accumulated field test data on the liquefaction of various soils are relatively scarce; (2) the investigation data on foundation liquefaction are mostly obtained at free-field sites, so, generally speaking, these methods are suitable for liquefaction identification of free-field sites; (3) these methods are based on liquefaction case histories from earthquake sites, and are therefore regional rather than universal.
Laboratory test methods mainly include the dynamic triaxial test, the centrifuge test, and the shaking table test.
(1) Dynamic triaxial test
As early as the late 1960s, Huang Wenxi proposed that the dynamic triaxial apparatus be used to study the liquefaction of soil. Since then, China has opened the door to the study of sand liquefaction using the dynamic triaxial apparatus, and a series of achievements have been made. The dynamic triaxial apparatus is shown in Figure 1.
In 2015, Liu Hanlong et al. studied the dynamic liquefaction characteristics of calcareous sand from the South China Sea using a geotechnical dynamic triaxial apparatus and analyzed the dynamic stress-strain relationship and dynamic pore pressure characteristics of calcareous sand through experiments. The results show that: (1) under isobaric consolidation, the deformation mode of saturated calcareous sand under different confining pressures is the same, and the cumulative plastic strain increases with the number of vibration cycles; (2) under isobaric consolidation, the ability to resist deformation is stronger at the initial stage of loading, and the strain amplitude increases with the number of loading cycles. In the shaking table test, a sand box is used to simulate the foundation. The sample sizes of the large and small shaking tables are 160 cm × 90 cm × 120 cm and 51 cm × 34 cm × 32 cm, respectively. Gray fine sand is used as the foundation soil, and the water sedimentation method is used to form the foundation. The experimental results show that: (1) due to the difference in pore pressure on the two sides of a symmetrical structure, the building will topple in the direction where the pore pressure is leading.
(2) One of the decisive factors affecting the increase of pore water pressure is the vertical dynamic stress at the base of the structure.
Unconventional discrimination method
Unconventional discrimination methods include the neural network method and the fuzzy comprehensive evaluation method. The neural network method is introduced below.
(1) Artificial neural network method
The artificial neural network (ANN) is a relatively new discriminant method that combines the global search ability of the genetic algorithm with the guiding ideas of the BP (back-propagation) algorithm, and it is applied to large volumes of data using MATLAB. In 2012, Fan Fu-song and others used a generalized regression neural network (GRNN) to analyze the discrimination of sand liquefaction. Based on a large number of experimental data from a particular site, the standard penetration method and the Seed method were first used to judge liquefaction; the samples for which the two methods gave the same discrimination results were then selected, and the generalized regression neural network was used to perform a secondary discrimination.
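To make the GRNN idea concrete, a compact sketch follows; a GRNN is essentially Gaussian-kernel regression over stored training samples, and all feature values, labels, and the smoothing parameter below are hypothetical illustrations rather than data from the cited study.

```python
# A compact NumPy sketch of a generalized regression neural network (GRNN),
# i.e., Gaussian-kernel regression over stored training samples; the
# feature values and labels below are hypothetical.
import numpy as np

def grnn_predict(x_train, y_train, x, sigma=0.3):
    # Pattern layer: Gaussian kernel weight for every stored sample.
    d2 = np.sum((x_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Summation layer: kernel-weighted average of the training labels.
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)

# Hypothetical samples: [normalized SPT blow count, normalized depth].
x_tr = np.array([[0.2, 0.3], [0.3, 0.5], [0.8, 0.4], [0.9, 0.7]])
y_tr = np.array([1.0, 1.0, 0.0, 0.0])  # 1 = liquefied, 0 = not liquefied
print(grnn_predict(x_tr, y_tr, np.array([0.25, 0.4])))  # ~0.9 -> liquefied
```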
Conclusion
(1) Although there have been many studies on sand liquefaction, it remains an important research topic in the field of geotechnical engineering, and the test methods can be further improved.
(2) Specialized numerical simulation software can be used to simulate sand liquefaction problems. By comparing numerical simulation results with experimental results, corresponding conclusions can be drawn. Therefore, numerical simulation may become a new trend in the study of sand liquefaction.
(3) With the continuous maturation of big data technology, non-traditional methods for studying sand liquefaction will become more and more mature and will play an important role in the auxiliary verification of sand liquefaction tests.
Alteration in the mRNA expression of genes associated with gastrointestinal permeability and ileal TNF-α secretion due to the exposure of silver nanoparticles in Sprague–Dawley rats
Background Silver ions from silver nanoparticles (AgNPs), or the AgNPs themselves, ingested from consumer health care products or indirectly from food contact materials can interact with the gastrointestinal tract (GIT). The permeability of the GIT is strictly regulated to maintain barrier function and proper nutrient absorption. The single-layer intestinal epithelium adheres and communicates actively with neighboring cells and the extracellular matrix through different cell junctions. In the current study, we hypothesized that oral exposure to AgNPs may alter the intestinal permeability and the expression of genes controlling cell junctions. Changes in cell junction gene expression in the ileum of male and female rats administered different sizes of AgNPs for 13 weeks were assessed using qPCR. Results The results of this study indicate that AgNPs have an altering effect on cell junctions that are known to dictate intestinal permeability. mRNA expression of genes representing tight junction (Cldn1, Cldn5, Cldn6, Cldn10 and Pecam1), focal adhesion (Cav1, Cav2, and Itgb2), adherens junction (Pvrl1, Notch1, and Notch2), and hemidesmosome (Dst) groups was upregulated significantly in females treated with 10 nm AgNPs, while no change or downregulation of the same genes was detected in male animals. In addition, a higher concentration of the pro-inflammatory cytokine TNF-α was noticed in AgNP-treated female animals as compared to controls. Conclusions This study proposes that the interaction of silver with the GIT could potentially initiate an inflammatory process that could lead to changes in gastrointestinal permeability and/or nutrient deficiencies in a sex-specific manner. Fully understanding the mechanistic consequences of oral AgNP exposure may lead to stricter regulation of the commercial usage of AgNPs and/or improved clinical therapy in the future.
Introduction
Silver nanoparticles (AgNPs) are small, spherical particles of silver that range between 1 and 100 nm in size and continually release silver ions [1]. Nanoparticles can behave differently than larger particles of the same matter due to their extraordinary surface area to volume ratio [2,3]. Use of AgNPs is not currently authorized in the US; however, AgNPs have been incorporated into a variety of consumer goods worldwide, including clothing, medical products, and food packaging, as antimicrobials [4][5][6]. AgNPs have the unique property of preventing the growth of bacteria and viruses and are known to extend the shelf-life of many food products [7][8][9][10]. Silver ions from AgNPs that are incorporated into food contact materials are likely to migrate into the food by diffusion, dissolution, and/or desorption [11,12]. In addition, colloidal silver with AgNPs is found in health supplements sold commonly in stores that claim to support health [13,14]. The inclusion of AgNPs in consumer products and health supplements has prompted a need to assess the safety of such materials. In a monitored human oral dosing study, two doses (10 and 32 ppm) of a commercial silver nanoparticle solution were consumed by healthy individuals over 14 days [15]. The results from this human study did not show observable clinically important toxicity markers. However, peak serum silver concentrations were detected in 42% and 92% of subjects in the 10 ppm and 32 ppm dosed groups, respectively. This warrants further investigation of additional critical parameters, such as the effect on intestinal epithelial permeability, especially with long-term exposure. Due to the high potential of gastrointestinal exposure to AgNPs from health supplements, it is important to understand the potential adverse health effects that may occur due to changes in intestinal mucosal permeability [16,17].
The gastrointestinal tract (GIT) is the largest mucosal surface of the human body and is responsible for barrier function, digestion of food, nutrient/water absorption, and excretion in healthy individuals. To carry out these essential functions properly, the intestinal epithelial cells (IECs) must maintain a constant state of homeostasis. Tight junctions, among other cell junctions, play a key role in sealing the intestinal epithelium to prevent harmful microbes and xenobiotics from entering systemic blood circulation [18]. A detailed schematic of the six major cell junction types is shown in Fig. 1. There is a wide variety of diseases and disorders associated with intestinal inflammation due to altered permeability, including Crohn's disease, irritable bowel syndrome, and celiac disease [19,20]. Importantly, oral exposure to AgNPs may cause alterations in intestinal permeability in healthy people or exacerbate the poorly regulated permeability in patients who suffer from gastrointestinal inflammation and other gastric disorders. Changes in intestinal permeability may allow exogenous molecules to cross the epithelial barrier, which can result in "leaky gut syndrome", or lead to nutrient deficiencies and malnutrition [21,22]. On the other hand, alterations in permeability may also activate immune cells, leading to infection and inflammation [23]. Moreover, cytokines such as intestinal tumor necrosis factor α (TNF-α) are usually elevated during gastrointestinal inflammation [24].
In our previously published research, the 10 nm AgNPs were found to have the greatest impact on gut permeability, compared to other sizes of AgNPs, in an in vitro model [25]. In the present study, we utilized ileal tissue from Sprague-Dawley rats exposed to AgNPs by oral gavage to examine the effects on intestinal permeability via gene expression analysis. The purpose of this study was to compare our previous findings, where we used an in vitro intestinal epithelial cell culture model, to data derived from a 13-week oral gavage study in a rodent model to understand the potential alterations in intestinal permeability during AgNP exposure.
Effect of AgNPs in male vs. female animals
AgNPs (10 nm and 110 nm) and silver acetate (AgOAc) were suspended in sodium citrate and water, respectively. Carboxymethylcellulose (CMC) at 0.1% was used in the sodium citrate and all AgNP solutions, whereas 0.1% methylcellulose (MC) was used in the water and silver acetate solutions as vehicle to prevent the solutions from passing through the intestine too quickly. The AgOAc control in the study served to differentiate gene expression changes due to silver ions or the nanoparticles, specifically. After RNA extraction and cDNA synthesis, qPCR analysis was performed to assess the gene expression of cell junction and permeability genes in the small intestine. Interestingly, the numbers of genes upregulated and/or downregulated were entirely different in male and female animals (Fig. 2). Details regarding the fold change of each gene are represented in subsequent figures.
The expression of a gene was considered upregulated or downregulated if the fold change was equal to or greater than twofold. However, changes in gene expression were only considered statistically significant with p ≤ 0.05.
Changes in expression of tight junction genes
In males treated with AgNPs, there were fluctuations in tight junction genes, although these can be regarded as noise due to a lack of statistical significance (Fig. 4a). However, one gene, Ocln, was significantly downregulated in the AgOAc group.
In females, many of the claudin and other tight junction genes were upregulated after AgNP exposure (Fig. 4b). Cldn1, Cldn10, Cldn5, Cldn6, Icam1, and Pecam1 were tight junction genes upregulated in only the 10 nm AgNP group. Some of these genes were upregulated with a greater magnitude than the others. Namely, Cldn1, Cldn10, and Cldn5 were all significantly upregulated more than fivefold. In contrast, 2 different tight junction gene Cldn15 and Cldn9 were upregulated only in the AgOAc group. Overall, the tight junction genes were the most affected family of cell junction genes by AgNPs, especially in female animals.
Changes in expression of focal adhesions
Upon analysis of focal adhesion gene expression in males, only the group treated with AgOAc had a downregulation in Cav1 and Itga8 (Fig. 5a). Interestingly, females had a distinctly opposite pattern of focal adhesion gene expression (Fig. 5b). The group treated with 10 nm AgNP experienced a significant upregulation in Cav1, Cav2, and Itgb2, while the AgOAc group underwent upregulation of Itgal.
Changes in expression of adherens junctions
Male rats did not experience significant alterations in adherens junction gene expression with AgNP treatment (Fig. 6a). When analyzing adherens junction genes in females, it was noted that Notch1, Notch2, and Pvrl1 were upregulated in the 10 nm AgNP group (Fig. 6b). Interestingly, Notch3 and Notch4 were upregulated significantly in only the AgOAc treated group.
Changes in expression of gap junctions
Gap junction gene expression was not affected by AgNPs in male rats. In female animals, Gja3 expression increased only in the AgOAc group, and this change was greater than tenfold (Fig. 7b).
Changes in expression of desmosomes and hemidesmosomes
Male animals did not experience any changes in hemidesmosome genes, but female animals treated with 10 nm AgNP had a significant upregulation of Dystonin (Dst) (Fig. 7a, b). Furthermore, downregulation of Dsg4, a desmosome gene, was observed in female rats treated with 10 nm AgNP. Interestingly, this is the only downregulated gene observed in any female group of this project.
Changes in protein level of TNF-α
AgNPs caused an increased pro-inflammatory response (TNF-α secretion) in all experimental animals as compared to the respective controls. However, male animals did not show a statistically significant difference in TNF-α secretion. Female animals treated with 10 nm AgNPs had a significantly higher level of TNF-α (Fig. 8a, b).
Discussion
The increased use of AgNPs has prompted the urgency to address the knowledge gap regarding the potential gastrointestinal effects of AgNP exposure [26]. The structural integrity and barrier function of intestinal epithelial cells are regulated by several genes that include notch receptors, claudins, and desmosomes. These genes play a significant role in activating cell signaling for immune activation and mucin secretion to maintain barrier function. Furthermore, the single cell layer of intestinal epithelium plays an essential role in both nutrient absorption and barrier function in healthy individuals. Importantly, cell junctions, such as tight junctions, adherens junctions, and gap junctions, are held responsible for cell adhesion and communication within the intestinal epithelium [27][28][29]. AgNPs can interact with the host mucosa as nanoparticles, as released ions, or in a changed composition (e.g., as AgCl) formed in the stomach. The present study was designed to examine the changes in cell junction gene expression in the gastrointestinal epithelial layer of male and female rats exposed orally to different sizes of AgNPs.
The results from this study indicate that there is a substantial difference of gene expression between male and female animals. In general, male animals experienced downregulation of cell junction genes, while female animals underwent upregulation, and many of those changes were statistically significant (Table 1). In females, 5 out of the 6 groups of cell junction genes were affected by 10 nm AgNPs. Tight junction (Cldn1, Cldn5, Cldn6, Cldn10 and Pecam1), focal adhesion (Cav1, Cav2, and Itgb2), adherens junction (Pvrl1, Notch1, and Notch2), and hemidesmosome (Dst) groups were all upregulated significantly in females treated with 10 nm AgNP, indicating potential changes in intestinal permeability. It was also observed that most of the changes in female gene expression were in the tight junction family, specifically claudin genes. These genes have been studied thoroughly and the altered claudin genes in this study were noted to be classified as the "classic claudin" family [30]. Tight junctions are the most important junctions in the intestinal epithelium for the control of paracellular transport [18]. Specifically, Cldn10 contributes to the formation of pore to facilitate paracellular transport. As mentioned earlier, females exhibited greater changes in gene expression than males. This unambiguous difference between the sexes may be explained by hormonal physiology. Tight junctions are strictly regulated by sex hormones [31,32]. Several of the genes that this study found to be altered significantly, such as Pvrl1, have been associated with progesterone regulation [33]. Additionally, the expression of Cav1 has been linked to estrogen levels in rats [34,35]. Remarkably, sexual dimorphism in response to exogenous substances has been found to be increasingly important in toxicological studies [16,36]. Thus, it may be advantageous to monitor hormone levels in future in vivo studies, specifically with regard to the female menstrual cycle.
In intestinal epithelial cells, Notch signaling is involved in cell-cell communication with neighboring cells, and cross talk through Wnt signaling pathways of intestinal secretory cells [37]. Notch signaling is also responsible for differentiation of proliferated cells into goblet cells [38,39], which is essential for secretion of secretary mucins. Additionally, a desmosome gene (Dsg4) was downregulated significantly in females treated with 10 nm AgNP. Since desmosomes are responsible for cell to cell adhesion in epithelial cells, these results suggest a loss of integrity in the intestinal epithelium.
The central goal of this study was to understand the impact of different sizes of nanoparticles on the permeability of the gastrointestinal system in males and females. Size differences between AgNPs and the release of ions from AgOAc may affect cellular components disparately, eliciting different gene expression patterns. We have previously shown higher microbicidal activity of the smaller AgNPs (10 nm) as compared to the larger AgNPs (110 nm) when animals were orally gavaged [17]. This difference was attributed to greater production of silver ions by 10 nm AgNPs due to their high surface area to volume ratio, suggesting they can exert more toxicity than a larger particle could. Moreover, the larger AgNPs (110 nm) may have a tendency to agglomerate [17]. It is well known that commensal bacteria form a protective layer and maintain intestinal epithelial cell permeability. In vitro studies by our group [25] showed that smaller nanoparticles are more capable of passing through cell junctions and disrupting essential processes. Thus, upregulation of the permeability-related genes may be a defense mechanism by the host to protect itself.
In males, expression of some genes in the tight junction (Ocln) and focal adhesion (Itga8 and Cav1) groups was altered due to the exposure to AgOAc, but not AgNPs. Genes that were observed to have a decrease in expression indicate looser cell junctions and an increase in intestinal permeability. Thus, it is tempting to speculate that silver ions (released via AgOAc) may have an impact on permeability in male rats; however, AgNPs did not have a significant effect on gene expression in male animals. Females also experienced changes in mRNA expression in the AgOAc group, albeit in different genes (Cldn15, Cldn9, Gja3, Itgal, Notch3, Notch4). One animal study revealed that Cldn15 is critical for transporting Na+ through paracellular spaces to the intestinal lumen to maintain the ionic balance, which in turn facilitates the efficient absorption of glucose and other nutrients from the intestinal fluid [40]. Higher expression of Gja3 could contribute to the formation of gap junctions between two adjacent cells to release the pressure due to higher absorption of solute molecules. In this study, the tight junction family is the most adversely affected by AgNP exposure. Increased expression of the tight junction genes in females correlated with the increased secretion of TNF-α by the intestinal tissue. TNF-α is a pro-inflammatory cytokine and affects epithelial permeability. Increased intestinal permeability may further promote exposure to luminal content and trigger an immunological response and intestinal inflammation [24,41]. It is possible that the differently expressed genes are attempting to compensate for the irritated and/or inflamed intestinal epithelium [42]. Barrier function is a critical responsibility assigned to claudins [43], and thus gastrointestinal infections could be of particular concern in AgNP exposure [44]. Alternatively, it is important to consider that the changes in cell junction gene expression or permeability could potentially lead to malnutrition and nutrient deficiencies. A recent study found that mice with anorexia experienced alterations in genes controlling intestinal permeability [45]. Additionally, mice with a double knockout of Cldn2 experienced defective paracellular Na+ and nutrient transport in the gut and died from malnutrition [46], suggesting that alterations in only a few cell junction genes can make a lethal impact on individuals. However, the weight of the female animals used in this study did not change significantly throughout the study when gavaged with AgNPs [16]. AgNP-gavaged male animals showed some increase in body weight, but this increase was not considered biologically relevant [16].
Overall, it is important to note that many of the examined cell junction genes were altered significantly in animals exposed to AgNPs. Similarly, the in vitro conclusions from this group's previous publication indicate AgNP exposure may cause subtle alterations in cell junctions and intestinal permeability [25]. Earlier reports described the effect of AgNPs on blood-brain barrier (BBB) permeability in a rat model, where intravenous, intraperitoneal, or intracerebral administration of nanoparticles resulted in BBB breakdown in vivo [47]. To the authors' knowledge, this is the first time that intestinal permeability alterations from oral AgNP exposure have been studied in a rat model. From this study, it is proposed that oral exposure to AgNPs initiates a pro-inflammatory reaction that may lead to changes in intestinal permeability. A cascade of these reactions may facilitate direct exposure of luminal content to the gut-associated mucosal response and could potentially lead to the development of gastrointestinal inflammation/disease and/or nutrient deficiencies. More research is necessary for a complete understanding of the sex-specific differences along with the physiological and functional outcomes.
Animal study
The ileal tissues used for this research were taken from an earlier study that evaluated particulate and ionic forms of silver and particle size for differences in silver accumulation, distribution, morphology, and toxicity when administered daily by oral gavage to Sprague-Dawley rats for 13 weeks [16]. Test materials and dose formulations were characterized by transmission electron microscopy (TEM), dynamic light scattering, and inductively coupled plasma mass spectrometry (ICP-MS) as described earlier [16]. Seven-week-old male and female Sprague-Dawley rats (10 rats per sex per group) were randomly assigned to treatment: AgNP (10 or 110 nm) at 9, 18, and 36 mg/kg body weight (bw); silver acetate (AgOAc) at 100, 200, and 400 mg/kg bw; and controls. AgNPs (10 nm or 110 nm) or AgOAc were compared to rats gavaged with 2 mM sodium citrate/0.1% CMC or water/0.1% MC, respectively. At termination, complete necropsies were conducted; histopathology, hematology, serum chemistry, micronuclei, and reproductive system analyses were performed; and silver accumulations and distributions were determined [16]. A rat ileum section (2 cm) was collected from each rat at necropsy to determine the effects of test materials on the intestinal microbiome and gut-associated immune responses [17]. We showed that exposure to 10 nm AgNP at the lowest dose (9 mg/kg bw/day) was most detrimental for the intestinal microbial population and gut-associated immune responses [17]. Thus, for the present study, mRNA expression of the permeability-related genes and protein levels of TNF-α in the intestinal tissue were evaluated in the animals gavaged with the smallest size and the lowest dose [10 nm AgNP (9 mg/kg bw/day)]. The mRNA expression levels were further compared with those of animals given the largest size at the same dose [110 nm AgNP (9 mg/kg bw/day)] and AgOAc (400 mg/kg bw/day). AgNPs (10 nm or 110 nm) or AgOAc were compared to rats gavaged with 2 mM sodium citrate/0.1% CMC or water/0.1% MC, respectively, which served as controls. Each experimental and control group consisted of three individual animals of each sex. A detailed experimental protocol for RNA extraction was published earlier [17].
RNA extraction and qPCR analysis
The ileal tissues from Sprague-Dawley rats were thawed, then RNA was extracted using Trizol reagent (Molecular Research Center, Cincinnati, OH). Using the Turbo DNA-free kit (Life Technologies, Carlsbad, CA, USA), RNA was treated to remove any DNA contamination and then quantified using the NanoDrop ND-1000 (NanoDrop, Wilmington, DE). Clean RNA was reverse transcribed into cDNA with the Invitrogen SuperScript IV VILO kit (ThermoFisher, Carlsbad, CA, USA). cDNA was analyzed using RT² Profiler PCR Array Rat Cell Junction PathwayFinder (Qiagen, Valencia, CA, USA) plates in an ABI 7500 Real-Time PCR system (Life Technologies, Carlsbad, CA, USA). Amplification was conducted in the following manner: 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min. In addition, melt curve analysis was performed to verify the purity of each product. Each plate examined 84 unique genes for one sample, and three samples were analyzed for each experimental group. mRNA gene expression data analysis was performed using the Qiagen Data Analysis Center (https://www.qiagen.com/us/shop/genes-and-pathways/data-analysis-center-overview-page/). Data were normalized using the following housekeeping genes: beta actin (β-Actin), beta-2 microglobulin (β2M), hypoxanthine phosphoribosyltransferase 1 (Hprt1), and ribosomal protein large P1 (Rplp0). These housekeeping genes are constitutively expressed in all cells and are considered a reliable control in intestinal epithelium. Treatment groups exposed to either 10 nm AgNP or 110 nm AgNP were compared to the control group treated only with 0.1% carboxymethylcellulose (CMC). Furthermore, animals exposed to AgOAc were compared to the 0.1% methylcellulose (MC) control group. Statistical analysis was completed with a Student's t-test. A p value of < 0.05 was chosen a priori to signify statistical significance.
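To make the normalization step concrete, a simplified ΔΔCt-style sketch in Python follows; the Ct values are hypothetical placeholders, and the study itself used the Qiagen analysis portal rather than this code.

```python
# A simplified ddCt-style sketch of housekeeping-gene normalization;
# the Ct values are hypothetical placeholders.
import numpy as np
from scipy import stats

def delta_ct(target_ct, housekeeping_ct):
    return np.asarray(target_ct) - np.asarray(housekeeping_ct)

treated = delta_ct([22.1, 21.8, 22.4], [17.0, 16.9, 17.2])  # AgNP group
control = delta_ct([24.6, 24.9, 24.3], [17.1, 17.0, 16.8])  # vehicle group

ddct = treated.mean() - control.mean()
fold_change = 2.0 ** (-ddct)                 # relative expression, ~6-fold here
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"fold change = {fold_change:.1f}, p = {p_value:.3f}")
```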
Protein extraction and TNF-α measurement
Protein lysate from the intestine was prepared using a gentleMACS dissociator (Miltenyi Biotec Inc., Auburn, CA) as described earlier [48]. Levels of TNF-α were measured in the intestinal tissue lysate using the bead-based assay described by Gokulan and co-workers [48]. Statistical analysis for TNF-α was conducted to compare differences among the treatment groups using the Mann-Whitney test, and a p value < 0.05 was considered significant.
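A minimal sketch of the Mann-Whitney comparison follows; the cytokine values are hypothetical placeholders, not the study's measurements.

```python
# A minimal sketch of the Mann-Whitney comparison of TNF-alpha levels;
# the cytokine values are hypothetical placeholders.
from scipy.stats import mannwhitneyu

control_tnf = [12.1, 10.4, 11.8]  # hypothetical units per mg protein
agnp_tnf = [18.9, 21.3, 17.5]     # hypothetical units per mg protein

u_stat, p_value = mannwhitneyu(control_tnf, agnp_tnf, alternative="two-sided")
# With only three hypothetical values per group, the exact test cannot
# fall below p = 0.1; larger group sizes would be needed for significance.
print(f"U = {u_stat}, p = {p_value:.3f}")
```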
Table 1 Summary of all differentially regulated genes involved in the maintenance of intestinal epithelial cells integrity
This table shows significant (p ≤ 0.05) alterations across all treatment groups as compared to its respective controls
The molecular evolution of function in the CFTR chloride channel
Features of related ABC proteins provided a unique opportunity for emergence of novel channel function in CFTR by incremental evolution.
Introduction
The ATP-binding cassette (ABC) transporter superfamily includes many members of clinical relevance, such as the multidrug resistance proteins (MRPs) and other proteins involved in generation of antibiotic resistance, transport of a wide variety of substrates in pathogenic bacteria, and transport of bile acids, lipids, and lipopolysaccharides (Ford and Beis, 2019;Jetter and Kullak-Ublick, 2020). ABC transporter genes encode the largest family of transmembrane (TM) proteins among living organisms (Briz et al., 2019) and are expressed in all domains of life (Ford and Beis, 2019;Holland et al., 2003). Either function or dysfunction of ABC transporters is implicated in development or treatment of cancer (Briz et al., 2019;Nobili et al., 2020), neurological disorders (Jha et al., 2019;Sumirtanurdin et al., 2019), detoxification (Briz et al., 2019), visual function (Garces et al., 2018), and, among many other clinical presentations (Moitra and Dean, 2011), in cystic fibrosis (CF; Riordan et al., 1989). In CF, mutations in the gene encoding CFTR lead to loss of anion transport in a wide variety of epithelial tissues . In this review, we use the data generated from >30 yr of intensive structure-function study of CFTR and related proteins to propose and evaluate a potential route by which CFTR may have evolved unique function as a phosphorylation-regulated chloride channel. New insights are made possible by the advent of high-resolution cryo-EM structures of CFTR and the recent cloning and characterization of the evolutionarily oldest known orthologue of CFTR, from sea lamprey (Lp-CFTR; see below), which exhibits many functional differences from the human CFTR orthologue (hCFTR; Cui et al., 2019a).
ABC transporters use the energy of ATP binding and hydrolysis to accomplish the active import or export of various substrates across membranes (Rees et al., 2009). There are seven subfamilies of mammalian ABC transporters (ABCA, ABCB, ABCC, …, ABCG), of which the E and F subfamilies do not bear actual transport function (Dean et al., 2001; Ford and Beis, 2019). A new classification of the ABC transporter superfamily that is based on the transmembrane domain (TMD) fold has recently been suggested (Thomas et al., 2020). CFTR is denoted ABCC7 and is a member of type IV, respectively, according to these two classification schemes. CFTR bears ATPase activity like that of other ABCC subfamily members (Li et al., 1996; Stratford et al., 2007; Jordan et al., 2008), but biophysical methods have firmly established that CFTR functions as a phosphorylation-activated and ATP-gated ion channel (Anderson et al., 1991a; Anderson et al., 1991b; Bear et al., 1992; Berger et al., 1991; Sheppard et al., 1993), whereas its closest ABCC relatives function as multispecific exporters of organic anions (Jordan et al., 2008). CFTR may directly mediate the flux of glutathione (Gao et al., 1999; Kogan et al., 2003; Linsdell and Hanrahan, 1998), although CFTR-mediated active transport has not been shown, to our knowledge. Glutathione is transported by the close ABCC relatives ABCC1/MRP1 (Mao et al., 1999) and ABCC4/MRP4 (Choi et al., 2001; Ko et al., 2002; Kogan et al., 2003; Ritter et al., 2005; Serrano et al., 2006); previous analysis has identified ABCC4 as CFTR's closest relative (Jordan et al., 2008; see also Cui et al., 2019a). The domain organization of CFTR is similar to that of its closest relatives, the "short transporters" of the ABCC subfamily (Jordan et al., 2008; Ford and Beis, 2019; Srikant and Gaudet, 2019), with two nucleotide-binding domains (NBDs) that function in ATP binding and hydrolysis, and two TMDs, each containing six TM helices that comprise the substrate transport pathway (Fig. 1). However, unique to CFTR is an intracellular regulatory (R) domain that contains multiple consensus sites for phosphorylation by PKA (Sebastian et al., 2013).
The opening of CFTR may be simplified to involve three sequential steps that have been uncovered via a combination of functional and structural data. First, PKA binds to (Mihályi et al., 2020) and phosphorylates (Rich et al., 1991) the aforementioned R domain, which results in loss of inhibitory interactions between that domain and the rest of the channel protein. Second, ATP binds to two sites at the interface of the cytoplasmic NBDs, which promotes a stable NBD dimer (Mense et al., 2006; Vergani et al., 2005). Finally, the wave of conformational changes associated with ATP-induced dimerization of the NBDs is transmitted to the pore domain, resulting in pore opening (Rahman et al., 2013; Simhaev et al., 2017; Sorum et al., 2015; Strickland et al., 2019). In related ABC exporters, ATP-dependent dimerization of the NBDs drives an overall transition from inward- to outward-facing conformation of the TMDs; this function was coopted by CFTR to drive ATP-induced channel opening (Fig. 2). At the level of individual residues, there is high conservation with transporters among amino acids in CFTR that are proposed to stabilize the inward-facing (closed) conformation in the absence of ATP (Wang et al., 2010; Wei et al., 2014; Wei et al., 2016), suggesting conservation of motifs integral to energetic signaling (Wang et al., 2014b; Wei et al., 2014; Wei et al., 2016). The close proximity of intracellular loops 2 and 4 (ICL2 and ICL4, respectively; Doshi et al., 2013; Wang et al., 2014b), constriction of the intracellular vestibule (Bai et al., 2011), and dilation of the extracellular vestibule, relative to the closed state, are all associated with channel opening (Beck et al., 2008; Infield et al., 2016; Norimatsu et al., 2012b; Rahman et al., 2013; Strickland et al., 2019). The CFTR pore opens in stages, requiring the sequential breaking and forming of intraprotein residue-residue interactions (Cui et al., 2014; Rahman et al., 2013), resulting in two subconductance states in addition to the full-conductance state (Gunderson and Kopito, 1995; Zhang et al., 2005a; Zhang et al., 2005b; Fig. 3). Using a particularly informative cysteine mutant at the outer vestibule, R334C-CFTR, the McCarty laboratory found that transitions between these subconductance states are highly dependent upon experimental conditions; for example, closing transitions almost always start from the s2 state in the presence of ATP, and transitions from s2 to f never occur in channels bound with the poorly hydrolyzable ATP analogue AMP-PNP (see also Langron et al., 2018), suggesting that this transition requires hydrolysis of nucleotide at the NBDs (Zhang et al., 2005a; Zhang et al., 2005b). Subconductance states are evident in recordings of WT CFTR from membrane patches and planar lipid bilayers, depending on experimental conditions, indicating that these represent inherent steps in gating of the channel pore (Gunderson and Kopito, 1995). In WT-hCFTR, this open pore is quite stable and does not close until ATP is hydrolyzed at the NBDs (Baukrowitz et al., 1994). Note that because CFTR displays three types of gating in one channel (phosphorylation-mediated, ligand-mediated, and pore-mediated gating), it serves as an exemplary target for studying the evolution of functional mechanisms within a single membrane protein.
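The three-step opening scheme and the staged pore openings lend themselves to a small kinetic state machine. The sketch below is a toy continuous-time Markov model of that sequence; the states are a simplification (one pooled subconductance level, no dephosphorylation step) and every rate constant is invented for illustration, so it should be read as a cartoon of the gating logic, not a fitted model.

```python
# Toy continuous-time Markov model of CFTR gating: PKA phosphorylation
# -> ATP binding/NBD dimerization -> staged pore opening -> closure on
# ATP hydrolysis. All rate constants (per second) are invented.
import random

RATES = {
    ("C_dephos", "C_phos"):  0.5,   # PKA phosphorylation of the R domain
    ("C_phos",   "C_dimer"): 1.0,   # ATP binding + NBD dimerization
    ("C_dimer",  "O_sub"):   2.0,   # initial pore opening (subconductance)
    ("O_sub",    "O_full"):  5.0,   # step to full conductance
    ("O_full",   "C_phos"):  0.4,   # hydrolysis -> dedimerization/closure
}

def gillespie(state="C_dephos", t_end=20.0):
    """Yield (time, state) pairs from a simple stochastic simulation."""
    t = 0.0
    while t < t_end:
        moves = [(dst, k) for (src, dst), k in RATES.items() if src == state]
        if not moves:
            break
        total = sum(k for _, k in moves)
        t += random.expovariate(total)            # exponential waiting time
        r, acc = random.uniform(0, total), 0.0
        for dst, k in moves:                      # pick next state by weight
            acc += k
            if r <= acc:
                state = dst
                break
        yield t, state

for t, s in gillespie():
    print(f"{t:6.2f} s  ->  {s}")
```

Trajectories from this toy model dwell in the full-conductance state until the hydrolysis-coupled closing step fires, mirroring the stability of the open pore noted above.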
Figure 2. Modification of ATP-dependent transport activity in ABC transporters led to channel behavior, coopting the conformational changes necessary for unidirectional substrate transport in common ABC transporter systems. CFTR evolved features that break the alternating access cycle (solid-line arrows), enabling it to be open at both ends (box). Color scheme for major domains (again, lacking the R domain) is the same as in Fig. 1.
Figure 3. Gating scheme for CFTR. Prephosphorylated channels are shown in the membrane (gray slab) with two TMDs (brown and dark blue) and two NBDs (green and light blue), with ATP (red circle) and ADP (yellow circle). ATP-dependent gating is shown as including NBD-mediated gating steps leading to pore gating between conductance levels. Here, we do not distinguish between s1 and s2 subconductance levels, because s1→s2 occurs very rapidly in WT-hCFTR.
Natural history of the CFTR channel in vertebrates
Given the structural conservation among CFTR and ABC exporters noted above, and functional conservation in terms of ATP dependence, how CFTR evolved to function as an anion channel regulating passive ionic diffusion has been an enduring question (Srikant, 2020; Srikant et al., 2020). Molecular evolution studies are facilitated by the availability of many orthologues for the protein/gene of interest, spanning as much of the evolutionary record as possible. Currently, ∼300 CFTR orthologues are included in GenBank/UniProt, although not all of these are represented by expressible cDNA clones. Until very recently, the oldest CFTR orthologue known was from the dogfish shark, arising ∼150 million yr ago (MYA; Fig. 4; Marshall et al., 1991); this orthologue bears functional characteristics similar to those of hCFTR. However, reasoning that the identification of an earlier CFTR orthologue with altered structure/function would provide novel insight into the evolution of epithelial anion transport, the Gaggar and McCarty laboratories recently led an effort to clone and characterize Lp-CFTR (Cui et al., 2019a), which arose ∼550 MYA (Smith et al., 2013). The identification of a CFTR orthologue in the jawless vertebrates establishes that CFTR exists across all vertebrates, predating the divergence of jawed and jawless vertebrates at the end of the Cambrian Period ∼488 MYA. Sequence analysis indicates 46% sequence identity and 65% sequence similarity between Lp-CFTR and hCFTR, which is much lower than that among jawed vertebrate CFTRs (jv-CFTRs), and includes surprising divergence in functionally relevant motifs. Accordingly, Lp-CFTR differs from hCFTR in multiple functional characteristics (Table 1). The availability of this new orthologue thus provides the earliest evolutionary evidence of CFTR and lends insight into changes in gene and protein structure that underpin evolution of function from transporter to "optimized" anion channel. One important point to note in this respect is that, although the sea lamprey represents an evolutionary ancestor, it is also, of course, a currently living organism that may have undergone additional adaptation to its environment after the split with jawed vertebrates (Fig. 4). Thus, it cannot be automatically assumed that every position in CFTR that is unique in sea lamprey represents transitional change in the development of regulated channel activity. A good example in this regard is that of F508 in hCFTR, which is conserved across multiple ABC proteins but is leucine in lamprey (Cui et al., 2019a). Sorum et al. (2017) showed that replacing F508 with L in hCFTR significantly reduced its open probability. All known CFTRs other than Lp-CFTR and all known human ABCCs have F at this position, where the aromatic side chain is necessary for stabilizing the outward-facing state (Cui et al., 2006), so finding that this is substituted by a nonaromatic side chain in Lp-CFTR is mechanistically interesting and may represent a species-specific adaptation (Cui et al., 2019a).
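Identity and similarity figures such as the 46%/65% quoted above come from a pairwise alignment. A minimal Biopython sketch of that computation is shown below; the gap penalties, the convention of counting positive BLOSUM62 scores as "similar," and the placeholder sequences are our assumptions, not the published pipeline.

```python
# Sketch of percent identity/similarity from a global pairwise protein
# alignment. Real Lp-CFTR and hCFTR sequences would come from UniProt;
# the short strings at the bottom are placeholders.
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score, aligner.extend_gap_score = -11, -1

def identity_similarity(seq_a, seq_b):
    aln = aligner.align(seq_a, seq_b)[0]
    blosum = aligner.substitution_matrix
    ident = simil = length = 0
    for a, b in zip(aln[0], aln[1]):        # aligned rows; '-' marks gaps
        length += 1
        if a == "-" or b == "-":
            continue
        ident += a == b
        simil += blosum[a, b] > 0           # positive score => "similar"
    return 100 * ident / length, 100 * simil / length

print(identity_similarity("MKTAYIAKQRQISFVK", "MKSAYLAKQKQLSFVK"))
```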
Below, we identify several potential routes by which CFTR evolved regulated channel behavior. We propose that many features shared among bona fide ABCC proteins and present in recent ABCC ancestors of CFTR provided a unique opportunity for emergence of novel channel function by incremental evolutionary changes.
Molecular evolution of channel function
Construction of an anionic pore from an anionic substrate pathway
Both the passive conduction of anions by CFTR and the unidirectional transport of highly structurally diverse organic anions by its ABCC relatives (Sauna et al., 2004) are accomplished by pathways through the TMDs. Therefore, divergence in these pathways would be expected to most closely reflect the principal difference between channels and transporters: channels contain a pore that allows uninterrupted permeation across the plasma membrane, a violation of the "alternating access" mechanism of transporters (Fig. 2; Bai et al., 2011; Gadsby, 2009). This divergence would be accomplished by evolutionary changes distributed broadly through the TMDs, as suggested by a recent study of mutations that alter substrate specificity in a fungal pheromone transporter (Srikant and Gaudet, 2019; Srikant et al., 2020). In formation of the CFTR chloride channel, this would require both degradation of the "gates" seen in ABC transporters and stabilization of an open pore conformation (Bai et al., 2011). The relationship between substrate binding and opening/closure of these gates, relevant to establishing the occluded state in transporters, may remain in CFTR in a vestigial state, as evidenced by reports that permeating anions may affect gating transitions (Sorum et al., 2015; Yeh et al., 2015; Zhang et al., 2000; Zhang et al., 2002). Understanding how the CFTR pore evolved requires the integration of functional and structural information. Early 2-D electron crystallography of hCFTR at low resolution (Rosenberg et al., 2004; Rosenberg et al., 2011) confirmed the general ABC-like architecture of CFTR predicted in the initial gene discovery study (Riordan et al., 1989). In addition, several homology models of CFTR were developed using structures of related ABC transporters as a template. These studies contributed to the understanding of the molecular interface encompassing the most common CF-causing mutation (ΔF508; Mornon et al., 2008; Serohijos et al., 2008), as well as several details relating to the conformational transitions underlying CFTR gating (Corradi et al., 2015; Dalton et al., 2012; Furukawa-Hagiya et al., 2013; Mornon et al., 2015; Mornon et al., 2009; Rahman et al., 2013; Strickland et al., 2019). However, the disparity between the wide variety of substrates of nonchannel ABC transporters and the chloride channel function of CFTR resulted in intrinsically limited confidence in these homology models, at least with respect to the TMDs.
In the last 5 yr, eight structures of detergent-solubilized CFTR from three orthologues have been released from two laboratories in a large range of resolutions, all solved by single-particle cryo-EM (Table 2 and Fig. 5).
The first structures were of the ATP-free, dephosphorylated zebrafish CFTR (zfCFTR) in inward-facing conformation at a reported resolution of 3.7 Å and, under the same conditions, hCFTR at a reported resolution of 3.9 Å. In both structures, the NBDs were of significantly lower resolution than the rest of the protein, and thus crystal structures of exogenous NBDs were used to construct the final models (Zhang and Chen, 2016). Subsequently, the structures of phosphorylated, ATP-bound, hydrolysis-deficient mutants of zfCFTR and hCFTR in the outward-facing state were resolved at reported resolutions of 3.4 Å and 3.2 Å, respectively (Zhang et al., 2018). In addition to revealing a structural motif unsuspected for CFTR (the lasso motif found in other ABCC transporters, e.g., SUR1, SUR2, and MRP1, in which the N-terminus loops into the lipid bilayer; Fig. 1 A), these CFTR structures exhibited TM helix positioning and secondary structure that may be unique to CFTR among the ABCs. Of note, TM7 and TM8 are rearranged such that the top-down TM helix symmetry of most ABC transporters is broken. There are also kinks in the TM8 and TM5 helices in approximately the same vertical position. We note that two structures from recombinant thermostabilized chicken CFTR (chCFTR), one in dephosphorylated conditions with ATP present (resolution, 4.3 Å) and one in phosphorylated conditions with ATP present (resolution, 6.6 Å), show TM8 as fully helical and lack the rearrangement of TM7 and TM8, instead positioning TM7 nearly orthogonal to the fatty acid tails of the lipid bilayer (see Fig. 5; Fay et al., 2018).
The positioning of TM8 in the Chen structures has been supported by functional evidence suggesting that some residues of TM8 line the CFTR channel pore (Negoda et al., 2019). The unwound portion of TM8 has been proposed by the Chen laboratory to underlie CFTR's unique channel function, and molecular dynamics studies suggest that this unwinding would be maintained in a lipid bilayer (Corradi et al.). In the nearly open hCFTR structure with ATP bound (PDB accession no. 6MSM), the oppositely charged ends of these residues (E873 and R933) essentially overlap. It is very interesting to note that R933 is conserved within CFTR and ABCC4 orthologues among both jawed and jawless vertebrates. However, E873 is conserved within jawed vertebrates but is Q in both Lp-CFTR and all ABCC4s, although this assignment must remain tentative due to the poor alignment between CFTR and ABCC4 sequences in TM7.
Within the unwound stretch of TM8 itself, sequences are poorly conserved even within the CFTR and ABCC4 branches.
Importantly, an open structure of CFTR with a fully conducting ion pore has yet to be published. Currently, all structures have been determined with CFTR in detergent; additional structures of CFTR in a lipidic environment may be needed to elucidate the fully conducting ion pathway as well as to understand the complex conformational transitions between open and closed states. Regardless of these considerations, these structures can certainly be used to spatially locate amino acids that have been implicated in CFTR channel function. In aid of this, significant effort has been expended to functionally map the chloride conduction pathway through CFTR. Many studies have mutated putative pore residues and characterized channel behavior and modulation (McCarty et al., 1993; McDonough et al., 1994; Tabcharani et al., 1997). To identify explicitly "pore-lining" residues, several groups have employed the substituted cysteine accessibility method. This approach probes the environment of specific residues by mutating them to cysteine and characterizing their reaction to sulfhydryl-specific chemicals (Karlin and Akabas, 1998).
In the process of going through the channel to exit the cell, the chloride ion first encounters highly conserved basic residues in the ICLs, including K190, R248, R303, K370, R1030, K1041, and R1048. These residues are proposed to play roles in attracting chloride ions into the pore because charge-eliminating mutations reduce single-channel conductance (Aubin and Linsdell, 2006; El Hiani and Linsdell, 2015; Zhou et al., 2008). Considering that they mediate anion conduction, it is initially surprising that this group of residues is very highly conserved in transporter ABCCs: all seven residues analogous to those listed above are basic in ABCC4, and most (five of seven) are basic in ABCC5. To our knowledge, the effect of mutations at these positions on the function of ABCC4 or ABCC5 has not been directly tested. However, functional studies of MRP1 (ABCC1) have specifically implicated several basic residues in analogous regions in the binding of organic anionic substrates (Conseil et al., 2006; Haimeur et al., 2004) that are transported by the majority of ABCCs, including ABCC4 and ABCC5 (Jansen et al., 2015; Ritter et al., 2005). These data are intriguing because they suggest that one way in which CFTR evolved chloride channel activity was to use residues already functionally important in the transport of organic anionic substrates and repurpose them toward the novel function of conducting inorganic anions through the channel pore. In further support of this, several substrates of ABCC transporters inhibit CFTR by blocking the pore from the intracellular side (Linsdell and Hanrahan, 1999). Hence, these residues may contribute to a vestigial binding site for these substrates within CFTR. Another intriguing possibility is that ABCC4 and ABCC5 may allow the conductance of chloride along with their traditional substrates during transport, in a manner akin to the leak current associated with the function of neurotransmitter transporters (Fairman et al., 1995; Sonders and Amara, 1996; Wadiche et al., 1995). Such a substrate-induced current has not yet been measured from cells expressing ABCC4 or ABCC5, although this would be expected to be of very low amplitude (due to the slower nature of transporter function) and would likely be challenging to measure because substrate binds intracellularly in these proteins.
Strikingly, the pore-lining residues of several TMs are highly conserved between CFTR and ABCC4; for example, in TM1, six of seven pore-lining residues in CFTR are identical in ABCC4. Regarding this conservation, TM6 (see region bounded in red in Fig. 6) is an outlier, both in terms of the number of biochemically divergent pore-lining residues and in the sum of Grantham scores (incorporating differences in composition, polarity, and molecular volume; Grantham, 1974) calculated to gauge evolutionary distance between consensus amino acids of CFTR and ABCC4 sequences from jawed vertebrates (Table 3). In addition, the substituted cysteine reactivity pattern for the extracellular end of TM6 is anomalous for an α-helix in a membrane protein; the stretch of residues from L331 to V345 is nearly uninterrupted in terms of accessibility to membrane-impermeant thiol-directed reagents applied extracellularly (Alexander et al., 2009; Bai et al., 2010; Norimatsu et al., 2012a), whereas residues F337 through V345 exhibit a helical pattern of modification by MTS reagents applied intracellularly (Bai et al., 2010; El Hiani and Linsdell, 2010). This also contrasts with better-conserved helices such as TM1 and TM11, wherein reactivity follows a helical periodicity (Table 3).
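The Grantham score referenced here is a fixed distance per amino acid pair, combining composition, polarity, and molecular volume, so the per-helix figures in Table 3 reduce to a sum over aligned consensus positions. The sketch below illustrates that bookkeeping; only four pairs from Grantham's (1974) matrix are reproduced, and a full analysis would load all 190.

```python
# Sketch of the per-helix divergence score used in Table 3: the sum of
# Grantham distances over aligned consensus residues. Only four entries
# from Grantham (1974) are reproduced here for brevity.
GRANTHAM = {
    frozenset("RK"): 26,   # arginine vs. lysine (conservative)
    frozenset("DE"): 45,   # aspartate vs. glutamate
    frozenset("LI"): 5,    # leucine vs. isoleucine (smallest distance)
    frozenset("CW"): 215,  # cysteine vs. tryptophan (largest distance)
}

def divergence_score(consensus_a, consensus_b):
    """Summed Grantham distance between two aligned consensus strings."""
    total = 0
    for a, b in zip(consensus_a, consensus_b):
        if a != b:
            total += GRANTHAM[frozenset(a + b)]
    return total

# Identical residues contribute 0; a single R->K difference adds 26.
print(divergence_score("RDLC", "KDLC"))  # -> 26
```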
Divergence in TM6, a highly discriminatory region of the CFTR pore (McCarty and Zhang, 2001), may play important roles in neofunctionalization toward channel activity while retaining glutathione transport capacity (Kogan et al., 2003). Divergent residues such as R334 in TM6 also play important enough roles in the electrostatic attraction of Cl− and in pore stability (Zhang et al., 2005b) that their mutation causes CF (Sheppard et al., 1993).
Figure 6. (A) hCFTR, showing major domains, with sections of non-pore-lining helices removed in order to visualize the chloride ion permeation pathway. Dark blue residues, identical between jawed vertebrate consensus CFTR and ABCC4; black residues, biochemically similar; magenta, biochemically divergent. The highly divergent pore-lining TM6 is bounded in red. (B) hCFTR (PDB accession no. 6MSM) is again shown, highlighting a lateral portal proposed to enable unique chloride channel activity among ABCCs. Inset is a close-up view of a kink in TM6. P355 is conserved with ABCC4, whereas R352 and Q353 are divergent.
How may this divergence be responsible for the structural changes necessary for the development of ion channel activity? First, divergence in TM6 may play a central role in the degradation of an intracellular transporter gate. In the human and zebrafish ATP-bound CFTR cryo-EM structures (PDB accession nos. 6M2M and 5W81), the intracellular region of TM6 is subtly kinked outward (Fig. 6 B), as opposed to being curved but tightly packed in ABCC1, the closest relative to CFTR for which a structure exists. It has been proposed that this change may have created an aqueous "portal" that contributes to the ion permeation pathway. Both functional and structural studies support the importance of these changes (El Hiani and Linsdell, 2015; El Hiani et al., 2016; Li et al., 2018; Zhang et al., 2017). Sequence comparisons in this region reveal that a proline was already present in this region in an ancestral ABCC. In the place of conserved hydrophobic residues in ABCC4, CFTR has hydrophilic residues in this region, including R352 and Q353. These residue changes may be responsible for fundamentally altering the interaction of TM6 with surrounding helices, ultimately contributing to the degradation of the intracellular gate. Notably, the Lp-CFTR sequence uniquely contains a serine residue analogous to position 353.
Second, divergence in the TMDs also apparently enabled the formation of several intraprotein interactions that stabilize the open CFTR pore, which would be antithetical to the rapid transitions in conformation of the substrate binding pocket in a transporter undergoing alternating access. Previously, to identify important loci of divergence between CFTR and transporters of the ABCC subfamily, the McCarty laboratory performed type II divergence analysis between CFTR and ABCC4 sequences (Jordan et al., 2008). This approach identified residues maximally conserved within groups and biochemically divergent between groups. Type II divergence is exemplified by residue positions within an alignment that (1) are completely conserved within paralogous groups and (2) have amino acids with biochemically different properties between paralogous groups (e.g., acidic charge versus basic charge; Gu, 1999; Gu, 2001). The concept as applied here is that use of type II divergence analysis would identify the specific domains and residues most likely to be involved in the evolutionary transition from transporter activity (ABCC4) to channel activity (CFTR). In this study, we found that two salt bridges (Fig. 7) that stabilize the open pore architecture of CFTR (R347-D924 [Cotten and Welsh, 1999] and R352-D993) consist of one residue that is highly conserved between CFTR and ABCC4 (R347 in TM6 and D993 in TM9) and one that is type II divergent (D924 in TM8 and R352 in TM6). Interestingly, both interactions include residues mutated in CF disease (Jordan et al., 2008). Here we note that in both of these salt bridge interactions, the residue biochemically conserved between CFTR and ABCC4 is divergent in ABCC5. Thus, in each pair, the first residue likely emerged in a common ancestor of CFTR and ABCC4 after divergence from ABCC5, thereby providing the basis of a salt bridge when the other residue subsequently emerged in CFTR (Fig. 7). For the R352-D993 pair, the evolution of R352 from divergent hydrophobic residues in the ancestors was highly adventitious because it appears to have simultaneously contributed to the formation of a pore-stabilizing salt bridge and the destabilization of the secondary structure of TM6 that potentially contributed to a cytoplasmic gate (see above). Similar evolutionary pathways may have been at play with interactions involving charged residues in extracellular loop 1, such as R117 (Cui et al., 2014). Of these, it is notable that R117 is not found in Lp-CFTR, where it is instead a hydrophobic residue as in ABCC4 and ABCC5. Thus, it is likely that additional residues, such as R117, emerged late in evolution to stabilize the pore in jv-CFTR. The existence of high-resolution structures for hCFTR in closed and nearly open states will facilitate the identification of other intraprotein interactions and allow us to ask whether these residues exhibit evolutionary patterns across species. Testing of the above will require structural and functional interrogation of CFTR transporter chimeras.
Table 3 (fragment). Lists of compared residue positions per TM helix (e.g., TM3: 191, 192, 193, 194, 195, 196, 197, 199, 200, 203, 205, 207, 211, 213; another helix: 1127, 1129, 1131, 1132, 1134, 1135, 1137, 1138, 1139, 1140, 1141, 1142, 1144, 1145, 1147, 1148, 1150, 1152, 1156). Italics = identical; underlined = divergent; unformatted = similar. A higher Grantham score indicates less conservation.
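The type II divergence screen described above reduces to a per-column test over two aligned paralog groups: complete conservation within each group, and different biochemical classes between groups. A minimal sketch follows; the four-way class map is a coarse illustration rather than the scheme of Gu (1999, 2001).

```python
# Sketch of a type II divergence screen over two aligned paralog groups
# (e.g., CFTR orthologues vs. ABCC4 orthologues). The biochemical class
# map is a coarse illustration only.
CLASS = {**{aa: "hydrophobic" for aa in "AVLIMFWC"},
         **{aa: "polar" for aa in "STNQYGP"},
         **{aa: "acidic" for aa in "DE"},
         **{aa: "basic" for aa in "KRH"}}

def type2_divergent_columns(group1, group2):
    """Return 0-based alignment columns that are fully conserved within
    each group but of different biochemical class between groups."""
    hits = []
    for i in range(len(group1[0])):
        col1 = {s[i] for s in group1}
        col2 = {s[i] for s in group2}
        if len(col1) == len(col2) == 1:              # conserved within groups
            a, b = col1.pop(), col2.pop()
            if "-" not in (a, b) and CLASS[a] != CLASS[b]:
                hits.append(i)                       # divergent between groups
    return hits

# Toy alignment: position 1 (D vs. R, acidic vs. basic) is type II divergent.
print(type2_divergent_columns(["ADG", "ADG"], ["ARG", "ARG"]))  # -> [1]
```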
Evolution of CFTR regulation by phosphorylation of its R domain
CFTR is activated by PKA-mediated phosphorylation at consensus sites in the R domain, a functional linker encoded between NBD1 and TMD2 (Fig. 1; Ford et al., 2020; Hunt et al., 2013). The structural mechanism for the phosphorylation-mediated regulation of CFTR by this intrinsically disordered domain is poorly understood but evidently involves dynamic, phosphosensitive interactions between R domain helices and nearby domains of CFTR, including NBD1 and NBD2 (Baker et al., 2007; Bozoky et al., 2013a; Bozoky et al., 2013b; Chappe et al., 2005). The R domain also has been suggested to plug the channel pore in a phosphorylation-dependent manner (Meng et al., 2019). Interestingly, although the fully dephosphorylated R domain precludes ATP-induced channel opening (Rich et al., 1991), biophysical studies strongly suggest that channel activity depends on the degree of PKA-mediated phosphorylation, in a rheostat-like manner, and that these sites play specific roles in "graded" activation of the channel (Csanády et al., 2005a; Csanády et al., 2000; Csanády et al., 2005b; Wilkinson et al., 1997). The phosphorylation of ABC proteins other than CFTR has not been extensively studied; however, there is some evidence that several members of the superfamily, including P-glycoprotein (ABCB1; Mellado and Horwitz, 1987), are phosphorylated in cells (see Stolarczyk et al., 2011 for a comprehensive review on this subject). There is evidence that several ABCB and ABCC proteins are phosphorylated in a region connecting NBD1 and TMD2 (Ford et al., 2020; Mellado and Horwitz, 1987; Stolarczyk et al., 2011). However, there is no clear evidence that mutation or phosphorylation of this region significantly affects the function of these transporters, as it profoundly does in CFTR (Stolarczyk et al., 2011). Moreover, the relevant PKA consensus sites in CFTR's R domain are located in an ∼200-aa region that is absent in other ABC transporters (including other ABCCs; Sebastian et al., 2013). Based on data available at the time, the McCarty and Jordan laboratories suggested that this region arose in CFTR specifically as the result of the loss of an RNA splice site at the end of exon 14 in the lineage between jawless and jawed vertebrates (Sebastian et al., 2013). However, revised sea lamprey gene assemblies (see https://genomes.stowers.org/organism/Petromyzon/marinus and Smith et al., 2018) no longer indicate this splice junction, which explains the presence of an R domain in the cloned sea lamprey sequence (Cui et al., 2019a). The unique functional phosphoregulation of CFTR by the R domain may directly relate to its identity as the sole ion channel in the ABC superfamily. In the case of many bona fide ABC transporters, the activity of the protein, including hydrolysis of ATP (Senior et al., 1998), is highly dependent on the availability of substrates. These substrates, which include xenobiotics (Chen and Tiwari, 2011), are typically present at low concentrations in the cell, resulting in low transporter-associated ATPase activity. By contrast, CFTR always has access to chloride, and binding of chloride is not required for ATPase activity in the same way that binding of substrate is required for ATPase activity in other ABC superfamily members.
Because ATP is present in the cell at concentrations well above the half-maximal effective concentration for channel opening (Csanády et al., 2000), without some other means of regulation CFTR would allow unproductive high ATPase rates and the uninterrupted flow of chloride down the electrochemical gradient, in either direction with respect to the cell. By coupling the R domain-mediated regulation of the channel to PKA-mediated phosphorylation, the CFTR-expressing epithelial cell ensures that chloride is brought to the appropriate electrochemical potential by the coordinated action of basolateral chloride transporters, which are also regulated by PKA (McCann and Welsh, 1990), and CFTR-mediated permeability in the apical membrane.
The overall sequence of the R domain is poorly conserved across CFTR orthologues, but the PKA consensus sites shown to be functionally relevant in hCFTR are highly conserved across jv-CFTRs (Sebastian et al., 2013). However, half of the consensus dibasic PKA sites are missing in Lp-CFTR (Fig. 8); furthermore, some of those that are found in both human and lamprey orthologues exhibit substantial divergence in the context surrounding the phosphorylated serine, which may contribute to differences in the rate of phosphorylation or to changes in conformation after phosphorylation. This is consistent with the observation that Lp-CFTR exhibits a greatly slowed response to PKA-induced activation (Cui et al., 2019a). The additional sites may have evolved in jv-CFTRs, after the split from jawless vertebrates, as a means of fine-tuning the graded activation intrinsic to hCFTR. Future work may explore the functional effects of transplantation of PKA recognition motifs and surrounding primary sequence from hCFTR into Lp-CFTR.
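The dibasic PKA consensus sites compared in Fig. 8 follow the textbook R-R/K-X-S/T pattern, so a first-pass scan of any orthologue's sequence can be written as a regular expression; the sketch below is such a scan, with an invented input sequence.

```python
# Sketch: locate candidate dibasic PKA consensus sites (R-R/K-X-S/T) in
# a protein sequence. A real analysis would then compare hit positions
# across aligned orthologues, as in Fig. 8.
import re

PKA_MOTIF = re.compile(r"R[RK].[ST]")

def pka_sites(seq):
    """Return (1-based position of the phospho-S/T, motif) for each hit."""
    return [(m.start() + 4, m.group()) for m in PKA_MOTIF.finditer(seq)]

print(pka_sites("MNRRASLEEKRKQSFD"))  # invented sequence -> two candidate sites
```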
An inherited ATPase defect intrinsic to CFTR
NBD-mediated gating kinetics
In ABC transporters, ATP binds at two composite sites (ABS1 and ABS2) formed by conserved motifs from NBDs positioned in a head-to-tail arrangement. Fig. 9 A depicts a simplified model of these sites, wherein each ABS is shown to consist of the so-termed Walker A, Walker B, and H loop regions from one NBD and the ABC signature and D loops from the other NBD. ATP binding to an ABS promotes NBD dimerization, which "powers" active transport by driving conformational changes in the TMDs (Rahman et al., 2013; Strickland et al., 2019); in ABC exporters, this flips the TMD conformation from inward to outward facing (Rees et al., 2009). ATP hydrolysis at these sites leads to dissociation of the NBD dimer, which allows the readoption of the inward-facing conformation to bind new intracellular substrates, although there is significant disagreement regarding the degree of dissociation undergone at the NBDs to accomplish this (George and Jones, 2012; Hohl et al., 2014; Puljung, 2015; Zoghbi et al., 2012). Structural and functional studies (Chaves and Gadsby, 2015) support the idea that CFTR uses the same overall scheme, wherein opening involves binding of ATP to both ABSs and dimerization of the NBDs, whereas closing results from ATP hydrolysis, which promotes the subsequent dedimerization of the NBDs.
Figure 8. Conservation among CFTR orthologues in PKA consensus sites in the R domain. Primary sequences equivalent to each of the eight consensus sites for PKA-mediated phosphorylation found in hCFTR are shown for mouse, chicken, frog, shark, and lamprey. Numbering for consensus sites at the top of the table refers to the hCFTR orthologue. Residues bearing divergence from the consensus dibasic sequence are shown in bold and underlined. Other variability in the primary sequence surrounding the target serine also is evident, which may contribute to altered response to phosphorylation.
Figure 9. Evolutionary divergence within the NBD1-NBD2 interface. (A) Schematic representation of a prototypical head-to-tail NBD dimer sandwich and the interfacial regions that interact with ATP. (B) Alignment of several relevant regions of the NBDs from CFTR and more distant homologues. Numbering is of hCFTR NBD1. Note that jv-CFTR represents the consensus sequence from CFTR from jawed vertebrates, whereas Lp-CFTR specifically refers to the sequence of Lp-CFTR. Significant ABCC- and CFTR-specific divergence is seen in ABS1, particularly in the NBD2 signature sequence, the NBD1 Walker B motif, and the NBD1 His region. To facilitate identification of differences, amino acids in the table are colored according to common chemical properties (charge, polarity, etc.). Note that the ABCC family shows divergence adjacent to the NBD1 Walker B loop that is integral to ABS1 at the position indicated by an asterisk.
Many ABC proteins feature homodimeric NBDs that together form two ABS sites with equivalent functions, but the monomeric ABCCs contain significant divergence in ABS1. A sequence alignment of the relevant motifs (Fig. 9 B) demonstrates major points of divergence as compared with P-glycoprotein (ABCB1), which has essentially homodimeric NBDs. Note that the ABCC family shows divergence adjacent to the NBD1 Walker B loop that is integral to ABS1, at the position indicated by an asterisk in Fig. 9 B. Here, a critical catalytic glutamate conserved in canonical ABS sites (Orelle et al., 2003) is substituted in most ABCCs with an aspartate or serine in NBD1, and the following alanine is substituted with a proline (Payen et al., 2003). In ABCC1, these two substitutions may be responsible for increased affinity for ATP and significantly slowed ATP hydrolysis at ABS1 (the so-called incompetent site) as compared with the canonical ABS2 site (the "competent" site; Gao et al., 2000; Hagmann et al., 1999; Hou et al., 2000; Payen et al., 2003; Qin et al., 2008). In addition, the NBD2 signature sequence contributing to ABS1 is F/LSVGQ in most ABCCs, as opposed to the canonical LSGGQ as in ABCB1; this also may impact affinity for ATP. In CFTR, where ATP hydrolysis at ABS1 is essentially absent (Aleksandrov et al., 2002; Basso et al., 2003), there is additional, lineage-specific divergence evident in these alignments. In NBD1, instead of the conservative ABCC aspartate substitution for the catalytic glutamate adjacent to the Walker B region (asterisked position noted above), all CFTRs have a serine residue (e.g., S573 in hCFTR). Additionally, the NBD2 signature sequence integral to ABS1 of CFTR is also unique among ABCCs. What purpose in CFTR may degeneration/divergence in the NBD dimer interface serve? As explained previously, the ABC transporter duty cycle requires the consumption of ATP. Adaptation of the cycle for optimal chloride channel activity would ideally allow a maximal amount of chloride to be diffused per ATP consumed. In this regard, it is highly advantageous that members of the ABCC subfamily of proteins harbor a degenerate ABS1, because any ion channel built on this scaffold would only consume one ATP molecule per gating cycle rather than two. This potential is generally borne out by biochemical studies. Recently developed spectroscopic methods for measuring ATP hydrolysis from model ABC transporters support the general inference that homodimeric transporters catalyze ATP at a significantly higher overall rate than heterodimeric transporters (Collauto et al., 2017). Specific to mammalian transporters, the absolute ATP turnover rate for hCFTR as calculated from channel closing rate is ∼0.5/s (Li et al., 1996), which correlates well with published rates from purified, detergent-solubilized protein (∼130 nmol/mg/min; Liu et al., 2017). This rate is roughly half that of the homodimeric P-glycoprotein expressed and purified similarly (∼230 nmol/mg/min in the presence of substrate; Kim and Chen, 2018).
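The comparison between the electrophysiologically derived turnover (∼0.5/s) and the biochemical rates (nmol/mg/min) rests on a unit conversion through the protein's molar mass. The arithmetic is sketched below; the ∼168-kD mass for hCFTR and ∼141-kD mass for P-glycoprotein are approximate values we assume for illustration.

```python
# Converting a specific ATPase activity (nmol ATP / mg protein / min)
# into a per-molecule turnover rate (ATP hydrolyzed per second).
def turnover_per_second(nmol_per_mg_per_min, mass_kda):
    mol_atp_per_s = nmol_per_mg_per_min * 1e-9 / 60   # per mg of protein
    mol_protein = 1e-3 / (mass_kda * 1e3)             # mol protein in 1 mg
    return mol_atp_per_s / mol_protein

print(turnover_per_second(130, 168))  # ~0.36/s for CFTR, near the ~0.5/s estimate
print(turnover_per_second(230, 141))  # ~0.54/s for P-glycoprotein (mass assumed)
```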
It is not yet well understood how additional divergence found in CFTR orthologues may contribute to any unique behavior(s). In all jv-CFTRs, the signature sequence in NBD2 is LSHGH, more divergent from consensus than ABCC homologues in its substitution of histidine for the C-terminal glutamine found in canonical ABSs (Fig. 9 B; Smith et al., 2002). Interestingly, uniquely among CFTRs, the NBD2 signature sequence from the Lp-CFTR orthologue retains this canonical glutamine (LSEGQ). Whether the unique composition of the CFTR ABS1 is necessary for normal gating or ATP hydrolysis is a question that needs further study using rigorous biochemical and electrophysiological methods. One intriguing explanation has been proposed on the basis of recent FRET experiments on ABCC1/MRP1 demonstrating important differences in its NBD dynamics as compared with CFTR. Electrophysiological data from CFTR suggest that ATP hydrolysis is quickly followed by dedimerization of the NBD heterodimer (Csanády et al., 2010). However, in MRP1, the post-hydrolytic NBD dimer is apparently much longer lived (Wang et al., 2020). Could CFTR-specific divergence in the NBD interface play a role in tuning CFTR gating, making it highly responsive to ATP hydrolysis at ABS2? Support for this possibility is found in a study demonstrating that mutating certain amino acids in the CFTR NBD interface to ABC transporter consensus results in a highly stable ATP-dependent dimer and prolonged open channel burst durations (Tsai et al., 2010).
Hypothesized route for the evolution of regulated channel activity in CFTR
How did CFTR evolve its indispensable channel function? Our analyses demonstrate that many of the amino acid residues and motifs that bestow on hCFTR its function and regulation were already present to different degrees in closely related but functionally divergent ancestors. Hence, it is possible to compare the sequence of CFTR with that of increasingly distant homologues, infer what features are common, and propose a chronology for the molecular evolution of CFTR function and its optimization (Fig. 10). From such analysis, we suggest that residues underpinning interdomain energetic signaling, degeneration of the ATPase activity in ABS1, and intracellular basic residues critical to future CFTR Cl− channel activity were present in a common ancestor of the ABCC family (Fig. 10, point 1). Following divergence from ABCC5, an ancestor of ABCC4 and CFTR retained these features and added to them; at this point, many residues that would eventually line and stabilize the Cl− channel pore of CFTR emerged, possibly in use to bind and transport anionic substrates (Fig. 10, point 2). A common CFTR ancestor accumulated critical channel-specific residues in TM6 and elsewhere, which led to secondary structure changes around a conserved proline (P355 in CFTR) and pore-stabilizing salt bridges. Some degree of phosphoregulation was present as well (Fig. 10, point 3). Finally, fine-tuning of channel regulation and pore architecture continued after the split between jawless vertebrate CFTRs and jv-CFTRs (Fig. 10, point 4), but was largely consolidated before significant additional speciation in jv-CFTRs. This timeline is ripe for exploration in functional experiments with mutagenesis guided by structural and bioinformatics analysis.
Translational relevance: Toward therapeutic development across ABC transporters
As discussed above, CFTR is clinically relevant to the pathogenesis of CF, an impactful genetic disease. The continued development of efficacious CFTR modulators requires a better understanding of the function of this channel. The modulators from Vertex, although highly efficacious, do not impact all patients with eligible CFTR genotypes, nor do they solve all of the problems in this multiple organ system disease or lead to long-term stabilization of lung function (Flume et al., 2018; Gauthier et al., 2020; Guimbellot et al., 2017; Konstan et al., 2017; Li et al., 2019; McKinzie et al., 2017; Moheet et al., 2021; Patel et al., 2020; Phuan et al., 2018), revealing a need to continue to study CFTR to develop new therapies (Davies et al., 2019; Grand et al., 2021; Veit et al., 2018). Understanding the nature of the stable open state may aid in the rational design of drugs that can lock mutant CFTR channels open, leading to increased Cl− secretion and amelioration of CF disease and potentially some forms of chronic obstructive pulmonary disease and other lung disorders (Raju et al., 2016; Solomon et al., 2016a; Solomon et al., 2016b). Conversely, overactivity of CFTR may contribute to polycystic kidney disease (Hanaoka et al., 1996) and secretory diarrhea, including cholera (Thiagarajah and Verkman, 2003). A better understanding of CFTR may lead to the design of clinically useful inhibitors to treat these secretory disorders. Comparative pharmacology is conceptually tangential to evolution of function, particularly for synthetic drugs that are not mimics of natural ligands that CFTR could have "evolved" to bind. That being said, an improved understanding of the structural relationships between groups of ABC transporters may be relevant to the investigation of the mechanisms of action of CFTR-targeted drugs discovered through high-throughput screening. In fact, distant CFTR orthologues and transporter homologues may assist in the elucidation of mechanisms and binding sites of the Food and Drug Administration-approved CFTR-directed therapeutic compounds, using approaches similar to those used to understand the action of CFTR inhibitors (Stahl et al., 2012). While data suggest that many pharmacological agents correct the folding of trafficking mutants of both CFTR (ABCC7) and P-glycoprotein (ABCB1; Loo et al., 2012), lumacaftor, which may bind MSD1 of CFTR (Loo et al., 2013), is unable to correct trafficking mutants of P-glycoprotein (Loo et al., 2012). The drug is, however, able to correct trafficking mutants of ABCA4 associated with macular degeneration (Sabirzhanova et al., 2015). VX-770/ivacaftor has been shown in some studies to potentiate (and therefore likely directly bind) CFTR from multiple species, including human, murine (i.e., in Cui et al., 2016; Cui and McCarty, 2015; but not in Van Goor et al., 2009; Bose et al., 2019), and Xenopus orthologues. Surprisingly, Lp-CFTR is not potentiated by VX-770 (Cui et al., 2019a); in fact, a small degree of inhibition was observed. Recently, the Chen laboratory solved a cryo-EM structure of CFTR in the presence of VX-770 at 3.3 Å resolution (PDB accession no. 6O2P) and identified residues contributing to the binding energy. This study revealed that VX-770 binds at a cleft formed by TMs 4, 5, and 8 deep inside the membrane core (see Fig. 5) at the interface between protein and the membrane lipid.
Whether this structure demonstrates the binding site responsible for therapeutic potentiation is currently unclear (Csanády and Töröcsik, 2019; Yeh et al., 2019), although the same site also coordinated another potentiator, GLPG1837. The conservation in this binding site is mixed; of the amino acids whose mutation strongly affects affinity, some are highly conserved across CFTRs and ABCC4s (e.g., R933 in hCFTR, but not S308), whereas others are conserved among CFTRs but not with ABCC4s (e.g., Y304), and some sites are uniquely divergent in Lp-CFTR (e.g., F931, a proline in lamprey).
Figure 10. ABCC subfamily dendrogram and proposed chronology of molecular evolution of CFTR function. (A) Dendrogram adapted from two previous studies on CFTR evolution (Jordan et al., 2008; Sebastian et al., 2013). Proteins discussed in this review are indicated with *. (B) Chronology of emergence of functional features of jv-CFTR, as supported by the analyses in this review. Ancestors labeled with circled numbers correspond to the dendrogram points in A.
A very recent study from the Bear laboratory (Laselva et al., 2021) explored VX-770 binding sites using photo-induced crosslinking. This study confirmed a position proximal to the site identified by the Chen laboratory, noted above, but also identified a site within the ICLs linking the TMDs to the NBDs. This second location, formed by residues in ICL4, was previously nominated as a VX-770 binding site by the observation that ICL4 was protected from hydrogen/deuterium exchange in the presence of drug (Byrnes et al., 2018). Note that ICL4 also is the portion of the TMDs that most closely approaches position F508, which is deleted in most CF alleles in North America (Mornon et al., 2008; Serohijos et al., 2008). Residues making the strongest contribution to binding energy at this second site include K1041, E1046, P1050, F1052, H1054, Y1073, and K1080. In their hands, mutation F1052A at the second site had a significantly (approximately fivefold) larger effect on VX-770 affinity than alanine mutations of aromatics within the first site. This site also is much closer to the NBDs and interestingly is adjacent to residues E543 and K968 (Fig. 11), which were previously identified as involved in signaling the state of NBD occupancy by ATP to the TMDs (Strickland et al., 2019; of note, K968 is type II divergent between CFTRs and ABCC4s, with the exception of Lp-CFTR, where the equivalent position bears a glutamine). Hence, this newly identified pocket may contribute to the mechanism by which VX-770 stabilizes the channel open state (Cui et al., 2019b; Langron et al., 2018). We note that all of the residues listed above that contribute to this second site are conserved in Lp-CFTR, which is not potentiated by VX-770 (Cui et al., 2019a), other than K1080 (a glutamine, in lamprey). A lack of functional potentiation is, however, at most indirect evidence of loss of binding. In fact, because a small degree of inhibition was observed, it is possible that the drug binds to a site or sites on Lp-CFTR similar to that on hCFTR but that the nature of the interaction is subtly altered by divergence in the site such that potentiation does not occur. Conceptual precedence for such a scenario may be found in the pharmacology of closely structurally related drugs that bind to similar sites on receptors but induce opposing functional outcomes, such as the dihydropyridine class of voltage-sensitive Ca2+ channel modulators (Zhao et al., 2019). The emergence of a biotinylated, photo-crosslinkable ivacaftor analogue (Laselva et al., 2021) is expected to significantly aid in the dissection of the effect of a given mutation on binding versus potentiation or inhibition.
Conclusion
There are many questions that have yet to be answered with respect to the structure-function relationship in CFTR and related transporters. Many of these questions now can be answered through the study of revertant mutants between groups, retracing a possible evolutionary path. The results of these studies have the potential to shed light on the structures of both channel and nonchannel ABC proteins and may reveal channel-specific features in CFTR that serve as levers for the pharmacological repair of mutant channels in patients with CF. Although this article focuses on only one member of the ABC transporter superfamily, CFTR (ABCC7), many others have been implicated in disease, including close relatives, such as P-glycoprotein (ABCB1) and MRPs 1, 4, and 5 (ABCC1, 4, and 5), which confer life-threatening resistance to therapeutics when overexpressed (Chen and Tiwari, 2011). The extent to which structural and functional information gained about one ABCC can be mapped to another is an important consideration in both the discovery and mechanistic understanding of therapeutics directed against these proteins. Looking forward, the study of the molecular evolution of function in ABC proteins may therefore lead to exciting advances in the pharmacological and structural understanding of these highly medically relevant proteins.
Figure 11. Residues contributing to the second potential binding site for VX-770 are located in a domain tightly linked to channel opening and to the most common mutation causing CF disease. Residues from Laselva et al. (2021) are mapped onto the 6MSM structure from the Chen laboratory. Purple, lasso domain; orange, TM10 and TM11, whose cytoplasmic tails comprise ICL4; blue, sites contributing to the VX-770 binding site; yellow, E543 and K968, identified by Strickland et al. (2019) as responsive to the occupancy of the NBDs by ATP; red, F508.
The Cost of Arbovirus Disease Prevention in Europe: Area-Wide Integrated Control of Tiger Mosquito, Aedes albopictus, in Emilia-Romagna, Northern Italy
Aedes albopictus (tiger mosquito) has become the most invasive mosquito species worldwide, in addition to being a well-known vector of diseases, with a proven capacity for the transmission of chikungunya and dengue viruses in Europe as well as the Zika virus in Africa and in laboratory settings. This research quantifies the cost that needs to be provided by public-health systems for area-wide prevention of arboviruses in Europe. This cost has been calculated by evaluating the expenditure of the plan for Aedes albopictus control set up in the Emilia-Romagna region (Northern Italy) after a chikungunya outbreak occurred in 2007. This plan involves more than 280 municipalities with a total of 4.2 million inhabitants. Public expenditure for plan implementation in 2008–2011 was examined through simple descriptive statistics. Annual expenditure was calculated to be approximately €1.3 per inhabitant, with a declining trend (from a total of €7.6 million to €5.3 million) and a significant variability at the municipality level. The preventative measures in the plan included antilarval treatments (about 75% of total expenditure), education for citizens and in schools, entomological surveillance, and emergency actions for suspected viremias. Ecological factors and the relevance of tourism showed a correlation with the territorial variability in expenditure. The median cost of one antilarval treatment in public areas was approximately €0.12 per inhabitant. Organizational aspects were also analyzed to identify possible improvements in resource use.
Introduction
Aedes albopictus (Ae. albopictus) is commonly known as the Asian tiger mosquito, and originates from South-East Asia. Over the last three decades, this insect has become increasingly widespread worldwide and is now considered the most invasive mosquito species [1], ranking in the top 100 invasive species of any kind [2]. It can currently be found in the temperate and tropical areas of Asia, most of the islands in the Pacific Ocean, South and Central Africa, South and North America, in addition to being found in all of Southern Europe [3][4][5][6]. The increasing international movement of people and goods has been the determinant for the global expansion of Ae. albopictus [5,7], which could be considered a negative consequence of international trade brought to previously unknown dimensions. The regional health authority (RHA) provides guidelines, coordinates the activities at a regional level through the local health authorities (LHAs), and co-finances the expenditure of municipalities.
This research evaluated the public costs related to the implementation of the Regional Plan between 2008 and 2011. The study collected data on the expenditure incurred by all the public administrations carrying out this plan, with the aim of assessing public spending in relation to some key indicators, analyzing differences in expenditure among municipalities, and examining correlations between the expenditure and relevant territorial variables. Despite the time elapsed since the period under analysis, the lasting scarcity of data and information about the costs to be supported by public health systems for area-wide integrated control of mosquitoes and other arbovirus vectors convinced the authors to publish this study.
Materials and Methods
The AW-IPM activities supported by the Regional Plan for tiger mosquito control in the examined period are listed in Table 1, with the indication of the financial contribution provided by the RHA. Most of the measures are implemented between May and early October, when biting and reproduction of tiger mosquitoes are more intensive in the region. Although the plan is hierarchically coordinated at the regional level, the participation of municipalities is not compulsory, and each may individually decide which activities will be carried out in its own territory, as well as the modalities of implementation. The consequence is an extremely varied and fragmented situation in which one municipality may directly perform a given activity, but also, and much more frequently, will contract it to external operators, either by calls for tender or by direct award procedures. The types of contractors vary from large public utility corporations owned by mixed public-private investors (in the region, these organizations generally result from the merging and opening of former municipal utilities to private investors) to small private businesses. The commitments of contractors may cover several activities, only one, or just some specific tasks. Subcontracting is also largely practiced, especially by the large contractors. The interplay of the contribution rules in Table 1 is illustrated with a short computational sketch after the table.
Table 1. Activities supported by the Emilia-Romagna (ER) Regional Plan for tiger mosquito control and type of financial contribution from the Regional Health Authority (2008-2011).
(a) Entomological surveillance: monitoring of the intensity of the tiger mosquito infestation through a network of about 2700 ovitraps distributed over the ER territory. RHA contribution: lump sum paid for each ovitrap check (supposed to cover 100% of the cost supported by municipalities) *.
(b) Regular anti-larval treatments (from May to October) of road drains in public areas. RHA contribution: variable percentage of the municipality expenditure, depending on the RHA budget remaining after payment of (a), (d), (f), and (h) **.
(c) Door-to-door anti-larval treatments in private areas. RHA contribution: the same as (b).
(d) Quality controls on the efficacy of the anti-larval treatments (b) in public areas ***. RHA contribution: 50% of the municipality expenditure.
(e) Information to citizens through various activities (information campaigns, free distribution of anti-larval products, inspections in private areas under request, etc.). RHA contribution: the same as (b).
(f) Information activities in primary schools ***. RHA contribution: lump sum paid for each class involved (supposed to cover 100% of the cost supported by municipalities).
(g) Other activities undertaken by municipalities ****. RHA contribution: the same as (b).
(h) In case of the detection of potentially viremic patients, a protocol activates emergency actions to reduce the possibility of epidemic outbreaks; this includes treatments against adult mosquitoes aimed at isolating the potential outbreak hotspots. RHA contribution: 100% of the municipality expenditure.
(i) Delivering of municipality ordinances requiring citizens to adopt good practices to prevent proliferation of tiger mosquitoes in private areas (courtyards, gardens, etc.). RHA contribution: no specific expenditure from public administrations.
* In the years 2008 and 2009, the ovitraps were checked between the end of May and early October with a frequency of once per week, and the lump sum provided by the Regional Health Authority as financial contribution (€3.5 per ovitrap check) was supposed to cover 50% of the cost of the monitoring activity. Since 2010, a technical change has allowed the monitoring to be performed by checking the ovitraps only once every two weeks [46], and the lump sum provided by the Regional Health Authority (€9 per ovitrap check) is supposed to cover 100% of the cost.
** The RHA contribution for the activities (b), (c), (e), and (g) was 18.24% of the municipality expenditure in 2008, 18.88% in 2009, 12.28% in 2010, and 11.64% in 2011.
*** Activity not included in the Regional Plan in the year 2008.
**** Only for the year 2008, this item included the expenditure for the census and cleaning of road drains and adulticide treatments in sensitive sites (public parks, school gardens, cemeteries, etc.). In 2009-2011, the item includes various other actions undertaken by municipalities and admitted to the RHA financial support on a case-by-case basis.
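Because the co-financing rules mix lump sums, a fixed 50% share, and a year-dependent residual percentage, the reimbursement owed to a municipality is easiest to see as a small computation. The sketch below encodes the 2010 rules from Table 1; the expenditure figures, the activity keys, and the per-class lump sum (which the text does not quantify) are invented for illustration.

```python
# Sketch of the RHA reimbursement calculation for one municipality under
# the 2010 rules in Table 1. Expenditure figures (EUR) are invented.
OVITRAP_LUMP_SUM = 9.0       # EUR per ovitrap check (2010 onward)
SCHOOL_LUMP_SUM = 150.0      # EUR per class -- hypothetical, not in the text
RESIDUAL_RATE_2010 = 0.1228  # share of expenditure for items (b),(c),(e),(g)

def rha_reimbursement(report):
    return (
        report["ovitrap_checks"] * OVITRAP_LUMP_SUM          # (a) lump sum
        + RESIDUAL_RATE_2010 * (report["larval_public"]      # (b)
                                + report["larval_private"]   # (c)
                                + report["citizen_info"]     # (e)
                                + report["other"])           # (g)
        + 0.5 * report["quality_controls"]                   # (d) 50% share
        + report["school_classes"] * SCHOOL_LUMP_SUM         # (f) lump sum
        + report["emergency_adulticide"]                     # (h) 100%
    )

example = {"ovitrap_checks": 120, "larval_public": 40_000.0,
           "larval_private": 6_000.0, "citizen_info": 3_000.0,
           "other": 1_000.0, "quality_controls": 2_000.0,
           "school_classes": 10, "emergency_adulticide": 0.0}
print(f"EUR {rha_reimbursement(example):,.2f}")
```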
Every year the municipalities and the other public administrations participating in the Regional Plan submit a technical and financial report of the activities implemented in their respective territories to the RHA in order to obtain the financial contribution. This reporting is the main data source of the study: an English translation of the reporting form filled in by municipalities in 2009-2011 is displayed in the Appendix A (Table A1). The reports display the total expenditure supported by municipalities for the implementation of the activities listed in Table 1, with the exception of entomological surveillance. This activity is fully financed by the RHA on the basis of the mosquito ovitrap checks provided by municipalities and recorded in a specific accounting manner.
In the reporting form, only some basic technical information is requested about the ordinary anti-larval disinfestation activities in public areas (i.e., the number of road drains treated and the number of treatments performed in the year) and in private areas (i.e., the number of private courtyards involved and the number of treatments performed). However, since the lack of such data does not entail rejection of the report by the RHA, many municipalities provide only approximate information or no figures at all. The financing of the information activity in primary schools ((f) in Table 1) requires a separate description of the actions performed, as does that of the "other activities" undertaken by municipalities ((g) in Table 1), which are co-financed by the RHA after a case-by-case evaluation.
All the expenditure reported by municipalities should be accompanied by invoices and other documents attesting to the payment of the declared amounts, but no structured technical data are available from the reports about the consumption of materials and the work employed in the different activities. The dossiers submitted to the RHA are checked before the calculation of the reimbursements to be paid to municipalities according to the criteria indicated in Table 1. Furthermore, all the payments made by municipalities are subject to the administrative and financial audits laid down by the law, and the same applies to the RHA reimbursements.
This study excludes the overhead costs supported by public administrations for the implementation of the activities of the Regional Plan and the costs of all other activities against tiger mosquitoes undertaken by public or private organizations and individual citizens that do not receive the RHA financial support. The information available from municipality reporting and from the RHA was analyzed through simple descriptive summary statistics. Figures relative to the population were used to compare the expenditure for the planned activities at different territorial levels: municipalities, LHAs, and the whole ER region. Population data were taken from the online database of the ER regional Statistic Services [47]. Unavailability of reliable data covering the entire region prevented the use of indicators of expenditure related to the extension of the urban areas subject to larvicide treatments, as well as to the road drains treated in public areas and in private courtyards (most municipalities did not have a registry of the road drains set in public roads, and data from the financial reports were often imprecise or incomplete).
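As a concrete illustration of this kind of population-scaled descriptive comparison, the sketch below shows how a per-inhabitant expenditure indicator could be computed at municipality, LHA, and regional level. It is a minimal example with invented data and column names, not the study's actual workbook.

```python
# Minimal illustrative sketch; all names and figures are placeholders,
# not data from the study.
import pandas as pd

reports = pd.DataFrame({
    "municipality": ["A", "B", "C", "D"],
    "lha":          ["Ferrara", "Ferrara", "Rimini", "Rimini"],
    "population":   [12_000, 55_000, 30_000, 140_000],
    "plan_spend":   [21_000, 80_000, 65_000, 250_000],   # EUR, incl. VAT
})

# Municipality-level indicator: expenditure per inhabitant
reports["spend_per_inhab"] = reports["plan_spend"] / reports["population"]

# Aggregation at LHA level (total spend / total population), and regional mean
by_lha = (reports.groupby("lha")[["plan_spend", "population"]].sum()
                 .assign(spend_per_inhab=lambda d: d["plan_spend"] / d["population"]))
regional = reports["plan_spend"].sum() / reports["population"].sum()

# Simple descriptive summary of the municipality-level indicator
print(reports["spend_per_inhab"].describe())
print(by_lha)
print(f"Regional mean: {regional:.2f} EUR per inhabitant")
```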
All the monetary values presented in the study correspond to the nominal values of the expenditures reported in the examined years, including the value-added tax (VAT), which is a cost for the public administrations involved (in the analyzed period, the VAT rate for the type of services taken into consideration was 20%). The findings of the analysis were discussed with the Entomological Working Group of the RHA and with the regional coordination committee for the implementation of the plan formed by representatives of the RHA and LHAs, as well as of the province and municipality administrations involved.
The Implementation of the Plan over the ER Region's Territory
Between 273 and 291 municipalities out of the 341-348 municipalities existing in ER took part in the Regional Plan for tiger mosquito control from 2008 to 2011 (see details in the Appendix A, Table A2). The involved population included between 4.06 and 4.28 million inhabitants, which is 95%-96.5% of the regional total ( Table 2).
As mentioned, the participation of municipalities in the activities supported by the Regional Plan is not compulsory. The municipal administrations follow the RHA guidelines and coordination, but have significant autonomy in the organization of the activities they decide to implement. For these reasons, not all the municipalities participating in the plan operate all the activities listed in Table 1, nor do they operate them in the same way. Moreover, the range of activities executed by one municipality may change from one year to another, in addition to changes in the population involved (see Table 2).

Notes to Table 2: * The figures include all the municipalities receiving financial contribution from the Regional Health Authority for the implementation of the corresponding activity of the Regional Plan; ** Activity not included in the Regional plan in the year 2008; the available information on the number of municipalities and the respective population involved in this activity in the year 2009 is not complete.
Expenditure Supported for the Plan's Activities by Municipalities and the Regional Health Service
The expenditure supported by public administrations for the implementation of the different activities of the Regional Plan between 2008 and 2011 is reported in Table 3. The total expenditure decreased from €7.60 million to €5.28 million, with most of the reduction occurring between 2008 and 2009 (-22.6%) and a softer decline in the following years (-11% between 2009 and 2011). Indeed, after the 2007 CHIKV outbreak, many municipalities performed the activities of the plan very intensively, fearing new epidemics and consequent impacts on public health and economic activities, principally tourism. This resulted in higher plan expenses in 2008 than in the later years. The higher expenses in 2008 could also be attributed to the fact that, in some cases, province administrations and LHAs were also heavily involved in the direct implementation of the plan activities in that year. For these reasons, the year 2008 and the relatively homogenous period 2009-2011 have been treated separately in Table 3 and further below.

Table 3. Expenditure for the activities of the ER Regional Plan for tiger mosquito control and share between municipalities and the Regional Health Authority (RHA) (2008-2011).
* For the expenditures related to this activity, it has been assumed that in the years 2010 and 2011 they corresponded to the contribution paid by the Regional Health Authority, and in the years 2008 and 2009 to twice the contribution paid (see note * in Table 1);
** In some cases, this item may include expenditure supported by province administrations and local health authorities (LHAs);
*** Activity not included in the Regional plan in 2008;
**** Only for the year 2008, this item includes expenditure for census and cleaning of road drains and adulticide treatments in sensible sites (public parks, school gardens, cemeteries, etc.).
The anti-larval treatments of road drains in public areas ((b) in Table 3) are the main AW-IPM activity of the plan, aimed at preventing the development of infestation hotspots in the areas that are not under the responsibility of individual citizens or private organizations. This is the only activity for which the expenditure increased after 2009, with this expenditure eventually reaching 62.4% of the total expenditure for the plan in 2011. The expenditure for the related quality controls introduced in 2009 ((d) in Table 3) covered about 6% of the plan's expenditure over the 2009-2011 period, with an eventual decline in line with the general trend.
The door-to-door anti-larval treatments ((c) in Table 3) are intended to avoid the formation of infestation hotspots in private areas caused by negligence or ignorance of owners. This activity consists of systematic interventions in private courtyards by operators, who treat water drains, identify potential infestation hotspots, and provide advice to owners. All of this creates a high involvement of the population, but is also costly. The municipalities that implemented door-to-door anti-larval treatments were already a minority in 2008, and significantly diminished in the following years (see Table 2). The corresponding expenditure of approximately €1.13 million in 2008 almost halved in 2009, before being diminished by about one third between 2009 and 2011.
The expenditure for emergency actions to isolate viremic cases ((h) in Table 3) gradually declined to become close to zero in 2011, following a progressive decrease of alerts for these situations. The entomological surveillance activity ((a) in Table 3) monitors, through a network of about 2700 ovitraps distributed over the ER territory, the presence of mosquitoes with respect to risk thresholds indicating the possibility of disease transmission if viremic cases are introduced [48,49]. The corresponding expenditure was reduced by one third between 2009 and 2011, mostly as a result of a new protocol allowing a diminution of the ovitraps' checking frequency from weekly to biweekly over the season of mosquito reproduction [46,50].
The expenses for information to citizens ((e) in Table 3), which mainly support free distribution of anti-larval kits, information materials, site inspections in private areas, and counselling, covered around 9-10% of the Regional Plan's total expenditure over the 2009-2011 period, with a final reduction of about one quarter in the amount. Information activities in primary schools ((f) in Table 3) were introduced in 2009 to involve a selected number of school classes in the territory of each LHA every year. Beyond the education of pupils, the initiative also targets the responsiveness of their families. From 2009 to 2011, these activities accounted for around 3% of the plan's total expenditure. The other activities undertaken by municipalities ((g) in Table 3) embrace a variety of actions subject to a case-by-case evaluation by the RHA for financial contribution. In many cases, they are specific initiatives to improve citizens' awareness or to develop some technical and organizational aspects of mosquito disinfestation, such as the identification, registration, and geo-referencing of the road drains in public areas. This information is relevant for the identification of potential infestation hotspots and for the management of the whole disinfestation activity, and was lacking in almost all the ER municipalities when the plan started. Under this item, the RHA also agreed only in 2008 to financially contribute to municipality expenditure for aerosol treatments against adult tiger mosquitoes in public areas for reducing insect nuisance and outside the specific emergency protocol for suspected transmissible viruses. For this reason, the item recorded the highest expenditure decrease between 2008 and 2009 (a reduction of €721 thousand). Over the 2009-2011 period, the diminution continued, but in line with the general trend and the other activities. Finally, this item was found to have contributed to about 6% of the Regional Plan's total expenditure.
The Coordination Strategy of the Regional Health Service, Financial Aspects

Figure 1 provides an overview of the expenditure for the Regional Plan recorded by the municipalities and the RHA, respectively. After the first year of implementation, both the municipalities and the RHA recorded an important and comparatively similar expenditure decrease (around -22%). In the following years (2010 and 2011), while the municipality expenditure was quite steady (-2.6%), the co-financing from the RHA continued to diminish significantly (-31.7%) following progressive cuts in the budget fielded for the plan by the regional government. Consequently, the municipality share in the total expenditure for the plan grew from 73.1% to 79.2%.

The differentiation of the RHA's financial contributions among the various activities of the Regional Plan (see Tables 1 and 3, and Figure 1) denotes a strategy aimed at stimulating municipalities to concentrate resources in anti-larval disinfestation of public areas and in information to citizens, while the provisions from the regional budget were devoted to prioritizing emergency interventions, entomological surveillance, information in primary schools, and quality controls on larvicide treatments. This strategy and the decline of emergency interventions allowed the RHA to significantly reduce the burden of the plan on its own budget, while an intensive disinfestation activity was maintained by municipalities, also in terms of financial effort. Between 2009 and 2011, the RHA financial contribution to larvicide treatments of road drains in public areas declined by 37.1%, while the expenditure of municipalities for this activity increased by 11.2%, attaining around 70% of the total municipality expenditure devoted to the plan.
The Territorial Variability of the Plan's Expenditure: Ecological, Economic, and Structural Factors
The examined data indicate that the expenditure for the plan implementation may vary considerably from one municipality to another and from one year to another. The boxplots in Figure 2, which refer to the expenditure per inhabitant in the ER municipalities, show that there has been some decrease in the extreme values of this indicator accompanying the progressive decline of the plan spending over the analyzed years. However, the corresponding indicators of dispersion did not significantly diminish, despite starting from relatively important levels.

The spatial variability of the plan expenditure may depend on various factors. One is the changing intensity and nuisance of the tiger mosquito infestation, which is related to changes in local ecological and climatic conditions. Other factors are related to anthropic variables, such as the characteristics of the urban areas where the plan activities are implemented and the specific organization set up at the municipality level, including how many activities one municipality decides to implement and the methods these are implemented with.

The most suitable zones for mosquito infestation in the region are the coastal plain along the Adriatic Sea and the lowlands bordering the Po river, especially in the delta, which was an area of endemic malaria until the first half of the 20th Century [51]. The expenditure per inhabitant recorded for the tiger mosquito control plan was significantly higher in the territory of the LHAs that cover such areas. These are namely the LHAs of Ferrara and Ravenna, located in the delta of the Po river, as well as the LHAs of Cesena and Rimini, which include the region's south-eastern coast, inner hills, and mountains (see Table 4).

The higher propensity of the municipalities located along the Adriatic coast to pay for the implementation of the tiger mosquito control plan may also be motivated by economic reasons. The importance of tourism in these areas seems to stimulate local administrations to strive to reduce both the level of nuisance caused by mosquitoes and the risks of disease outbreaks, which could heavily impact the presence of tourists in local seaside resorts during summertime. Table 5 shows that the total expenditure per inhabitant for the implementation of the plan's activities was, in most cases, significantly higher in the 12 resort municipalities located along the Adriatic coast than in the respective LHAs and in the whole region. In these municipalities, a relevant implementation of anti-larval treatments in private areas contributes to the higher level of expenditure and attests to the determination of local administrations to prevent Ae. albopictus proliferation. The nine municipalities of the ER region characterized by important spa activities did not show similar correlations (see Table 5), despite the relevance of tourism for local economies. In fact, spa municipalities are mostly located in the foothills and in hilly or mountain areas, which are comparatively less favorable for mosquito infestations than the coastal lowlands.

Notes to Table 5: Asterisks indicate the number of years between 2009 and 2011 in which door-to-door anti-larval treatments were implemented in the municipality: * door-to-door anti-larval treatments implemented for one year; ** door-to-door anti-larval treatments implemented for two years; *** door-to-door anti-larval treatments implemented for three years.
Expenditure for Larvicide Treatments in Public Areas, the Pillar Activity of the Plan
The anti-larval treatments of road drains in public areas can be considered the pillar activity of the Regional Plan for the expected efficacy in the containment of the infestation with respect to the number of municipalities and population involved (see Table 2), and in terms of expenditure (see Table 3 and Figure 1). Thus, the territorial variability observed in the spending for the Regional Plan is strongly related to the implementation of this activity.
Road drain treatments are operated at regular intervals during the active periods of the tiger mosquito in the public spaces managed by municipalities within the urbanized areas, such as roads, squares, parks, carparks, and cemeteries. The RHA technical guidelines propose the usage of active principles and dosages allowing intervals of at least four weeks between two treatments in order to contain costs, with a minimum of four interventions in a year [40,41]. More frequent interventions may be needed due to seasonal weather trends, and the RHA experts suggest a standard of five treatments in a year as a technically correct benchmark. However, final decisions are taken at the municipality level, with the decision-making processes possibly changing significantly from one municipality to another and not necessarily following the RHA's technical advice. This is due to other influential factors possibly intervening to induce choices for either a reduction or increase in the number of treatments, such as municipality budget constraints, advice from pest control companies, pressures from citizens, and economic operators disturbed by mosquitoes or fearing disease outbreaks, in addition to the use of active principles and dosages different from those suggested by the RHA guidelines. Consequently, in the analyzed period, there was significant variability in the number of treatments performed, and in a large majority of cases, this number was higher than the benchmark of five treatments per year indicated by the RHA experts (see Figure A1 in the Appendix A).
The calculation of the expenditure per inhabitant recorded in each municipality to operate one larvicide treatment of the road drains allowed comparisons among municipalities by isolating the variability related to the extension of the urban areas treated and to the number of treatments performed over the year. For the former, municipality population was assumed as a proxy. However, as shown in Figure 3, the level of dispersion was also important for this indicator.

The influence of the urban area size on the expenditure per inhabitant of the anti-larval treatments was tested under the hypothesis that economy-of-scale effects could be obtained by operating the service on a wider urban area and explain expenditure variability. However, the evidence of such a correlation was not found (see Table A3 in the Appendix A). In fact, the largest towns showed a tendency to attain levels of expenditure per inhabitant that were significantly higher than the medium and the small centers. This may depend on a variety of factors needing specific investigations that could not be afforded within the study.

Table 6 depicts the expenditure per inhabitant for one anti-larval treatment in the territory of the LHAs and of the whole ER region. As in the case of the total plan expenditure per inhabitant shown in Table 4, it can be observed that in the LHAs located along the Adriatic coast, the expenditure values were significantly above the regional mean, with the only exception being the LHA of Cesena. Among the seven LHAs located inland, only Reggio Emilia had expenditure values significantly above the regional mean, while the extreme low values of the Parma LHA may be explained by the huge financial crisis that affected the capital municipality of this territory during the examined period.

The regression shown in Figure 4 was based on the same set of data as Figure 3, but only the values within a standard deviation from the mean value were selected; they formed a group of 176 municipalities which represented 76.3% of the population of the ER municipalities implementing anti-larval treatments in public areas and 71.1% of the total expenditure for the activity. For example, it was possible to compare the expenditure during 2011 for one anti-larval treatment in the public areas of 249 municipalities with a benchmark set at 120% of the expected value resulting from the regression equation shown in Figure 4. The benchmark was overshot by 70 municipalities, and the overall exceeding expenditure was calculated as €334,716, which considered the total number of treatments that they actually performed in the year. This amount corresponded to 10.2% of the total expenditure for this activity of the whole group of 249 municipalities examined.
Aedes albopictus Invasion and the Role of Public Health Systems
This study analyzed the cost of an AW-IPM plan for tiger mosquito control in a region with environmental and climatic characteristics that can be found in many other areas of Europe, and where the health risks due to the presence of this vector became clear with the 2007 CHIKV outbreak. The costs to society of this bio-invasion are not limited to expenditure for pest management and health care, but should also include the reduced use of recreational goods caused by mosquito nuisance, such as public and private parks and gardens. This damages users and/or owners, and may have heavy consequences on the economic activities that depend on their use, such as tourism [52].
If the eradication of Ae. albopictus, or the reduction of its population density below an epidemic risk threshold, a nuisance tolerance threshold, or a combination of both, is considered a valuable benefit, the complex inter-causal relations that should influence public policies for mosquito control need to be explored further. In the theory of public goods, the control of the Ae. albopictus invasion can be interpreted as a "weakest-link" problem [9,53]. This means that the effort performed by each individual actor to contain the infestation affects the result obtained by all the other actors involved, with the least effective actor (i.e., the weakest link) determining the overall level of protection for the whole community. In such a situation, individual free-rider behaviors may be particularly harmful to society. This gives strategic relevance to public health systems and enhances the role of a multi-level territorial coordination for the AW-IPM Plans.
Social Cost of Ae. albopictus Invasion
An assessment of the total social cost of the Ae. albopictus invasion in ER would require information, still unavailable, on many types of costs: private expenses supported by households (e.g., for mosquito nets, repellents, and anti-larval treatments in private courtyards) [54], the direct and indirect damages to economic activities, the reduced utilization of parks and gardens [55,56], and the health care costs for viremic cases, in addition to the related productivity and utility losses [57][58][59][60]. This study focused on the analysis of the expenditure supported by public administrations for an area-wide plan aimed at limiting tiger mosquito proliferation over a wide territory. Furthermore, some mosquito control actions performed by ER municipalities were not considered, including adulticide treatments in sensible sites such as school gardens, sport centers, and cemeteries, or on the occasion of outdoor events that concentrate large numbers of people in squares or public parks. These were not reported to the RHA in the 2009-2011 period, since they were not co-financed by the Regional Plan. Moreover, there was no information on the extent of the administrative costs supported for the implementation of the plan by municipalities and other public entities involved.
Economic Evaluation of the Plan's Effectiveness
Despite the presence of a large-scale entomological surveillance system, there is a lack of quantitative data comparing the effects of the plan activities (e.g., reduction of mosquitoes' density and nuisance or epidemic risk) with control areas where the same activities are not or were not performed. This lack of data prevented the possibility of evaluating the economic efficacy of the Regional Plan through cost-effectiveness or cost-benefit analysis within the framework of this study. In fact, there are no directly comparable data about the levels of tiger mosquito infestation in ER before 2008 (when the plan became operative), or about the levels of infestation in nearby regions similar for ecological and climatic characteristics where activities against tiger mosquito proliferation are not performed.
Improvements in the capacity to predict the seasonal evolution of the Ae. albopictus population in the region in relation to changes in influencing variables (e.g., weather conditions) could allow a cost-effectiveness analysis of the plan activities by comparing the levels of infestation resulting from the ovitrap monitoring with hypothetical no-activity scenarios drawn from predictive models. However, such a capacity is not easily achievable even for the well-developed area-wide monitoring system set up in the ER region, given the high influence of numerous micro-ecological anthropic variables on the density and spread of tiger mosquitoes in urban ecosystems, in addition to the bias that disinfestation activities introduce into the use of ovitrap data for those purposes [46,48]. An alternative method, which was unaffordable and thus not performed in this study, could be to evaluate the effects perceived by the population of the territories involved in the plan through surveys and contingent valuations. One of the very few, and possibly the only, cost-effectiveness and cost-benefit analyses of an AW-IPM plan for Ae. albopictus control followed this approach in an analysis in two counties of New Jersey. However, the dimension of the territories involved and their population (8.2 thousand inhabitants in the AW-IPM area and 13.1 thousand in the control area) were considerably smaller than the context of this study [55].
Recommendations for Plan Improvements
The total expenditure of the ER Regional Plan was about €7.6 million in 2008 (the year following the CHIKV outbreak), and between €5.9 million and €5.3 million in 2009-2011. This reduction was in part due to diminution of emergency interventions, which followed the decline of imported CHIKV and DENV cases. The rest of the lowered expenditure was mainly related to the reduction in the activities for citizens' involvement, in addition to cost-saving changes in the entomological surveillance practices.
There was high variability in expenditure per inhabitant supported by municipalities for the plan implementation. This appeared to be mostly related to the large autonomy of municipality administrations for the modalities of implementation. Great differences were found in the number of activities implemented by municipalities and in the intensity of implementation of some activities. For example, the higher expenditure in coastal towns indicated the willingness to safeguard the tourism appeal for outdoor activities by reducing mosquito nuisance and the risk of infectious disease outbreaks due to their potential harm to health and to the popularity of the regional seaside resorts.
There was also high variability in municipality expenditure per inhabitant for a standard operation, such as the expenditure per inhabitant of one treatment of road drains in public areas. This could depend either on structural or casual factors that would need deeper investigations at the local level (e.g., differences either in the extension of the urban area treated or in the number of road drains relative to population). However, the cost of pesticides and other materials and tools (portable sprayers, personal protective equipment, bicycles, and other means of transport, etc.) commonly used by operators for road drains disinfestation is almost negligible with respect to the cost of the work employed and companies' overheads. Therefore, most of the expenditure variability is probably related to organizational issues and to the capacity of municipalities to perform an effective control of prices practiced by contractors.
Some main topics about these aspects and possible improvements in the management of plan activities emerged in discussions with the experts and the representatives of municipalities participating in the regional coordination committee.
• In general, municipalities entrust the different functions related to operation, coordination, and technical control of the plan activities to external operators. It was found that one contractor undertakes all those functions in many cases. Subcontracting is also used, with obvious consequences on the ability of the municipality administrations to practice effective cost control and design cost-saving strategies;
• Related to the above, it was also found that many municipalities award the services needed for the plan implementation through procedures of direct procurement. As an alternative, open tenders could offer more opportunities to reduce costs and improve efficacy;
• Municipalities often lack precise information on the road drains set in public areas. These data are necessary to optimize the management of anti-larval treatments. In the years following the period examined by this study, the RHA has been committed to helping municipalities collect such data by ensuring partial reimbursement for identifying and geo-referencing the existing manholes and road drains. In addition, the definitions utilized for identifying the urban areas should be standardized at the regional level, since the parameters used by municipalities were too discretional for technical and economic comparisons. A standard definition would contribute to improving the coordination of plan activities at the regional level;
• Administrative costs supported by the municipalities for the plan implementation should be taken into consideration to allow a more complete economic evaluation and a correct assessment of possible benefits coming from merging the activities of neighboring municipalities;
• Mechanisms rewarding good practices could accompany regional co-financing of municipality activities. For example, the indication of standard costs for anti-larval activities could be used to identify an acceptable range of municipality expenditure for the payment of RHA contributions. Moreover, since many municipalities seemed to operate an excessive number of treatments for road drains, the rewarding mechanism should also stimulate the fulfilment of RHA instructions and technical advice regarding the number and timing of interventions.
Further significant progress in the containment of Ae. albopictus infestation in ER may depend on improvements in the control activities undertaken by individual citizens in private areas, where infestation hotspots can develop with scarce possibility of rapid intervention by public authorities. Therefore, it could be useful to intensify the efforts to raise public awareness through appropriate communication strategies.
Conclusions
The ER Plan for Asian tiger mosquito control can be considered an effective initiative during the examined period. The plan's flexibility was demonstrated by the large participation of municipalities and the size of the population involved, as well as by the preservation of an intensive anti-larval activity despite a relevant decline in the total expenditure required of public administrations. This flexibility occurred with respect to the changing conditions, environmental but also in terms of budget, organizational capacity, and political will, of ER municipalities. When not caused by organizational inefficiencies needing correction, variability of expenditure is also an aspect of the heterogeneity of specific situations and of the decision-making autonomy devolved to municipalities.
With due adaptations, this plan could be proposed in other European regions already affected by Ae. albopictus infestations or subject to a highly probable invasion in the near future. Risk factors such as globalization and the consequent worldwide movement of people and goods will not diminish, and they are exacerbated by the growing concentration of population in urban areas, which facilitates the transmission of vectored viruses. Climate change expands the areas potentially suitable for the invasion of this insect, which is capable of adapting to latitudes and altitudes higher than its original habitat, and of other similar and potentially even more dangerous vectors (e.g., Ae. aegypti) [61,62]. On this basis, it is likely that area-wide control activities will have to be undertaken in other European regions, and, according to the principle that integrated control has higher effectiveness over larger areas, a coordination of these initiatives at the European level may be needed.
Acknowledgments: Giovannini (CAA), Luciano Donati (CAA) and Carmela Matrangolo (AUSL-Romagna) for their contributions in collecting information and data. We are also very grateful to all the other members of the SSR-ER Entomological Working Group and to the officers of the Emilia-Romagna municipalities that kindly collaborated to the study. The precious support of the Antwerp Study Centre for Infectious Diseases (ASCID) at the University of Antwerp is also acknowledged.

Appendix A

Table A1. Form to be filled in by municipality administrations to report the annual expenditure supported for the implementation of the ER Regional Plan and to obtain co-financing by the RHA (2009-2011 period) *.
Table A2 (notes): * The figures include all the municipalities that received financial contribution from the Regional Health Authority for the implementation of at least one activity of the Regional Plan. ** Starting from year 2010, seven new municipalities, which were formerly part of the bordering Marche Region, have been included in the territory of the LHA of Rimini.

Table A3 (note): The table was elaborated with the same set of data used for Figure 3.
Areca catechu L. and Anredera cordifolia (Ten) Steenis supplementation reduces faecal parasites and improves caecal histopathology in laying hens
ABSTRACT Some studies have shown that the betel nut Areca catechu L. and "binahong" leaves Anredera cordifolia (Ten) Steenis have anti-parasite and wound healing properties. This study evaluated the effect of A. catechu nut and A. cordifolia leaves powder supplementation on faecal parasite number and type, histopathology of the intestine, caecum, and associated organs, some serum biochemistry, and egg production of laying hens. Twenty-four 54-week-old ISA Brown laying hens from local layer farmers were assigned randomly into 4 treatment groups: 1) without supplementation (T0), 2) supplemented with 0.25% (T0.25%), 3) 0.5% (T0.5%), and 4) 1.0% (T1.0%). We carried out the supplementation for 18 days by administering A. catechu nut powder for 3-days and, subsequently, A. cordifolia leaves powder for another 3-days, repeated for 3-rounds, to control the parasite larvae. Faecal parasite count and type were enumerated at the beginning and end of treatment. Egg production was recorded daily during the 18-day experiment. Blood was sampled at the end of the experiment to determine serum albumin, globulin, and transaminases. Intestinal tract, liver, and spleen samples were collected at the end of the study for histopathological examination. Faecal Ascaridia galli in control hens increased by 87.5% after 18 days of the experiment, while A. catechu nut and A. cordifolia leaves powder supplementation prevented such an increase. Supplemented hens showed a greater reduction of Raillietina cesticillus than control birds. Supplementation improved intestinal and other tissue histopathology, especially in the caecum (free of erosion), and improved serum albumin and transaminases without affecting egg production.
Introduction
The challenges of global warming, manifested by sudden changes between rainy and hot-humid weather in tropical climates, combined with antibiotic resistance are amplifying environmental stress and hardship for local producers. Globally, nematode infestation in laying hens is widespread [1]. Ascaridia galli infection is the most persistent infestation in layers [2][3][4][5][6][7][8][9]. Small-scale independent farmers in a tropical country such as Indonesia raise commercial layers to produce commercial eggs as the primary income source. Layer farmers raise the most common hybrid chicken, the ISA Brown, to lay eggs for up to two years or as long as egg production is economically feasible. Ascaridia galli is not the only endoparasite harboured by commercial hens; others, such as coccidia, are also present [5][6][7][8][9]. To prevent microbial challenge, farmers use readily available feed.
The anti-helminthic properties of the Areca nut have been demonstrated in numerous in vitro and in vivo studies. Ethanol extract of Areca nut powder added at 40% showed an effective reduction in the motility of the liver fluke (Fasciola spp.) in the Petri dish, resulting in shrinkage and deformation of the body shape and shrunken edges of the tegument [29]. At 1 g/kg body weight (approximately 1.5%), Areca nut powder effectively expels roundworms, tapeworms, and their eggs from indigenous local chickens. However, at dosages higher than 1.5%, the chickens were not in good condition [30]. Areca nut extract possesses anticoccidial activity by reducing faecal oocyst counts, mucosal damage, and caecal lesions in broiler chicken experimentally infected with Eimeria tenella [31]. The mechanism appeared to be mediated by nitric oxide production, which was up-regulated in the inflammatory stage (3-days post-infection) and down-regulated 6-days post-infection. Ethanol extract and the ethyl acetate fraction of betel nut can efficiently increase the number of goblet cells in the colon and caecum of mice orally infected with infective eggs of the parasitic worm T. muris [32]. Numerous studies showed a protective effect of Areca nut ethanol extract on wound (excision and burn) healing, which was likely due to its phenolic antioxidant activity [33][34][35][36][37].
Anredera cordifolia (Ten) Steenis ("binahong") leaves contain flavonoids (vitexin, isovitexin, morin, and myricetin) and the sapogenin ursolic acid [38]. They have antibacterial and antioxidant activities [38][39][40][41]. Some studies demonstrated that A. cordifolia leaves have healing qualities for some wounds (cut, burn, post-partum perineum) [42][43][44][45][46][47][48][49][50]. A topical application of ethanol extract of A. cordifolia leaves twice a day on a 2 cm long excision wound in guinea pigs increased the length of excision closure, compared to a control treated with 10% povidone-iodine [45]. A granulation network and re-epithelialization of the open physical wound appeared to be involved in healing [50]. Thus, the wound healing properties could be beneficial for healing intestinal and associated organ lesions due to endoparasite infestation. A study conducted by our group recorded an anti-microbial/endoparasite activity of A. cordifolia leaves powder in Saanen goats with mastitis, showing that it could reduce total faecal oocyst counts [51]. However, there was no detail on oocyst types and their possible mechanism.
As in-feed drugs have been banned or increasingly restricted in animal husbandry practice, and layer hens are an important source of quality protein (eggs) with a long production time, the potential of A. catechu and A. cordifolia to prevent natural endo-parasitic infestation warrants study. In a recent work, we found that administration of very low dosages, i.e. 0.025%-0.1%, of A. catechu nut powder and A. cordifolia leaves powder alternately in 42-week-old layer hens reduced liver transaminases, which could indicate cell regeneration properties of both additives [44]. However, the low dosage was insufficient to reduce faecal A. galli and other endo-parasite numbers compared to a control without supplementation [5]. Therefore, we further studied the supplementation of A. catechu nut and A. cordifolia leaves powder alternately every 3-days in 54-week-old layer hens at a higher dosage, i.e. ten times that of our first study. We investigated the faecal endo-parasite number, some serum biochemistry, and egg production, but importantly also the histopathology of the intestinal tract, liver, and spleen. The histopathological examination of the affected tissues may give a better understanding of the anti-endoparasite properties of both phytogenic additives.
Ethical statement
Our study was conducted on ISA Brown laying hens obtained from an existing small-scale layer farmer, and the hens selected for supplementation were raised similarly in wire-battery cages but separately. During the experiment, we applied animal ethics principles, providing the hens with free access to drinking water and feed according to the standard for the hens' age. Hens were sacrificed at the end of supplementation by cutting the jugular vein according to standard animal welfare practice. The Ethical Committee approved the protocols of this in vivo animal study (approval number: 133/EA/KEPK-FKM/2022).
Collection of plant material and preparation of powder
Fresh dry A. catechu nuts were obtained from a local market, and the pericarp was discarded. Fresh A. cordifolia leaves were collected from local farmers and air-dried. The dried plant materials were ground to a powder and filtered through a 25-50 mesh screen to get a homogenous powder. The dried plant powder was kept in a refrigerated container for further use. A detailed description of the preparation process has been provided elsewhere [5,44].
Supplementation of A. catechu nut and A. cordifolia leaves powder
We obtained the experimental hens from a small-scale (2500 hens) independent layer farmer in Semarang, Central Java, Indonesia. The farmer raised ISA Brown hens in a typical V-type three-tier battery bamboo housing. Each battery cage was 40 cm x 40 cm x 30 cm in size and housed one hen. The three-tier battery housing was located in an open space (outdoor) with a roof to protect it from sun and rain. The experimental hens (24) were randomly selected based on age (54-week-old) and body weight (average body weight of 1.88 ± 0.02 kg). We raised the selected hens in standard wire-battery housing separately. The 24 hens were adapted for two weeks, after which they were randomly assigned into four treatment groups: control with no supplementation (T0); supplemented with 0.25% A. catechu nut and 0.25% A. cordifolia leaves powder (T0.25%); supplemented with 0.5% A. catechu nut and 0.5% A. cordifolia leaves powder (T0.5%); supplemented with 1.0% A. catechu nut and 1.0% A. cordifolia leaves powder (T1.0%). Each group contained 6 hens; hence, there were 24 hens. Each hen was given 130 g of commercial layer diet (mash) from the farmer (2900 kcal/kg ME, 17-19% crude protein, 3-11% crude lipid, 5-6% crude fibre, 3.5% calcium, and 0.45% phosphorus, with no antibiotic or coccidiostat) per day and free access to drinking water. Supplementation was carried out by mixing the powder into the diet so that each supplemented group received the respective dosage. We administered A. catechu nut powder for 3-days, subsequently A. cordifolia leaves powder for 3-days, and this alternate administration was carried out for a total of 18 days. Figure 1 depicts a summary of the supplementation design.
We designed a 3-day alternate supplementation schedule based on the life cycle of the endo-parasites and to control parasite larvae. We assumed that the first round of 3-days of betel nut (with anti-helminth potential) would kill the mature larvae. The next 3-days of A. cordifolia could help the infected tissue recover through its wound healing properties, while allowing the larval stage of endoparasites to mature. The second round of 3-days of betel nut supplementation would kill the mature larvae (matured from the larvae of the first round), and then 3-days of A. cordifolia would recover the tissue again. The third or final round of supplementation would kill the residual mature larvae from the previous round (round 2), and the following 3-days of A. cordifolia supplementation would provide the subsequent tissue recovery. To the best of our knowledge, no other publication has utilized a similar approach to ours in supplementing layer hens with A. catechu and A. cordifolia, except our previous study [44].
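For clarity, the alternating schedule described above can be written out day by day; the short sketch below simply enumerates which powder is given on each of the 18 days (labels and day numbering are ours, for illustration only).

```python
# Minimal sketch of the 18-day design: three rounds of 3 days A. catechu
# followed by 3 days A. cordifolia; purely illustrative.
schedule = []
for _ in range(3):                                   # three rounds
    schedule += ["A. catechu nut powder"] * 3        # days of betel nut
    schedule += ["A. cordifolia leaves powder"] * 3  # days of binahong leaves

for day, supplement in enumerate(schedule, start=1):
    print(f"Day {day:2d}: {supplement}")
```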
Collection of faecal samples, parasites identification, and enumeration
One day before phytogenic powder administration, we collected fresh faeces from each bird of all groups (a total of 24 birds). Fresh faeces from each bird were immediately preserved in 10% formalin for parasite diagnosis [52]. The microscope was equipped with a camera connected to a computer screen and was moved horizontally and vertically to scan the whole sample to identify and enumerate parasites. Parasites were grouped into types. The enumeration was carried out by experienced staff at the Laboratory of Animal Health, Semarang City, Regional Office of Animal Husbandry and Health. Fresh faeces were again collected at the end of the experiment from each bird of all groups (23 birds) and preserved in 10% formalin for endoparasite determination.
An increase or decrease in faecal parasite number for each type of parasite was calculated as follows:
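A standard percentage-change form, consistent with the values reported in the Results (e.g., the 87.5% increase in control hens and the 12.5% reduction in the T0.25% group), is assumed here:

\[
\Delta N\,(\%) = \frac{N_{\text{after}} - N_{\text{before}}}{N_{\text{before}}} \times 100
\]

where N_before and N_after are the counts of a given parasite type in the faeces before and after the 18-day supplementation; positive values indicate an increase and negative values a reduction.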
Determination of serum albumin, globulin, AST, and ALT
At the end of the experiment, we collected blood samples from each bird of all groups (23 birds) through the brachial vein. Serum was immediately separated by centrifugation and stored frozen until analysis. The activities of alanine transaminase (ALT) and aspartate transaminase (AST) (previously known as serum glutamic pyruvic transaminase (GPT) and serum glutamic oxaloacetic transaminase (GOT)) were measured by the kinetic method according to the International Federation of Clinical Chemistry [10,11,13].
Histopathological examination
After blood sampling, birds were sacrificed by cutting the jugular vein according to Standard Animal Welfare. Duodenum, ileum, caecum, liver, kidney, and spleen were removed and preserved in buffered formalin. Preserved samples were embedded in paraffin, cut, and stained with Haematoxylin Eosin (HE) [12]. The samples were analysed by a histopathologist.
Egg production
We recorded each bird's egg production and egg weight daily during the 18 days of supplementation.
Statistical analysis
Excel and SPSS were used to conduct statistical analyses. Parasite numbers for each treatment group were compared to the control group (T0) before and after supplementation using the percentage of reduction, increase, or no change. A descriptive histopathological investigation was performed. Analyses of variance (ANOVA) were performed on serum albumin, globulin, AST, and ALT data. Duncan's multiple range tests were carried out when the means among groups were significantly different. Paired t-tests were performed on serum albumin, globulin, and transaminases before and after supplementation. Significance was set at p ≤ 0.05.

Results

Table 1 shows the frequency of faecal parasite types and numbers before and after the 18-day supplementation of A. catechu nut and A. cordifolia leaves powder. Before supplementation, the helminths A. galli and R. cesticillus were observed in all hens, with varying numbers in each group. This indicates that the hens obtained from local farmers and raised in outdoor battery housing carried varying endo-parasitic infestations. After the 18-day supplementation, the control birds without supplementation had an increased number of A. galli (an 87.5% increase). Interestingly, with 0.25% supplementation, the A. galli number was reduced by 12.5%, whereas with higher supplementation there was no change in the number of A. galli. The reduction in the T0.25% group was slight, leaving 7 parasites (from 8), which was still higher than in T0.5% and T1.0%. At 1.0% supplementation, the absence of A. galli was constant; in contrast, the control group without supplementation showed an 87.5% increase in A. galli number.

The histopathological findings for the duodenum, ileum, caecum, liver, kidney, and spleen after 18 days of alternate supplementation of A. catechu nut and A. cordifolia leaves powder in 54-week-old laying hens are presented in Table 2. In the control group, histopathology of the small intestine (duodenum, jejunum, ileum) and caecum displayed the presence of erosion (Er). In the 0.5% supplemented group, erosion was still present in the ileum and caecum to variable degrees. Strikingly, after 1% supplementation, no erosion could be detected in the caecum; Figure 2 displays sample caecum histopathology of the control and T1.0% groups.
There was inflammation (I) in the control duodenum, ileum, and caecum, but none after supplementation at 0.5% and 1%. In the ileum, goblet cells were not found in control birds but were found in one of the 0.25% supplemented birds, four of the 0.5%, and three of the 1% supplemented birds. A helminth was found in one bird with 0.5% supplementation. One control bird and two 1% supplemented birds had nodular lymphoid tissue (NL).
Liver histopathology exhibited the presence of necrosis (N), infiltration of lymphocytes around blood vessels (Ic), congestion (C), and haemorrhage (H). Necrosis was found in the control liver (4/5 birds) and was reduced after supplementation. Infiltration of lymphocytes (Ic) around blood vessels was found in four of five control birds; there were only two birds with Ic after 1% supplementation. Congestion was found in one of the 1% supplemented birds. Haemorrhage (H) was found only in one control bird and in none of the supplemented birds.
Several birds from all groups had inflammation in the kidney. There was necrosis in one bird each of the 0.5% and 1% supplemented groups, while congestion (C) was found in one 1% supplemented bird. Observation of the spleen showed that follicle lymphocytes (FL) were found in two of five birds in the control group. No FL was observed after supplementation at any dosage. White pulp (WP) was observed in all supplemented birds but in none of the control group. Inflammation was found only in one of the 0.5% supplemented birds.

Table 3 presents the serum albumin, globulin, and transaminase levels of all groups before and after supplementation. There was a significantly higher level of globulin in the groups supplemented with phytogenic additives before supplementation (p < 0.05). However, supplementation did not affect serum albumin, globulin, ALT, or AST (p > 0.05) in any group.
Paired t-tests revealed that serum albumin improved whereas serum globulin was significantly reduced after supplementation (p < 0.05). Serum transaminases were also significantly reduced after supplementation (p < 0.05). Table 4 shows that the average total egg production and egg weight of all groups at the end of phytogenic powder supplementation were not significantly different (p > 0.05), although the values were higher in all supplemented groups.
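The group comparisons and paired before-after tests reported above follow a standard workflow: ANOVA across groups, a post-hoc test when the ANOVA is significant, and paired t-tests within birds. The following Python sketch illustrates that workflow; the arrays hold made-up example values rather than the study data, and Duncan's multiple range test is omitted because it is not available in SciPy (the authors ran it in SPSS).

import numpy as np
from scipy import stats

# Hypothetical serum albumin values (g/dL) per group; illustrative only.
t0   = np.array([1.8, 1.9, 2.0, 1.7, 1.9])        # control, 5 hens
t025 = np.array([2.0, 2.1, 1.9, 2.2, 2.0, 2.1])   # 0.25% group, 6 hens
t05  = np.array([2.1, 2.0, 2.2, 2.1, 2.3, 2.0])   # 0.5% group
t10  = np.array([2.2, 2.1, 2.3, 2.2, 2.4, 2.1])   # 1.0% group

# One-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(t0, t025, t05, t10)

# Paired t-test: the same birds before versus after supplementation.
before = np.array([1.9, 2.0, 1.8, 2.1, 2.0, 1.9])
after  = np.array([2.2, 2.3, 2.1, 2.4, 2.2, 2.1])
t_stat, p_paired = stats.ttest_rel(before, after)

# Percentage change in parasite counts relative to baseline.
def percent_change(n_before, n_after):
    return 100.0 * (n_after - n_before) / n_before

print(p_anova, p_paired, percent_change(8, 7))  # 8 -> 7 is a -12.5% change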
Discussion
Ascaridia galli is a parasitic roundworm of the phylum Nematoda. In our case, the birds obtained from local farmers were already infected with the parasite during rearing. In the tropics, Ascaridia galli is the most prevalent parasite affecting layer husbandry [2-9, 53, 54]. Infestation of A. galli in control birds without supplementation increased by 87.5% after 18 days (Table 1), indicating constant infection. In contrast, a slight reduction (12.5%) occurred at the lowest supplementation dosage (0.25%), and higher dosages showed no change in faecal A. galli numbers. Hence, the supplemented groups, which did not experience an increase in faecal A. galli, suggest that alternate supplementation of A. catechu and A. cordifolia every 3 days can prevent endoparasite development, thereby preventing the increased parasitic growth that occurred in the unsupplemented control group. Based on our methodological hypothesis, alternate supplementation allowed A. catechu nut powder (with anthelmintic potential) to kill the A. galli larvae, followed by supplementation of A. cordifolia (with wound-healing activity) to heal some lesions resulting from the infection without interference from A. catechu. After healing, another 3-day round of A. catechu nut re-supplementation killed more parasites, after which A. cordifolia was re-administered to continue the healing process (Figure 1). Thus, the prevention of an increase in faecal A. galli in the supplemented groups was likely attributable to the active alkaloids of Areca nut (arecoline, arecaidine, guvacoline, and guvacine), as shown by several studies [29-31]. When both supplements were administered as a mixture, the healing process was disturbed, as a higher dosage of A. catechu (1%) could harm the organ's cell lining and interfere with healing. Such a harmful effect of A. catechu has been demonstrated and may be due to its alkaloid content [22,26]. Therefore, alternating supplementation every 3 days gives each supplement time to exert its biological function without interference from the other. Another possibility is that when the supplements were administered together, an antagonistic effect occurred: A. catechu, with its cytotoxic activity, was capable of killing the larvae but was also cytotoxic to the intestinal cell lining [28], which interfered with the healing action of A. cordifolia. A study [55] that purposely infected indigenous chickens with A. galli and supplemented a mixture of A. catechu nut and A. cordifolia leaf powder for only ten days found that the supplemented group had more A. galli eggs per gram in the faeces and the duodenum than the control group. It was unclear why the mixture failed to reduce A. galli in that study, but we speculate that the administration schedule and combined dosage play an important role. Our study therefore adds significant evidence that 1% supplementation of A. catechu and A. cordifolia powder, alternated every 3 days, results in a better reduction of faecal A. galli.
For R. cesticillus, a natural reduction of 85.7% occurred in the control group (Table 1), whereas all supplemented groups showed a 100% reduction, suggesting that alternate supplementation improves the reduction. Raillietina species are found in the jejunum and ileum of chickens and can reduce growth, causing weakness and digestive tract obstruction; their larval stage (cysticercoid) resides in various invertebrate intermediate hosts, such as ants, beetles, small wasps, or termites [56-58]. Raillietina cesticillus is known to spread via flies and cockroaches attracted by dirty housing [59-62]. With several thousand or more hens, some small-scale farmers have no assistance for daily sanitation. Furthermore, outdoor cage housing with its daily excreta quickly attracts flies and cockroaches, facilitating the spread of the parasites. The control hens experienced a natural reduction due to our management, namely daily cleaning of all excreta and of feed and drinking containers. The intermediate hosts were thereby eliminated, and with clean feed and drinking water daily, the hens' natural immune defences could work better, reducing faecal R. cesticillus in the control group. The supplemented groups improved on this natural reduction, indicating a contribution of the A. catechu nut and A. cordifolia leaf powder. In the 0.5% supplemented birds, one helminth type, Tetrameres americana, appeared after 18 days. Tetrameres americana are small parasitic roundworms that infect chickens through ingestion of an intermediate host, such as grasshoppers, cockroaches, earthworms, and water fleas. This helminth had already been found in previous layers from the same farm [5]. Many studies show that helminth infestation can never be fully eradicated, owing to its life cycle: a fertile egg can be re-ingested via faeces and develop in the intestinal tract, after which the eggs are shed with the faeces and the cycle repeats. The appearance of T. americana in the 0.5% supplemented group could reflect parasites expelled from the intestine by the action of the active alkaloids of Areca nut (arecoline, arecaidine, guvacoline, and guvacine) [30]. The ability of Areca nut alkaloids to penetrate the mucosal lining could reach buried parasites, which gastrointestinal peristaltic activity can then expel into the faeces [22,63].

Figure 2. Samples of caecum histopathology showing heavy erosion in the control group without supplementation (T0 replicate 4, 4× magnification) and freedom from erosion in the 1.0% A. catechu nut and A. cordifolia leaf powder supplemented group (T1.0% replicate 3, 10× magnification). Histopathology of all groups is summarized in Table 2.

Table 3. Serum albumin, globulin, and transaminases in 54-week-old laying hens before and after 18 days of alternate supplementation with A. catechu seed and A. cordifolia leaf powder. Each group consisted of 6 hens, except the control group of 5 hens (one was excluded due to an accident). Values represent group means ± standard deviation. A p-value with no superscript * indicates no effect of supplementation (p > 0.05, ANOVA); different superscripts within the same row indicate significance at p < 0.05 (post-hoc test); a p-value with superscript * indicates a significant difference between before and after supplementation (t-test).
Histopathological examination indicated that the intestinal and caecal samples of all birds from the unsupplemented control group showed erosion (Er) (Table 2). Erosion in these tissues is a sign of infection from helminth and coccidial infestation, common in layers worldwide [5,44]. It also indicates that the laying hens from the farmers had already experienced chronic infections whose severity could not be determined. When a hen ingests an infective A. galli egg, the first larval stage hatches in the proventriculus or duodenum. It moults into second- and third-stage larvae in the lumen, with some attaching to the mucosal lining and feeding. Lesions and haemorrhaging of the mucosa occur during third-stage larval feeding, causing enteritis, anaemia, and diarrhoea [54,64]. In supplemented groups, erosion was still present in the duodenum, jejunum, and ileum of some hens, suggesting that organ damage from endoparasites can only be partly healed by supplementation. We speculate that the healing is attributable to both phytogenics, as both contain polyphenolic compounds with antioxidant properties and re-epithelialization activity [33,34,36,37].
Goblet cells appeared at 0.5% and 1% supplementation in the jejunum, ileum, and caecum but not in the duodenum. At 1% supplementation, three hens showed no erosion, and the presence of goblet cells indicated regeneration, as these cells produce and maintain the mucus layer along the intestinal lining. Two birds in this group had lymph nodules, indicating an ongoing immune response against endoparasite antigens; the parasites could be hiding under the epithelial layers, allowing reinfection to occur. Strikingly, for the caecum, supplementation with the powder decreased erosion remarkably compared with the unsupplemented control. At 0.25% supplementation, half of the samples were free from erosion, and at 1%, all samples were free from erosion. This could be due to the inability of sporozoites and merozoites to develop because of the A. catechu nut powder (Table 1, after 1% supplementation), while A. cordifolia assists in healing the damaged tissue [33-35,65,66]. As previously described, the mechanism could involve the active alkaloids of Areca nut (arecoline, arecaidine, guvacoline, and guvacine) [30]: the lipophilic alkaloids can penetrate the lipid bilayer of the mucosal lining to reach and kill buried parasites [22,63]. During the 3 days of Areca nut administration, cytotoxicity against parasites is more prominent than healing activity; the next round of A. cordifolia leaf administration then further supports healing mediated by re-epithelialization. Furthermore, the ursolic acid of A. cordifolia can induce epidermal keratinocyte differentiation (via peroxisome proliferator-activated receptor-alpha), which also assists in healing caecal erosion [67]. The caecum is the hen's blind sac before excretion of undigested feed; the A. cordifolia leaf powder and fibrous Areca nut could partly reach the caecum, remain there longer, and hence act longer to heal the erosion. Further study of this possibility is warranted.
Liver necrosis was observed in only one tissue sample after 0.25% and 0.5% supplementation. At 1% supplementation, three samples showed necrosis, which could be due to several factors. First, increasing the dosage of A. catechu nut powder could be toxic to the liver: arecoline from the betel nut is cytotoxic and can cause fibrosis [68]. Necrosis appears as part of an inflammatory response, after which regeneration occurs and continues with tissue remodelling; uncontrolled remodelling can lead to fibrosis, which could occur at 1% supplementation. Alternatively, at lower dosages, Areca nut powder can act as an anthelmintic, antioxidant, antibacterial, and anti-inflammatory agent [24,28,36,69]; accordingly, in the 0.25% and 0.5% supplemented birds, the number of liver samples with necrosis and perivascular lymphocyte infiltration was reduced. Kidney histopathology showed that supplementation had no effect, as the samples' initial and final conditions were similar on average. The kidney filters the blood, excretes the end products of metabolism, and regulates the concentrations of hydrogen ions and minerals in extracellular fluid. To the best of our knowledge, no studies relate the kidney to endoparasite infestation, and the presence of inflammation in some samples from each group indicated a normal condition for 56-week-old layers.
The spleen is a secondary lymphoid organ in chickens. The presence of follicle lymphocytes (FL) in the control group's spleens indicated an immune response against an antigen. The antigen likely derives from Marek's disease virus, which infects layers worldwide except in flocks purposely raised under pathogen-free conditions. Significant swelling of the visceral organs during organ sampling supports possible Marek's viral infection. Marek's infection occurs in layer chickens from 1 to 3 weeks old, with or without clinical signs, and can reduce growth and egg production [12]. No follicle lymphocytes were found in any supplemented birds, but white pulp appeared. White pulp (WP) indicates coccidial infection, mainly from Eimeria tenella; reinfection stimulates lymphocytes to proliferate more quickly, followed by an increase in the diameter and weight of the white pulp [70]. Although no E. tenella was found in the birds' faecal samples, remnants of the inflammatory response in the spleen were still observable. The appearance of white pulp indicates that supplementation improves the immune response against endoparasite infection.
Our present results showed that serum albumin concentration improved significantly after supplementation (p < 0.05) (Table 3). As albumin synthesis occurs in hepatocytes, this indicates an improvement in liver function. This improvement is supported by the finding that supplementation significantly reduced transaminases (p < 0.05), consistent with our previous study showing reduced transaminase activity after alternate supplementation with both powders at one-tenth of the present dosage in 42-week-old layers [5,44]. All phytogenics taken via the gastrointestinal tract (GIT) are carried to the liver. The higher dosage used in the present study could impose a higher hepatic workload and cause cell damage, especially from the active alkaloids of Areca nut. However, polyphenols, the main constituents of Areca nut (11.1-29.8%), are well-known antioxidants that could counteract cell damage from the active alkaloids (0.11-0.24%) [23] and from radical generation. Therefore, the outcome depends on the dosage, the administration schedule, and the combined use with another herb. Subsequent administration of A. cordifolia leaves, with their epithelialization activity, 3 days after Areca nut administration [35] further supports the prevention and healing of cell damage. The reduction in serum globulin after supplementation indicates reduced inflammation resulting from the immune response to endoparasites, consistent with the faecal parasite counts and histopathology after supplementation, as described previously.
Our results support the anti-endoparasitic and wound-healing functions of Areca nut and A. cordifolia leaf powder in vivo in laying hens, especially in the caecum, improving serum albumin and transaminases without affecting egg production.
Conclusion
Our results demonstrated that supplementation with phytogenic Areca catechu nut and Anredera cordifolia leaf powder, alternated every 3 days for 18 days, reduced faecal endoparasites and improved the histopathology of endoparasite-affected tissues in laying hens, especially in the caecum.
A Sensorless Predictive Current Controlled Boost Converter by Using an EKF with Load Variation Effect Elimination Function
To realize accurate current control for a boost converter, precise measurement of the inductor current is required for high-resolution current regulation. Current sensors are widely used to measure the inductor current; however, the sensors and their processing circuits add significant hardware cost, delay, and noise to the system and can degrade its reliability. Current sensorless control techniques can therefore provide cost-effective and reliable solutions for various boost converter applications. According to the derived accurate model, which contains a number of parasitics, the boost converter is a nonlinear system. An Extended Kalman Filter (EKF) is proposed for inductor current estimation and output voltage filtering, giving the system the same advantages as sensored current control mode. To implement the EKF, the load value is required, but the load may vary over time, leading to errors in the estimated current and the filtered output voltage. To solve this issue, a load variation effect elimination (LVEE) module is added. In addition, a predictive average current controller is used to regulate the current. Compared with a conventional voltage-controlled system, the transient response is greatly improved, since the current reaches its reference in only two switching cycles. Finally, experimental results are presented to verify the stable operation and output tracking capability for large-signal transients of the proposed algorithm.
Introduction
In recent years, current-mode digitally controlled DC-DC converters have become a popular research topic [1-7]. As one of the most widely used DC-DC converters, the boost converter has well-developed control research [8-10]. Compared with a voltage-mode controlled system, current-mode control offers higher response speed and larger loop gain bandwidth. However, realizing high-quality current feedback control requires precision current sensors. State-of-the-art current sensing technologies are reviewed in [11]. Many technologies exist for current sensing; for example, giant magnetoresistance effect based current sensors provide a low-cost isolated solution [12,13]. For a boost converter, however, there are three common types of current sensors. The first uses a shunt resistor in series with the switching device, the second uses a current mirror to reconstruct the switch component current [14,15], and the third uses Hall effect sensors [16]. The first type adds power losses, while the second may suffer from EMI problems [17]. The third type is the most accurate and can be designed to be highly immune to EMI [18], but the cost of most Hall current sensors is relatively high. The current sensors and their signal processing circuits introduce delay and noise into the control circuitry and also contribute to the overall cost of the converter. Therefore, a sensorless current controlled boost converter, which operates in current control mode with all the above advantages but without a current detection module, has great potential in both academic and industrial applications.
To realize sensorless current control, a current observer is normally used to estimate the current. The performance of the current observer depends heavily on the accuracy of the system model [19-21]. In [22-24], a variety of boost converter modeling strategies are investigated. P. Midya proposed a sensorless current control strategy based on a current observer in 2001 [25]; the model is quite accurate, but its implementation is far too complex for real-time digital control. A simpler algorithm using a feed-forward current observer based on the input voltage was published in 2004 [26]; the input voltage feed-forward in the observer effectively avoids the impact of output voltage variations on the current observer. In this algorithm, however, the influence of the parasitic parameters was not considered, and the current estimation error is relatively large. To improve the boost converter's dynamic response, a Control-Lyapunov-Function-based sensorless current control strategy for a boost PFC was proposed in [27]. A new sensing technique, which measures the maximum and minimum values to obtain the mean output voltage, eliminates the double-frequency ripple, so the bandwidth of the voltage controller can be increased significantly. Cho investigated a state-observer-based sensorless controller using Lyapunov's direct method for boost converters [28]. A state observer is constructed to estimate the inductor current from the input and output voltages together with the switch control signal, and the system shows good performance in terms of transient response.
An optimized reduced-order current observer was proposed for a buck converter by Min [1], employing valley current control with trailing-edge (TE) PWM modulation; the current estimation is quite accurate and the algorithm is easy to implement. In [29], a reduced-order current observer is used for current estimation in a boost converter. Its current control mode differs from [1] because peak current control with TE PWM modulation is applied. According to [30], this combination can make the system unstable. To solve this issue, the reference valley current two switching cycles ahead is derived from the reference peak value, and the duty ratio of the next switching cycle is then calculated from this reference valley current, realizing stable peak current control. Furthermore, in some applications, if the estimated current is the average current, average current control can be implemented directly to reduce computational complexity. The main contribution of that work is identifying the root cause of the output steady-state error: to eliminate the voltage steady-state error and achieve high-accuracy current estimation, a comprehensive compensation strategy was proposed to eliminate the effects of component parasitic parameters and signal sampling error.
For the current control algorithm, predictive current control (PCC) is a good candidate, featuring high robustness, fast response, and low implementation complexity. Combining sensorless current control with PCC is therefore an attractive strategy for boost converter control. Much of the literature focuses on PCC. In [31], Stephane Bibian proposed a high-performance predictive dead-beat digital control algorithm to eliminate the effect of computational delay; since the duty ratio is updated every two switching cycles, its response speed is limited. To achieve faster response, Chen proposed an algorithm that eliminates inductor current disturbances within two switching cycles for peak, average, and valley current control modes [30]. Lai further investigated PCC-based peak current mode control in [32]; the effectiveness of PCC with a leading-edge PWM modulation scheme in eliminating limit-cycle disturbances was verified by theoretical derivation. In [33], the authors combined predictive and feed-forward control with a PID controller to achieve fast transient response and low overshoot, reducing the transient response time by approximately 50%.
The aforementioned literature has made large contributions to the development of boost converter control. In this paper, an accurate boost converter model, which includes a number of parasitics, is derived. As can be seen from this model, the boost converter is a nonlinear system. Since the EKF is suitable for state observation and measurement noise filtering in nonlinear systems, it is chosen to act as a current sensor estimating the boost converter's inductor current. There is much literature on EKF-based state estimation for nonlinear systems [34,35]. For a boost converter, however, the load value is needed for EKF design, and it changes with operating conditions; this variation can cause errors in both the current estimate and the filtered output voltage. Unfortunately, no solution for the load variation issue in EKF-based current observers has been reported. Therefore, a load variation effect elimination (LVEE) method is proposed together with the EKF. With the LVEE module, current estimation accuracy, good dynamic response, and zero output voltage steady-state error can be guaranteed. Moreover, the PCC controller improves the system's dynamic performance. The proposed method suits applications with mainly resistive loads, such as resistive electric heating, electric ovens, and filament lamps. For inductive and capacitive loads, such as boost converters in hybrid electric vehicles, induction ovens, and battery chargers, the LVEE module needs further investigation, which is left for the next stage of research. For practical applications, several further points should be considered. First, the method is suitable for CCM operation; extra modifications are needed if the application works in DCM. Second, temperature variation and aging can change the system parasitics: the inductor's parasitic resistance is easily affected by temperature, and the capacitor ESR changes dramatically (up to a 100% increase) with aging. If the system model is not updated accordingly, current estimation errors result. A look-up table can be used to store these parameters under different conditions so the model parameters can be updated, although the ultimate solution is online parameter identification. Finally, note that there is no pure resistor in the real world; depending on the application, the load should be treated as a parasitic inductance in series with a resistance, or as an inductive or capacitive load with resistive parasitics. In this paper the load is treated as a resistor, as most academic literature does to keep the presentation easy to understand; in addition, the load's parasitic inductance in the tests reported here is negligible.
The paper is organized as follows. In Section 2, the overall control structure and mathematical model of a boost converter with the proposed algorithm are presented, and an accurate model of this boost converter, which contains a number of parasitics, is derived. In Section 3, the current estimation module, which consists of an EKF together with an LVEE module, is designed. It not only estimates the inductor average current accurately and filters the measurement noise of the output voltage but also helps improve the system's steady-state and dynamic performance. In addition, a detailed analysis of the LVEE module explains its effect on eliminating the output voltage steady-state error. An average-current-mode PCC controller is designed in Section 4; the error between the reference current and the estimated average current is eliminated in two switching cycles. Finally, the experimental results are given in Section 5.
The System Control Structure
The structure of the boost converter with the proposed algorithm is shown in Figure 1. The system comprises two control loops. The outer loop is a voltage control loop that uses a PI controller to regulate the output voltage VO(k), which is also filtered by the EKF; its output is the current reference. The inner loop is a current control loop consisting of a current estimation module and a PCC controller. The complete design process for the current loop is investigated in this paper.
The current estimation module consists of an EKF and an LVEE module. The EKF estimates the inductor average current and filters the measurement noise of the output voltage; the LVEE module eliminates the effect of load variations. For current control, average current control mode is used: the PWM duty ratio of the next switching cycle is derived from the sampled input and output voltages so that the error between the reference and actual average currents is eliminated.
The Accurate Mathematical Model of the Boost Converter
Since model accuracy affects the performance of current estimation, an accurate system model is necessary. An accurate boost converter model with a series of parasitic parameters is derived as follows, taking the inductor current IL(t) and capacitor voltage VC(t) as the state variables.
When the switch is on, the inductor is charged from the input while the capacitor discharges to supply energy to the load; during the on period the capacitor discharging current is −VO(t)/R. When the switch is off, the inductor charges the capacitor and provides energy to the load; during the off period the capacitor charging current is IL(t) − VO(t)/R. Writing the corresponding component equations (Equations (3)-(6)) in matrix form gives

Switching on: X'(t) = F1·X(t) + G1·Vin(t) (7)
Switching off: X'(t) = F2·X(t) + G2·Vin(t) (8)

where X(t) = [IL(t), VC(t)]^T and the matrices F1, G1, F2, and G2 collect the circuit parameters, including the parasitic elements. Integrating Equations (7) and (8) over the switching-on interval 0 ~ dT and the switching-off interval dT ~ T, respectively, adding the results, and dividing the sum by the switching period T gives the average state function over the whole switching cycle:

X'(t) = [F1·d(t) + F2·(1 − d(t))]·X(t) + [G1·d(t) + G2·(1 − d(t))]·Vin(t) (11)

According to the capacitor charge balance principle, when the system is in steady state the average output voltage VO equals the average capacitor voltage VC, so X(t) can be described as X(t) = [IL(t), VO(t)]^T. Equation (11) contains the nonlinear term X(t)d(t), which demonstrates that the boost converter is a nonlinear system. The EKF, which is widely used for nonlinear systems, is therefore chosen for current estimation and output voltage filtering.
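To make the averaged model in Equation (11) concrete, the following Python sketch integrates it with a forward-Euler step for a boost converter whose only modeled parasitic is the inductor series resistance RL; all component values are illustrative placeholders rather than the paper's hardware parameters.

import numpy as np

# Illustrative parameters (not the experimental values from Tables 1-2).
L, C, R, RL = 100e-6, 100e-6, 24.0, 0.1   # H, F, ohm, ohm
Vin, T = 6.0, 1.0 / 100e3                 # input voltage (V), switching period (s)

def averaged_derivative(x, d):
    """Averaged boost model, x = [IL, VO], d = duty ratio."""
    IL, VO = x
    # The inductor sees the output only during the (1 - d) off interval.
    dIL = (Vin - RL * IL - (1.0 - d) * VO) / L
    # The capacitor is charged by IL only during the off interval.
    dVO = ((1.0 - d) * IL - VO / R) / C
    return np.array([dIL, dVO])

x = np.array([0.0, 0.0])
for k in range(20000):                    # simulate 20,000 switching cycles
    x = x + T * averaged_derivative(x, d=0.5)
print(f"steady state: IL = {x[0]:.3f} A, VO = {x[1]:.3f} V")

With d = 0.5 the sketch settles near the ideal conversion ratio VO = Vin/(1 − d) = 12 V, slightly reduced by the RL drop.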
Proposed Current Estimation Strategy
In this section, an EKF for sensorless current control of the boost converter is first proposed, and an LVEE method is then investigated. In addition, further theoretical analysis shows how the LVEE module eliminates the steady-state error of the filtered output voltage.
An EKF for Current Estimation
The accurate model of the boost converter was built in Section 2, and the EKF for current estimation and voltage filtering can be derived from it. To realize EKF-based digital control, the state function of the boost converter must first be converted to the discrete domain.
Converting Equation (11) to the discrete domain yields a nonlinear stochastic difference model of the boost converter:

X(k) = f(X(k − 1), d(k − 1)) + w(k − 1) (12)
Z(k) = H·X(k) + v(k) (13)

where the input is the duty ratio d(k); Z(k) is the measurement variable, namely the output voltage VO(k); and w(k − 1) and v(k) are the process and measurement noises, which are not coupled. Essentially, an EKF consists of a group of mathematical functions that realize prediction, correction, and estimation; by using the EKF, the covariance of the estimation error is reduced as far as possible. According to Equation (12), the time-update (prediction) functions of the EKF are

X^-(k) = f(X(k − 1), d(k − 1)) (14)
P^-(k) = A(k)·P(k − 1)·A(k)^T + Q (15)

where X^-(k) and P^-(k) are the a priori state estimate and error covariance, A(k) is the Jacobian of f evaluated at the previous estimate, and Q is the process noise covariance. The measurements are then used to correct the predicted state and error covariance; the measurement-update equations of the EKF are

Kg(k) = P^-(k)·H^T·[H·P^-(k)·H^T + R]^-1 (17)
X(k) = X^-(k) + Kg(k)·[Z(k) − H·X^-(k)] (18)
P(k) = [I − Kg(k)·H]·P^-(k) (19)

Equations (17)-(19) are the correction equations for the state and covariance predictions; Kg(k) is the filter gain and R is the measurement noise covariance. All the EKF equations are given by Equations (14)-(19). As the above process shows, only the input and output voltages need to be sampled; the inductor current can then be estimated and the output voltage filtered.
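The recursion in Equations (14)-(19) maps directly to code. The following Python sketch implements one EKF step for the two-state boost model, using a forward-Euler discretization of Equation (11); the noise covariances Q and Rm and the component values are illustrative placeholders, not the tuned values used in the experiments.

import numpy as np

L, C, R_load, RL = 100e-6, 100e-6, 24.0, 0.1
Vin, T = 6.0, 1.0 / 100e3
H = np.array([[0.0, 1.0]])       # only the output voltage is measured
Q = np.diag([1e-4, 1e-4])        # process noise covariance (placeholder)
Rm = np.array([[1e-2]])          # measurement noise covariance (placeholder)

def f(x, d):
    """One Euler step of the averaged model, Equation (11)."""
    IL, VO = x
    dIL = (Vin - RL * IL - (1.0 - d) * VO) / L
    dVO = ((1.0 - d) * IL - VO / R_load) / C
    return x + T * np.array([dIL, dVO])

def jacobian(x, d):
    """A(k): Jacobian of f with respect to the state."""
    return np.eye(2) + T * np.array([
        [-RL / L,        -(1.0 - d) / L],
        [(1.0 - d) / C,  -1.0 / (R_load * C)]])

def ekf_step(x_est, P, d, z):
    # Time update, Equations (14)-(15).
    x_pred = f(x_est, d)
    A = jacobian(x_est, d)
    P_pred = A @ P @ A.T + Q
    # Measurement update, Equations (17)-(19).
    Kg = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + Rm)
    x_new = x_pred + (Kg @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - Kg @ H) @ P_pred
    return x_new, P_new

# Per-cycle usage, with vo_sample the raw output voltage measurement:
#   x_est, P = ekf_step(x_est, P, d, np.array([vo_sample]))
# x_est[0] is the estimated average inductor current, x_est[1] the filtered VO.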
The LVEE Module
The load R appears in both F1 and F2. In practice, the load varies with environmental and operating conditions; if this variation is not accounted for, it leads to errors in the current estimate and in the steady-state output voltage. In this paper, the load R is replaced by an incremental resistance to eliminate the load variation effect, forming the LVEE module.
Using the average state method, in steady state the relationship between the load current IO(k) and the inductor current IL(k) is

IO(k) = (1 − d(k))·IL(k) (20)

The relationship between the output voltage and the load current is

VO(k) = R·IO(k) (21)

Combining Equations (20) and (21), the relationship between the inductor current and the output voltage is obtained:

R = VO(k) / [(1 − d(k))·IL(k)] (22)

Using the EKF outputs ÎL(k) and V̂O(k) to replace the actual inductor current IL(k) and output voltage VO(k) in Equation (22), R is derived as the following incremental resistance:

R̂(k) = V̂O(k) / [(1 − d(k))·ÎL(k)] (23)

Substituting Equation (23) into F1 and F2 eliminates the effect of load variation on the EKF steady-state outputs and improves the accuracy of both the current estimation and the voltage regulation.
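In code, the LVEE correction is a single computation performed each switching cycle before the EKF prediction, re-deriving the load from the filter's own outputs per Equation (23). A sketch continuing the Python example above; the clamping bounds and the minimum-current guard are our own additions against division by a near-zero current at start-up, not part of the derivation.

def lvee_update(x_est, d, R_min=1.0, R_max=1e3):
    """Equation (23): incremental resistance from the EKF outputs."""
    IL_hat, VO_hat = x_est
    R_hat = VO_hat / ((1.0 - d) * max(IL_hat, 1e-3))
    return min(max(R_hat, R_min), R_max)

The returned value replaces R_load in the model f (equivalently, in F1 and F2) before the next prediction step.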
Analysis on the LVEE Module
To determine the effect of the LVEE module on the output voltage steady-state error, a theoretical analysis is carried out as follows; RL is considered in the verification process. The state function of the output voltage, Equation (24), is derived by discretizing Equation (11).
In steady state the sampled output voltage stays constant, so VO(k) = VO(k + 1). With the steady-state error of the output voltage defined as the difference E1 between the filtered and the actual output voltage, substituting Equation (23) into Equation (24) gives Equation (25). In steady state X'(t) = 0, so converting Equation (11) into the discrete domain gives the steady-state inductor current IL(k) as Equation (26), while the estimated inductor current ÎL(k) is expressed as Equation (27). Subtracting Equation (27) from Equation (26) gives the inductor current estimation error, Equation (28), and substituting Equation (28) into Equation (25) yields Equation (29). During the d(k)T interval the capacitor discharges at approximately IL(t)·(1 − d(k)), so the peak-to-peak value of the output voltage VPP (the voltage ripple) is given by Equation (30); substituting Equation (30) into Equation (29) gives Equation (31). In practice VPP is always far lower than the output voltage, which means E1 << 1, and normally T·d′²(k)/(C·RL) << 1, since the error between the input and output of the EKF is low. Equation (31) is therefore less than 1, so the error of the EKF output decreases and the filtered value finally equals the actual output voltage.
Average Current Control Based on PCC
In this paper, a leading-edge PWM modulation scheme is used. According to [31], subharmonic oscillation exists in average current control mode: even when the average current equals the reference current in steady state, the peak current may not be constant, and this causes the oscillation. In this section, a novel PCC-based control algorithm is proposed. When a disturbance occurs in the current control loop, the current controller first regulates the peak current to a constant value, and the average current is then regulated to the reference value in the following switching cycles. Figure 3 shows the inductor current waveform under the proposed average current control mode with leading-edge PWM modulation. Assume a disturbance on the inductor peak current in the kth switching cycle, described by Equation (32), where Δd(k) = d − d(k) and d is the steady-state duty ratio. M1(k) is the positive slope of the inductor current in the kth switching cycle and M2(k) is the absolute value of the negative slope; they are described by Equations (33) and (34), where RCOMP = RC + d′d/(2fC) and f is the switching frequency. In Figure 3, the shaded area is the difference required to keep the peak current of the (k + 1)th cycle the same as in the kth cycle, and it is derived as Equation (35). When the peak current of the (k + 1)th cycle stays constant, its average current is given by Equation (36). First, the average current of the (k + 2)th cycle is guaranteed to equal the reference current by adjusting the duty ratio of the (k + 1)th cycle, which is derived from Equation (32). As can be seen from Figure 3, if the average and peak current errors of the (k + 2)th cycle are both zero, the peak current variation of the (k + 1)th cycle is given by Equation (38). Because the switching period is short compared with the system's electrical time constant, the slopes of two consecutive switching cycles can be regarded as constant, M1(k) ≈ M1(k + 1) and M2(k) ≈ M2(k + 1). Substituting Equation (38) into Equation (34), the duty ratio of the (k + 1)th cycle is derived as Equation (39). Using Equation (39) to regulate the system, the average current IL(k + 2) in the (k + 2)th cycle equals the reference current, and the proper duty ratio d(k + 2) of the (k + 2)th cycle is then derived from Equation (39). d(k + 2) makes the estimated average current equal to the reference current and keeps the peak current constant, ΔIP(k + 2) = 0. The proposed current control algorithm can therefore eliminate the current error in two switching cycles without causing any oscillation, even for d ≥ 0.5.
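The exact expressions in Equations (35)-(39) depend on the measured slopes and the RCOMP compensation term, but the control flow they describe is compact. The following Python sketch illustrates a deadbeat-style average-current update for a boost converter using the idealized slopes M1 ≈ Vin/L and M2 ≈ (VO − Vin)/L; it is a simplified stand-in for the paper's derivation, with the parasitic compensation omitted.

def pcc_duty(IL_ref, IL_est, Vin, VO, L=100e-6, T=1.0/100e3):
    """One predictive average-current update (idealized, no RCOMP term).
    Corrects the steady-state duty ratio with a deadbeat term that removes
    the average-current error over the next switching cycles."""
    M1 = Vin / L                 # on-interval current slope (A/s)
    M2 = (VO - Vin) / L          # off-interval slope magnitude (A/s)
    d_ss = 1.0 - Vin / VO        # steady-state duty ratio of a boost
    d_corr = (IL_ref - IL_est) / ((M1 + M2) * T)
    return min(max(d_ss + d_corr, 0.0), 0.95)   # clamp to a safe range

At the operating point of the experiments below (Vin = 6 V, VO = 12 V), d_ss evaluates to 0.5, the boundary case at which conventional peak current control would require slope compensation.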
Experimental Results
In order to verify the proposed algorithm, a series of steady-state and transient experiments with load and line voltage changes were carried out on a boost converter. For comparison, the same experiments were performed using conventional voltage control mode on the same hardware. Design parameters of the target boost converter are shown in Table 1. The boost converter consists of control and power sections. The core of the control section is a Texas Instruments TMS320F2812 digital signal processor (DSP). The power section includes the main power stage and the signal sampling circuits; the input and output voltages are sampled at the beginning of each switching cycle. The switching device of the power stage is an Infineon BSZ110N06NS3 MOSFET, the output capacitor is a Panasonic EEHZC1E101XP, and the diode is a Lite-On SB350. The component specifications are presented in Table 2.
For monitoring, the estimated average current is output synchronously by a 12-bit digital-to-analog converter (DAC), a TLV5616. The actual inductor current is measured by a current probe with a resolution of 200 mV/A; for easy comparison, the DAC output is set to the same scale. In the following voltage waveforms, Channel 1 is AC-coupled with fine resolution to show the voltage ripple clearly, while Channel 2 shows the output voltage itself at a resolution of 2 V/div. In the current waveforms, Channel 1 is the actual current waveform, marked il, and Channel 2 is the estimated average current waveform, marked IL. (1) Experiments verifying the LVEE module function. Figure 4a,b show the steady-state voltage and current waveforms, respectively, without the LVEE module. As shown in Figure 4a, the steady-state output voltage is 11.57 V, a steady-state error of up to 0.43 V. In Figure 4b, the estimated average current is 1.03 A while the actual average current should be 1.16 A, a current estimation steady-state error of 0.13 A. When the LVEE module is added, the steady-state output voltage and current waveforms are presented in Figure 5a,b; neither the output voltage nor the estimated average current has a steady-state error. (2) Experiments under load changes. Figure 6a,b show the output voltage and inductor current waveforms under a load change (R from 24 Ω to 16 Ω) using the proposed algorithm. In Figure 6a, the output voltage dips to 11.52 V and returns to 12 V within 710 µs after the load change. As shown in Figure 6b, the actual average inductor current rises from 1.15 A to 1.77 A within 710 µs of the load change and equals its estimated value. The output voltage waveform under the same load change with conventional voltage control is shown in Figure 7: it dips to 11.15 V and returns to steady state in 1 ms. Compared with Figure 6a, the voltage dip is 77% larger and the response time 41% longer, so the system with the proposed algorithm performs much better under load changes. (3) Experiments under line voltage changes. The corresponding waveforms are shown in Figure 8a,b. From Figure 8a, the output voltage decreases to 11.81 V and returns to the steady state (12 V) in 680 µs. In Figure 8b, both the estimated and actual currents converge to steady state in 680 µs, and the estimated average equals the actual average value. For conventional voltage control mode, when the line voltage steps down from 6 V to 5 V, the output voltage waveform is shown in Figure 9: it first decreases to 11.36 V, then returns to 12 V in 1 ms. Compared with Figure 8a, the response time is 47% longer and the voltage drop 237% larger. These experimental results show that the system exhibits very good robustness under both load and line voltage variations with the proposed algorithm: its transient response is much faster and its voltage drop much smaller than with conventional voltage control mode.
Conclusions
In this paper, a precise boost converter mathematical model, which includes a number of parasitic parameters, is built. Current estimation for sensorless control of the boost converter is realized using an EKF current observer together with an LVEE module. A detailed analysis of the LVEE module clarifies why it can eliminate the output voltage steady-state error. For current control, average-current-mode PCC is applied, and the current reaches its reference within two switching cycles. With these approaches, the system shows good performance in current estimation and dynamic response. In addition, the output voltage steady-state error is also eliminated under load variation by the LVEE module. These claims are all verified by experimental results.
Predicting microcystin concentration action-level exceedances resulting from cyanobacterial blooms in selected lake sites in Ohio
Cyanobacterial harmful algal blooms and the toxins they produce are a global water-quality problem. Monitoring and prediction tools are needed to quickly predict cyanotoxin action-level exceedances in recreational and drinking waters used by the public. To address this need, data were collected at eight locations in Ohio, USA, to identify factors significantly related to observed concentrations of microcystins (a freshwater cyanotoxin) that could be used in two types of site-specific regression models. Real-time models include easily or continuously measured factors that do not require that a sample be collected; comprehensive models use a combination of discrete sample-based measurements and real-time factors. The study sites included two recreational sites and six water treatment plant sites. Real-time models commonly included variables such as phycocyanin, pH, specific conductance, and streamflow or gage height. Many real-time factors were averages over time periods antecedent to the time the microcystin sample was collected, including water-quality data compiled from continuous monitors. Comprehensive models were useful at some sites with lagged variables for cyanobacterial toxin genes, dissolved nutrients, and (or) nitrogen to phosphorus ratios. Because models can be used for management decisions, important measures of model performance were sensitivity, specificity, and accuracy of estimates above or below the microcystin concentration threshold standard or action level. Sensitivity is how well the predictive tool correctly predicts exceedance of a threshold, an important measure for water-resource managers. Sensitivities > 90% at four Lake Erie water treatment plants indicated that models with continuous monitor data were especially promising. The planned next steps are to collect more data to build larger site-specific datasets and validate models before they can be used for management decisions.

Electronic supplementary material: The online version of this article (10.1007/s10661-020-08407-x) contains supplementary material, which is available to authorized users.
Introduction
The increasing prevalence of cyanobacterial harmful algal blooms (cyanoHABs) and the toxins they produce are a global water-quality issue that threatens human and wildlife health and necessitates additional monitoring of recreational and drinking water source waters (Harke and Gobler 2015; O'Neil et al. 2012). With changes in rainfall and hydrology and increasing temperatures from climate change, preventing and managing cyanoHABs are likely to become more challenging in the future (Paerl et al. 2016). Multiple strategies to address cyanoHABs are ongoing and include reducing nutrient sources, monitoring for and predicting concentrations of toxins, minimizing exposures to humans and animals, and treating waters to reduce or eliminate cyanoHAB toxins once they occur. Identifying monitoring and prediction tools to help make informed decisions on the potential occurrence of harmful levels of toxins in recreational and drinking waters used by the public is an immediate need.
In 2014, the City of Toledo, located in the Western Basin of Lake Erie, was forced to issue a do-not-drink advisory due to high concentrations of microcystins found in tap water (Jetoo et al. 2015; Qian et al. 2015). Microcystins, a class of more than 100 cyclic peptide congeners, are among the most frequently detected freshwater cyanotoxins (Carmichael 1992). To provide warnings of potential cyanoHAB occurrence, area water managers have proactively turned to the HAB Bulletin, a bi-weekly forecast of cyanobacterial density based on remote sensing of cyanobacterial pigments (National Oceanic and Atmospheric Administration-Great Lakes Environmental Research Laboratory 2019). Microcystins, however, are not pigments and cannot be directly detected by remote sensing (Stumpf et al. 2016).
Site-specific predictive models may be used to augment remote sensing based predictions by quantifying the potential for toxin occurrence. These models provide the opportunity to protect the public from exposure to toxins and are based on a variety of factors associated with toxin production. Factors related to bloom formation and (or) toxin concentrations that could potentially be used in models have been previously identified (Joung et al. 2011; Lee et al. 2015; Otten et al. 2012; Wood et al. 2011), including water temperature, concentrations of phosphorus and nitrogen, water turbidity, lake depth, concentrations of toxin and general cyanobacterial genes, and wind direction and speed. High-frequency measurements (several measurements per hour) from optical sensors that measure algal pigments (chlorophyll and phycocyanin) have also shown promise for early-warning systems (Genzoli and Kann 2016; Izydorczyk et al. 2005; McQuaid et al. 2011). A network of water-quality multiparameter instruments has been operating in Lake Erie to measure these pigments, as well as other physical or chemical water-quality parameters such as temperature, pH, specific conductance, and turbidity (Great Lakes Observing System 2019).
In an earlier study at recreational sites in Ohio lakes, measures of the algal community (phycocyanin, cyanobacterial biovolume, and cyanobacterial gene concentrations) and pH were significantly correlated with microcystin concentrations. Two types of multiple linear regression models could be developed to estimate microcystin concentrations: (1) real-time models that include easily or continuously measured factors that do not require a sample to be collected and (2) comprehensive models that use a combination of discrete laboratory-based measurements on samples and real-time factors. Although comprehensive models take more time and effort, they may provide an early warning for and help identify factors associated with microcystin toxin production.
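The distinction between real-time and comprehensive models is one of predictor availability rather than model form; both are multiple linear regressions whose estimates are compared with an action level. The following Python sketch shows the general pattern, fitting an ordinary least squares model to log-transformed microcystin concentrations and scoring threshold exceedances by sensitivity, specificity, and accuracy; the predictor values and the 1.6 µg/L threshold used here are illustrative placeholders, not site-specific results from this study.

import numpy as np

# Hypothetical training data: columns could be phycocyanin, pH, and an
# antecedent (lagged) streamflow average; y is log10(microcystin, ug/L).
X = np.array([[12.0, 8.4, 310.0],
              [45.0, 9.1, 150.0],
              [ 3.0, 7.9, 420.0],
              [60.0, 9.3, 120.0],
              [20.0, 8.7, 260.0]])
y = np.log10(np.array([0.5, 3.2, 0.1, 8.0, 1.1]))

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Classify estimates against an action level (1.6 ug/L, illustrative).
threshold = np.log10(1.6)
pred, true = y_hat >= threshold, y >= threshold
tp = np.sum(pred & true); tn = np.sum(~pred & ~true)
fp = np.sum(pred & ~true); fn = np.sum(~pred & true)

sensitivity = tp / (tp + fn)   # correctly predicted exceedances
specificity = tn / (tn + fp)   # correctly predicted non-exceedances
accuracy = (tp + tn) / len(y)
print(sensitivity, specificity, accuracy)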
This article describes the results of research by the U.S. Geological Survey (USGS), in cooperation with local and state agencies, to identify factors significantly correlated with microcystin concentrations. Real-time and comprehensive linear regression models were developed to predict an exceedance of a microcystin standard or action value at recreational and water treatment plant sites in Ohio, building on the knowledge gained in a previous study. Samples and data were collected at six Lake Erie and two inland lake sites with histories of elevated microcystin concentrations (Ohio Environmental Protection Agency 2019). In addition to describing development of models, strategies for validating and using models for management decisions and public notification are discussed.
Study sites and sampling frequency
The study was done at eight locations in Ohio-six in the Western Lake Erie Basin and two in northeast Ohio on inland lakes (Fig. 1). Samples were collected twice a month to twice a week from May to November in 2016-2017, with more frequent sampling during the cyanoHAB season (July-September). At Maumee Bay State Park Lake Erie beach (MBSP Beach), samples were collected during 2013-2014 as part of a previous study (Francy et al. 2015). Site names, official USGS site identification numbers, and agencies collecting and processing samples are listed in Table 1. Samples were collected on predetermined sampling dates, not to target a bloom.
The study was done at two recreational sites and six water treatment plant sites. MBSP Beach, operated by Ohio State Parks, is in the southwest corner of Lake Erie along Maumee Bay, east of Toledo, Ohio. The Put-in-Bay recreational site is located off South Bass Island in the village of Put-in-Bay, Ohio. Samples were collected offshore near the north side of the island in a semi-enclosed bay frequented by boaters and jet skiers. Four of the water treatment plant (WTP) sites-Oregon, Carroll, Ottawa County (hereinafter "Ottawa"), and Marblehead WTPs-draw water from Lake Erie at intake locations 2.4, 0.3, 0.5, and 0.2 km offshore, respectively; water depths at the intakes were approximately 2.5-7 m. The inland lake water treatment plant sites draw water from Deer Creek Reservoir (Alliance WTP) and Tappan Lake (Cadiz WTP). The Cadiz WTP draws water from two intakes (upper and lower), 1.5 m from the lake bottom in 4.5 m water depths (summer pool level), that are approximately 25 m from the shoreline. The Alliance WTP draws water from a 0.9-m intake at approximately 8-m water depths. Permission was granted by participating water treatment plants to be included in this article.
Sample collection and field measurements
Samples were collected and analyzed for concentrations of microcystins, cyanobacterial genes, and nutrients, and for phytoplankton community analyses. Sample bottles were pre-washed with non-phosphate detergent, rinsed with tap water, soaked 30 minutes in a 50 mg/L sodium hypochlorite solution, neutralized with 0.05% sterile sodium thiosulfate, dipped in 5% reagent grade hydrochloric acid, and rinsed with sterile deionized water. Before a sample was collected, sample bottles were rinsed three times with native water. If a pump was used to collect a sample, the pump tubing was flushed three times before the sample was collected.
At recreational sites, grab samples were collected approximately 0.3 m below the water's surface. At MBSP Beach, three 1-L subsamples were collected from cove 3 (a popular swimming area) at 0.7-1.0 m water depths and composited into a 5-L glass bottle. Water temperature, pH, dissolved oxygen, specific conductance, chlorophyll, and phycocyanin (a pigment produced by cyanobacteria) were measured at each subsample location using a hand-held multiparameter instrument calibrated and operated per standard USGS methods (Wilde n.d.) and manufacturer's instructions (YSI 6-series, YSI Incorporated, Yellow Springs, Ohio). The manufacturer refers to phycocyanin as blue-green algae (BGA) pigment. At Put-in-Bay, samples were collected from the side of a boat by hand using a sterile bottle for cyanobacterial gene analyses and an integrated tube sampler at 0-2 m depths for nutrients, microcystins, algal pigment fluorescence (lab measurement), and phytoplankton community analyses. At WTP sites, raw water samples were collected from a tap or wet well. At three of the Lake Erie WTP sites (Oregon, Carroll, and Marblehead WTPs), it was not possible to easily collect a sample before a low dose of potassium permanganate (approximately 1 mg/L) was added in the feed intake for mussel control. Permanganate was not used at the inland lake sites. During 2016 at the Oregon WTP, samples were collected from a tap at their low service pump station after permanganate was added; in 2017, samples were collected at the same location after the addition of permanganate was periodically halted for regulatory sampling. At Alliance, Carroll, and Marblehead WTPs, samples were collected from a plant tap. At Ottawa WTP, raw water was collected from the wet well using a bailer with a sterile glass bottle or a submersible pump. At Cadiz WTP, samples were collected from two spigots (one for the upper and one for lower intake) at the pump station at the lake before carbon was added to the wet well. In 2016, the upper and lower intake bottles were analyzed separately; in 2017, the two bottles were composited and analyzed.
Strict quality-assurance and quality-control practices were implemented to ensure collection of accurate, consistent datasets at all sites. Written protocols were distributed to all participating agencies (Table 1). The USGS did several on-site checks of procedures performed by field and laboratory personnel, and any needed corrective actions were taken. In addition to the regular sampling, field quality-control samples were collected and analyzed for all constituents except for phytoplankton community analysis. These included 1 or 2 field blanks and 2 concurrent or sequential replicates per site per year. For quantitative polymerase chain reaction (qPCR) results, if detection occurred in one replicate and not the other, the result from the positive replicate was used; otherwise, an average of two replicates was used for data analysis. Results from quality-control samples were carefully monitored; data were qualified, retests were done, and (or) corrective measures were taken when needed.
Measurement of microcystins and nutrient concentrations, phytoplankton community composition, and cyanobacterial genes

Depending on the capability of personnel and facilities, samples were processed at a local laboratory or, for some analyses with longer holding times, were shipped to and processed by either the USGS Ohio Water Microbiology Laboratory (USGS OWML) in Columbus, Ohio, or the Ohio Environmental Protection Agency Division of Environmental Services (OEPA DES) in Reynoldsburg, Ohio. Details of sample processing and analytical methods are described elsewhere (Francy et al. 2015).
Samples for total and dissolved nutrients were stored in a dark cooler and processed and preserved within 3 h of sample collection. Processing and analysis for nutrients at Put-in-Bay and Ottawa WTP were done by The Ohio State University Franz Theodore Stone Laboratory (OSU) in Put-in-Bay, Ohio. At all other sites, processing was done by local agencies and samples were shipped to the USGS National Water Quality Laboratory (NWQL) in Denver, Colorado, for analysis. For nutrients analyzed by OSU, water (50 mL) was filtered through a 0.45-μm polycarbonate filter for dissolved nutrients. Approximately 500 mL of whole water for total nutrients and 50 mL of filtrate for dissolved nutrients were frozen until analyses. At OSU, samples were analyzed on a SEAL Analytical QuAAtro continuous segmented flow analyzer using standard methods for nitrate, nitrite, ammonium, and orthophosphate concentrations on filtered samples (EPA 353.1, 353.2, 350.1, and 365.1, respectively) and for total phosphorus and total Kjeldahl nitrogen (TKN) on whole water samples (EPA 365.4 and 351.2, respectively). Total nitrogen concentration was calculated as the sum of TKN, nitrate, and nitrite. For nutrients analyzed by the USGS NWQL, processing procedures were done per standard USGS methods (Wilde et al. 2002). A four-layer 0.45-μm, 25-mm diameter syringe filter (Tisch Scientific, GD17034) was used to collect 10-20 mL for subsequent analysis of dissolved nutrients. Whole water samples for total nitrogen and total phosphorus analyzed by the USGS were preserved with 1.0 mL of 1:7 sulfuric acid and chilled on ice. Samples were analyzed at the USGS NWQL for concentrations of dissolved nitrite, dissolved nitrate plus nitrite, dissolved ammonia, dissolved orthophosphate, total nitrogen, and total phosphorus per standard USGS methods (Fishman 1993; Patton and Kryskalla 2003; Patton and Kryskalla 2011).
At Put-in-Bay, Ottawa WTP, and Cadiz WTP, a 250-mL aliquot was removed for analysis of phytoplankton abundance and community composition and preserved with 3% Lugol's iodine. Samples were analyzed for phytoplankton abundance and community composition by BSA Environmental Services, Inc., in Beachwood, Ohio. Phytoplankton slides were prepared using standard membrane-filtration techniques (McNabb 1966; American Public Health Association 1998). A minimum of 400 natural units (colonies, filaments, and unicells) were counted from each sample as described in Lund et al. (1958); counting 400 natural units provides accuracy within 90% confidence limits. In addition, an entire strip of the filter was counted at high magnification (usually × 630), along with one-half of the filter at a lower magnification (usually × 400), to ensure complete species reporting. Phytoplankton identifications were confirmed by at least two phycologists, and taxonomic nomenclature followed AlgaeBase, a global species database (Guiry and Guiry 2020; Beaver et al. 2013). Biovolume was calculated by using mean measured cell dimensions (Hillebrand et al. 1999).
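Biovolume estimation assigns each taxon a simple geometric model and applies the mean measured cell dimensions, following the approach of Hillebrand et al. (1999). Below is a minimal Python sketch for two common shape assignments; the example dimensions and shape choices are illustrative, not taxon assignments from this study.

import math

def sphere_biovolume(diameter_um):
    """Biovolume (um^3) of a spherical cell from its mean diameter."""
    return (math.pi / 6.0) * diameter_um ** 3

def cylinder_biovolume(diameter_um, length_um):
    """Biovolume (um^3) of a cylindrical (filamentous) cell segment."""
    return (math.pi / 4.0) * diameter_um ** 2 * length_um

# e.g., a 5-um spherical cell and a 4 x 60 um filament segment
print(sphere_biovolume(5.0), cylinder_biovolume(4.0, 60.0))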
At Put-in-Bay and Ottawa WTP, samples were analyzed for algal pigment fluorescence using the FluoroProbe benchtop reader (bbe-Moldaenke, Kiel, Germany). The FluoroProbe uses the selective excitation of pigments to partition the signal among four functional phytoplankton groups (green algae, cyanobacteria, diatoms, and cryptophytes; Chaffin et al. 2018b).
Processing and preservation for microcystin analyses were completed within 24 h of sample collection. Two 125-mL high-density polyethylene (HDPE) bottles were triple rinsed with native water, filled with sample, and stored frozen until analysis. Samples were analyzed for total (extracellular and intracellular) microcystins by means of enzyme-linked immunosorbent assay (ELISA) (Microcystins-ADDA ELISA, Abraxis LLC, Warminster, Pennsylvania) by several laboratories per Ohio Environmental Protection Agency (2015). These included the USGS OWML (MBSP Beach and Cadiz WTP samples), Oregon WTP laboratory (Oregon WTP, Carroll WTP, Ottawa WTP, and Marblehead WTP samples), OSU laboratory (Ottawa WTP and Put-in-Bay samples), Alliance WTP laboratory (Alliance WTP samples), and MASI Laboratories in Dublin, Ohio (Cadiz WTP samples). Laboratories are certified by the Ohio Environmental Protection Agency for analysis of total microcystins by ELISA.
All samples for cyanobacterial genes were analyzed at the USGS OWML. At the USGS OWML, aliquots to be analyzed for cyanobacterial genes by qPCR were filtered onto three or four replicate, 0.4-μm pore size Nuclepore polycarbonate filters (Whatman/GE Healthcare, Piscataway, New Jersey) and frozen within 30 h of sample collection. Molecular assays for cyanobacteria associated with microcystin production were done to enumerate (1) general cyanobacteria (16S rRNA); (2) general Microcystis, Dolichospermum, and Planktothrix (16S rRNA); and (3) microcystin toxin genes (mcyE) for Microcystis, Dolichospermum, and Planktothrix (Doblin et al. 2007; Ostermaier and Kurmayer 2009; Rantala et al. 2006; Rinta-Kanto et al. 2005; Sipari et al. 2010; Vaitomaa et al. 2003). DNA extraction/purification, standard curve, and limit of detection/quantification calculation procedures are presented elsewhere (Francy et al. 2015); sample inhibition was determined according to procedures in Stelzer et al. (2013). Standard curve and limits of detection and quantification data are listed for the current study (Table S1 in supplemental materials).
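Although the standard-curve and quantification procedures are documented in the cited reports, the basic arithmetic of qPCR enumeration can be sketched as follows; the slope, intercept, and volumes below are hypothetical placeholders, not the values used by the USGS OWML.

```python
import math

# Hedged sketch of standard-curve quantification:
# Cq = slope * log10(copies per reaction) + intercept.
slope, intercept = -3.4, 38.0   # hypothetical standard-curve fit
cq_sample = 27.2                # hypothetical sample quantification cycle

copies_per_reaction = 10 ** ((cq_sample - intercept) / slope)

# Scale to copies/100 mL from extract and filtration volumes (hypothetical).
extract_ul, template_ul, filtered_ml = 100.0, 5.0, 100.0
copies_per_100ml = copies_per_reaction * (extract_ul / template_ul) * (100.0 / filtered_ml)
print(f"{math.log10(copies_per_100ml):.2f} log copies/100 mL")
```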
In addition to the analyses done by USGS for cyanobacterial genes, samples for cyanobacterial genes were analyzed at OEPA DES as part of regulatory requirements. At the OEPA DES, draft method 705.0 (Ohio Environmental Protection Agency 2016) was followed for filtration, extraction, and qPCR analyses. Differences between USGS OWML and OEPA DES methods include the following: 0.4-μm pore size filter versus 0.8-μm, kit-based DNA extraction/purification versus crude extraction, and molecular assays listed above run in singleplex versus the CyanoDTec assay kit (Phytoxigene™, Akron, Ohio) run in multiplex. CyanoDTec assay results used in this study included a general cyanobacteria (16S rRNA) and microcystin/ nodularin toxin gene ("General microcystin mcyE"). Standard curve and limits of detection and quantification data are listed in supplemental materials (Table S1). Although results from USGS OWML and OEPA DES were not identical due to the different methods used, concentrations did trend together in a nonlinear relation and were deemed to be comparable, with the USGS OWML reporting higher concentrations.
Environmental factors
Environmental and water-quality data were compiled for the airport weather station, stream or lake-level gage, and (or) continuous water-quality monitor nearest to the site of interest. These data came from locations that were within 40 km of the study site, and most were within 16 km (Fig. 1). Data definitions and sources are summarized for the current study (Table S2 in supplemental materials). Environmental data were compiled from the National Oceanic and Atmospheric Administration (NOAA), USGS, and (or) The Ohio State University and included rainfall and wind direction and speed (National Oceanic and Atmospheric Administration-National Centers for Environmental Information 2019; USGS 2019a), water levels (National Oceanic and Atmospheric Administration-Tides and Currents 2019; USGS 2019a), daily mean streamflow or gage height (USGS 2019a), and solar radiation (The Ohio State University 2019). Continuous water-quality monitor data collected from multiparameter instruments at Lake Erie sites were obtained from the Great Lakes Observing System (GLOS) HABS Data Portal (GLOS 2019) or for inland lake sites through a private system (WQData Live, NexSens Technology Inc., Fairborn, OH). Site-specific remote sensing satellite data were provided by the National Aeronautics and Space Administration (NASA) from Landsat 8 and reported as mg/m3 chlorophyll-a (Sandeep Kumar Chittimalli, NASA, written commun., 2018). Chlorophyll-a values were reported for the previous day's measurement or the most recent antecedent data from the satellite.
Downloaded data were checked and transformed by the USGS for use in data analysis and model development. Other agencies coordinated the maintenance of continuous water-quality monitors, which were calibrated twice each year and cleaned periodically to remove fouling at most sites. For quality assurance, time-series continuous monitor data were plotted by the USGS. Data points in these plots were examined in detail and removed if they represented an improbable variation in value when compared to neighboring points or if excessive monitor drift was evident.
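The screening of improbable points can be illustrated with a simple neighbor-comparison rule; the rejection factor below is an illustrative choice, not the criterion used by the USGS reviewers.

```python
import pandas as pd

# Hedged sketch of a spike screen: flag a point whose jumps from BOTH the
# previous and the next value greatly exceed the series' typical step.
s = pd.Series([3.1, 3.2, 9.8, 3.3, 3.4, 3.2])   # hypothetical readings

step = s.diff()                       # step[i] = s[i] - s[i-1]
typical = step.abs().median()
spike = (step.abs() > 6 * typical) & (step.shift(-1).abs() > 6 * typical)

cleaned = s.mask(spike)               # flagged points become missing
print(cleaned)
```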
Data manipulations for explanatory variables are described elsewhere (Francy et al. 2015) and listed for the current study (Table S2 in supplemental materials). The 24-h averages of continuous monitor measurements up to the approximate time the microcystin sample was collected (i.e., 10 a.m.-10 a.m.) were computed; these 24-h averages were used to calculate averages for 3, 5, 7, and 14 days antecedent to the time of sampling. Rainfall was summed for the 24-h period up to 8 a.m. on the day of sampling to facilitate data compilation from an existing system (USGS 2019b); various multiday antecedent totals were subsequently calculated. Change in water level was calculated as the difference between the 10 a.m. water-level value on the date of sampling and the value on the previous day, the values 7 and 14 days prior, and the spring average water level. Daily mean streamflow, gage height, and total solar radiation were calculated for the day before sampling (midnight to midnight).
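These manipulations translate directly into rolling computations on time-indexed data; a minimal pandas sketch with hypothetical phycocyanin readings follows.

```python
import numpy as np
import pandas as pd

# Hypothetical 15-min phycocyanin readings spanning 16 days.
idx = pd.date_range("2017-07-01 10:15", periods=24 * 4 * 16, freq="15min")
phyco = pd.Series(np.random.default_rng(0).gamma(2.0, 1.5, size=idx.size), index=idx)

# 24-h averages ending at about 10 a.m., the approximate sampling time.
daily = phyco.resample("24h", origin=pd.Timestamp("2017-07-01 10:00")).mean()

# Multiday antecedent averages of the 24-h values, as described above.
antecedent = pd.DataFrame({f"phyco_{w}d": daily.rolling(w).mean() for w in (3, 5, 7, 14)})
print(antecedent.tail(3))
```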
Data management, statistical analysis, and modeling
Daily data for wave heights and field water-quality parameters measured on-site and for turbidity and nutrient, microcystins, and cyanobacterial gene concentrations in discrete samples are available through the USGS National Water Information System database (USGS 2019a) using USGS station identification numbers (Table 1). Data on phytoplankton community composition and datasets used to develop site-specific models are available through data releases (Francy et al. 2020); phytoplankton community analysis data were not used in models. Data analysis to identify variables significantly correlated to microcystin concentrations and model development were done based on the procedures described in Francy et al. (2016). The factors were segregated based on their potential use in real-time and (or) comprehensive models and whether continuous monitor data were used in model development. Nonparametric correlation coefficients (Spearman's rho) were calculated to identify associations between microcystin concentrations and other factors. Spearman's rho measures the strength of the monotonic association between two variables (whether linear or nonlinear) and is resistant to effects of outliers (Helsel and Hirsch 2002). Results from correlation analyses were used to help identify which variables needed to be included for model building, even when multiple data points for the explanatory variable were missing. Multiple minimum reporting limits for cyanobacterial gene concentration data were accommodated in correlation analyses by assigning them a value less than the lowest detection for each assay. Censored nutrient and microcystin data (values below the minimum reporting limit) were assigned one-half the censored value. Scatterplots of key factors were reviewed to ensure that relations between other factors and microcystin concentrations were genuine and not influenced by one or two outliers.
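A minimal sketch of this screening step is given below; the censored-value substitution follows the one-half reporting-limit rule described above, and the data are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

reporting_limit = 0.3   # ug/L, the microcystin minimum reporting limit
# NaN marks censored (below-limit) results in this hypothetical record.
microcystin = np.array([np.nan, 0.5, 1.2, 4.8, 0.9, 2.3, np.nan, 7.1])
microcystin = np.where(np.isnan(microcystin), reporting_limit / 2.0, microcystin)

phyco_24h = np.array([0.8, 1.1, 2.0, 5.5, 1.4, 3.1, 0.6, 6.9])  # hypothetical

rho, p = spearmanr(microcystin, phyco_24h)
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")
```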
Additional data analysis and linear regression model development were done with Virtual Beach version 3.07 (U.S. Environmental Protection Agency (USEPA), 2018). Explanatory variables were mathematically transformed as necessary to linearize the relation with the dependent variable. Transformations included log10, inverse, square, square root, and quad root; which one was used (if any) was based on the Pearson's correlation coefficient (r) between the explanatory variable and microcystin and on whether an x/y plot indicated improved linearity over the untransformed variable. To identify the best candidate models, models were ranked by a user-selected evaluation criterion such as Predicted Error Sum of Squares (PRESS) or Corrected Akaike Information Criterion (AICC) (Cavanaugh and Neath 2019). Explanatory variables were limited to a maximum variance inflation factor (VIF) of five (to avoid multi-collinearity among explanatory variables), and models were limited to no more than five explanatory variables (due to small sample sizes and to avoid overly complex models). The assumptions associated with ordinary least squares regression required to predict concentrations (Helsel and Hirsch 2002; Chapter 9, Table 9.1) were met. For this study, we were not seeking to predict a variance for the prediction or test hypotheses. Therefore, the two assumptions that had to be met were that (1) the model form is correct and (2) the model is fit with observed and explanatory data that are representative of the range of conditions over which the model will be applied. Model selection and diagnostics were done to meet these assumptions and included tests for statistical significance of explanatory variables and influence and leverage of observations. Tests for influential outliers included Cook's D, as described in USEPA (2018). If a data point was identified as above the critical value for Cook's D, it was carefully examined and only removed if determined to be erroneous. If a scatterplot between an explanatory variable and the dependent variable indicated a relation was not evident (linear or nonlinear) and (or) was influenced by one or two outliers, the variable was removed and the model selection process was repeated. Finally, a cross-validation step was included in the model selection process to examine the predictive power of all candidate models. This step was done regardless of the model selection criterion used (PRESS is a cross-validation statistic).
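Two of the screening diagnostics named above, VIF and PRESS, reduce to short computations; the sketch below shows textbook formulations on synthetic data and is not the Virtual Beach implementation.

```python
import numpy as np

def vif(X):
    """Variance inflation factor 1/(1 - R^2_j) for each column of X."""
    out = []
    for j in range(X.shape[1]):
        y, others = X[:, j], np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

def press(X, y):
    """PRESS via the leave-one-out identity e_i / (1 - h_ii)."""
    A = np.column_stack([np.ones(len(y)), X])
    H = A @ np.linalg.pinv(A.T @ A) @ A.T
    resid = y - H @ y
    return float(np.sum((resid / (1.0 - np.diag(H))) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=40)
print("VIF:", vif(X).round(2), " PRESS:", round(press(X, y), 2))
```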
Because the models can be used for management decisions and not to explicitly predict a microcystin concentration, the output from each selected model was the probability of exceeding a recommended advisory-level (recreational sites) or action-level (WTPs) microcystin concentration threshold. Model outputs were examined in terms of sensitivity, specificity, and accuracy in estimating concentrations above or below thresholds. The sensitivity is the percentage of exceedances of the advisory or action level that are correctly predicted by the model, the specificity is the percentage of nonexceedances correctly predicted, and the accuracy is the overall percentage of correct responses. A threshold probability was set for each model by examining model-output sensitivities and specificities at different probability levels. The selection of the threshold probability is a compromise between false negative and false positive responses while maintaining a high number of overall correct responses (Francy and Darner 2006). After comparing evaluation criteria for ranked candidate models, the best model was selected based on (1) significance of explanatory variables (p < 0.05), (2) sensitivity and specificity to estimate above and below a threshold microcystin concentration, and (3) ability to reasonably explain how each explanatory variable could potentially affect the observed variation in microcystin concentrations.
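One common way to obtain such an exceedance probability from a linear regression, assuming approximately normal residuals, is P = 1 − Φ((T − ŷ)/s), with T the (possibly log-transformed) threshold, ŷ the model estimate, and s the residual standard error. The sketch below uses this formulation with hypothetical numbers and then scores the resulting classifications; it is not the exact Virtual Beach computation.

```python
import numpy as np
from scipy.stats import norm

def exceedance_probability(yhat, threshold, resid_se):
    """P(concentration > threshold) under a normal-residual assumption."""
    return 1.0 - norm.cdf((threshold - yhat) / resid_se)

yhat = np.array([-0.2, 0.1, 0.4, 0.9])               # predicted log10 microcystin
probs = exceedance_probability(yhat, np.log10(1.0), resid_se=0.35)

observed = np.array([False, False, True, True])      # hypothetical outcomes
predicted = probs >= 0.5                             # hypothetical 50% cutoff

tp, tn = np.sum(predicted & observed), np.sum(~predicted & ~observed)
sensitivity = tp / observed.sum()                    # exceedances caught
specificity = tn / (~observed).sum()                 # nonexceedances caught
accuracy = (tp + tn) / observed.size
print(sensitivity, specificity, accuracy)
```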
Microcystin concentrations
Microcystin concentrations, the number of samples, the percent of detections, and the percent of detections above an advisory or action level for each study site are shown in Fig. 2. The action level at recreational sites was based on the USEPA recommended recreational water-quality advisory of 4 μg/L (USEPA 2016). A 1 μg/L threshold was used as a practical action level for WTP managers to adjust treatment. At the time of the study (2016-2017), recommended 10-day drinking water health advisories for microcystin were 0.7 μg/L for pre-school-age children and 1.6 μg/L for school-age children through adults (USEPA 2015).
During the study period, the percent of microcystin detections at each site ranged from 26 to 83%, and the highest median microcystin concentration (1.6 μg/L) was found at the Cadiz WTP. Microcystin concentrations found at MBSP Beach ranged from < 0.30 to 240 μg/L, necessitating a split of the y axis to adequately view the range of microcystin concentrations. The percent of samples exceeding the recreational or WTP action level ranged from 0% at Put-in-Bay recreational site to 62% at the Cadiz WTP.
Phytoplankton community analysis
The differences in cyanobacterial community dynamics are shown for three of the study sites-Cadiz WTP, Ottawa WTP, and Put-in-Bay recreational site (Fig. 3). The most common cyanobacterial genera that produce microcystin in freshwaters are shown (Aphanizomenon, Dolichospermum, Microcystis, and Planktothrix), along with other potential microcystin producers (Aphanocapsa and Pseudanabaena) (USEPA 2014; Bernard et al. 2017). Maximum cyanobacterial biovolume at the Cadiz WTP was 21 times higher than at the Ottawa WTP and four times higher than at Put-in-Bay. At the Cadiz WTP (Fig. 3a), non-microcystin producers were often dominant, even when microcystin was detected.

Environmental and water-quality factors

Summary statistics for selected physical, chemical, and cyanobacterial gene results, later used in model development, are shown for each site (Table S3 in supplemental materials). A suite of water-quality measurements made with hand-held instruments were routinely collected at MBSP Beach and Alliance WTP; average values for all measurements were higher at MBSP Beach than at the Alliance WTP. In discrete samples sent to a laboratory, dissolved and total nutrient concentrations were measured at five sites and only total nutrient concentrations at three sites. The highest average concentrations for key nutrient constituents were found at Alliance (ammonia, average = 0.13 mg/L), Ottawa (nitrate plus nitrite, average = 0.60 mg/L), and MBSP Beach (orthophosphate, total nitrogen, and total phosphorus; average = 0.032, 2.65, and 0.12 mg/L, respectively). Average nitrogen to phosphorus (N to P) mass ratios ranged from 11.7 at Cadiz WTP to 44.4 at Put-in-Bay.
A variety of assays were used to quantify cyanobacterial genes by qPCR. General cyanobacteria 16S rRNA was detected in all samples except for two samples at MBSP Beach (98% detected). The highest average concentrations for general Microcystis 16S rRNA and Microcystis-specific microcystin mcyE were found at MBSP Beach and Carroll WTP (> 5.00 log copies/100 mL) and for general Planktothrix 16S rRNA and Planktothrix-specific microcystin mcyE at Alliance WTP and Cadiz WTP (> 6.00 log copies/100 mL). For general Dolichospermum 16S rRNA, the lowest average concentration was found at the Ottawa WTP (4.90 log copies/100 mL). The Dolichospermum-specific microcystin mcyE gene was not found at any site, which agrees with previous research that Lake Erie Dolichospermum is not a microcystin producer (Ouellette et al. 2006). The general microcystin mcyE gene was found at all sites where measured (range of detection 39-81%).
Average, minimum, and maximum values for continuous water-quality measurements (24-h antecedent averages) and environmental measurements retrieved from existing sources are shown for selected sites (Table S4 in supplemental materials). This is not an exhaustive list, but rather examples of measurements at Lake Erie and inland lake sites. Parameters measured for at least 2 years were later used in data exploration and model development. Average 24-h phycocyanin measurements were highest at the two inland lake sites (3.6 and 3.7 RFU), and the highest maximum value was found at the Oregon WTP (13.6 RFU). Average pH measurements were the same among the three sites (Oregon, Ottawa, and Alliance WTPs) with pH data presented (average pH = 8.3), with the highest maximum value found at the Oregon WTP (pH = 10). The averages and maximum values for chlorophyll (average 6.0 and max 14.7 RFU) and specific conductance (average 657 and max 815 μS/cm) were highest at the inland lake sites-Alliance WTP and Cadiz WTP, respectively. Environmental data included daily mean streamflow, daily average gage height, lake-level change over 24 h, and rain in the past 24 h.
Correlations between microcystin concentrations and factors for models
Spearman's correlation coefficient (rho) was computed to determine the correlation between microcystin concentrations and factors identified as potential explanatory variables for models using data from 2016-2017; 2013-2014 data were also included for MBSP Beach. Factors for real-time models were grouped as follows (Table 2): (a) water-quality hand-held measurements or observations at the site, (b) continuous monitor water-quality measurements, and (c) environmental and seasonal data. Factors for comprehensive models were grouped based on the type of analysis (Table 3); correlation results for both groups are presented in Tables 2 and 3.
Hand-held measurements or observations at the site
Among measurements made with hand-held multiparameter instruments or observations at the site, significant positive correlations were seen for four parameters measured at the Lake Erie site (MBSP Beach), but significant negative correlations were seen for pH and water temperature at the inland lake site (Alliance WTP) (Table 2a). Water temperature was significantly negatively correlated with microcystin at the Alliance WTP. On closer examination, the highest microcystin concentrations (> 2 μg/L, Fig. 2) at the Alliance WTP were found in late October and November 2016 and 2017 when temperatures were between 7.8 and 14.3°C, whereas 17 out of 20 samples collected in summer were < 0.30 μg/L when temperatures were > 24.3°C.
Continuous water-quality monitor measurements
Spearman's correlation coefficients between microcystin concentration and continuous monitor data are presented as ranges of coefficients for 1-, 3-, 5-, 7-, and 14-day average measurements (Table 2b). Correlations between microcystin concentrations and phycocyanin or pH were significant for all time periods at seven out of eight sites; significant correlations were positive except for pH at the Alliance WTP. Negative significant correlations were found for specific conductance at six out of seven sites. Chlorophyll was significantly positively correlated with microcystin for all time periods at five out of eight sites, although significant coefficients were generally lower in magnitude than those found for microcystins with phycocyanin, pH, or specific conductance. The other continuous factors were inconsistently correlated to microcystin concentrations for multiple time periods, significance, and study sites. The plotted relations between 24-h average phycocyanin and microcystin concentration are shown in Fig. 4. Overall, the relations were linear with higher microcystin concentrations seen in 2017 than 2016. Measurements obtained from multiparameter water-quality instruments and microcystin concentrations were lowest at the two most eastern Lake Erie sites, Marblehead WTP and Put-in-Bay, with the latter showing smaller phycocyanin to microcystin increases (Figs. 4a and b). At Marblehead WTP and Put-in-Bay recreational site, Spearman's rho values were statistically significant except for Put-in-Bay during 2016. For Carroll, Ottawa, and Cadiz WTPs, phycocyanin measurements and microcystin concentrations were in a mid-range group (Figs. 4c-e). At these sites, all Spearman's rho values were statistically significant except for Ottawa WTP in 2016. For MBSP Beach (Fig. 4f), the correlations between 24-h average phycocyanin and microcystin concentrations were statistically significant in 2014, 2016, and 2017, with the strongest correlation found during 2014.
Environmental and seasonal factors
Spearman's correlation coefficients reported for environmental factors were ranges of coefficients determined for several time periods for gage height, streamflow, rainfall, and lake-level change; one or two coefficients are presented for wind speed and one coefficient for day of the year and satellite data (Table 2c). Streamflow was significantly correlated to microcystin concentrations at all sites where this variable was measured. Rainfall, lake-level change, and wind speed were inconsistently correlated to microcystin concentration for multiple time periods, significance, and study sites. At the two sites where satellite data were available, there were weak but significant positive correlations to microcystin concentrations. A seasonal variable, sine day of the year, was significantly negatively correlated to microcystin concentration at six out of eight sites.
The plotted relations between daily mean streamflow or gage height (previous day) and microcystin concentrations are shown in Fig. 5. At the five sites with streamflow from a nearby river (Fig. 5a-e), the highest streamflows were associated with microcystin concentrations below detection, and the highest microcystin concentrations were often associated with low streamflows. Streamflow was significantly correlated to microcystin concentration during both 2016 and 2017 at three out of five sites. The relation between gage height and microcystin concentration at the Alliance WTP was statistically significant during 2016, but not during 2017 (Fig. 5f), and was influenced by two outliers.
Comprehensive factors
Comprehensive factors for concentrations of nutrients and cyanobacterial genes were lagged up to 2 weeks, as these were laboratory measurements and were expected to be used in advance of a period of cyanoHAB toxin production (Table 3). At least two of the lagged dissolved nutrients (ammonia, nitrate plus nitrite, nitrite, or orthophosphate) were significantly correlated with microcystin concentration at four out of five sites measured; at Lake Erie sites (Ottawa WTP and MBSP Beach), all significant correlations were negative, whereas at inland lake sites (Alliance and Cadiz WTPs), all significant correlations were positive except for ammonia at Cadiz WTP (Table 3a). Lagged total nitrogen or nitrogen to phosphorus ratios were significant at only one site each. Spearman's correlations for unlagged nitrate plus nitrite and orthophosphate concentrations with microcystin concentrations were included because continuous real-time instruments are available for these constituents. Unlagged nitrate plus nitrite was significant at four out of five sites; unlagged orthophosphate was significant only at the Ottawa WTP. At the Ottawa WTP and Put-in-Bay, all algal pigment fluorescence measurements were significantly correlated with microcystin concentrations except for diatoms (DiaFluoro) (Table 3b). At least one lagged cyanobacterial gene was significantly correlated with microcystin at all sites except for Put-in-Bay (Table 3c). Strong correlations (rho > 0.70) were found for the Microcystis-specific mcyE gene at four sites and the general microcystin mcyE gene at two sites. The relations between lagged gene concentrations and microcystin concentrations are shown graphically for six sites (Fig. 6). At the Ottawa and Carroll WTPs (Figs. 6a and b), Microcystis-specific mcyE concentrations greater than approximately 4.0 and 5.0 log copies/100 mL, respectively, were associated with elevated microcystin concentrations (> 1.0 μg/L). At MBSP Beach (Fig. 6c), however, several samples with microcystin concentrations at or near the minimum reporting limit (0.3 μg/L) had elevated Microcystis-specific mcyE concentrations. At Oregon WTP (Fig. 6d), the correlation between general microcystin mcyE and microcystin concentration was significant during 2017, but not in 2016, owing to lower microcystin concentrations during 2016. At Cadiz and Alliance WTPs (Figs. 6e and f), the relations between Planktothrix-specific mcyE and microcystin concentration were similar, although microcystin and gene concentrations were lower at the Alliance WTP.
Models for estimating the probability of exceeding an action threshold for microcystin
Site-specific models were developed to demonstrate the feasibility of using models to estimate exceedance of microcystin concentration thresholds for management decisions. Models were developed for each site when a minimum of 2 years of data were available and were based on real-time variables only (Table 4) and on real-time and comprehensive variables (Table 5). All models included real-time environmental and seasonal variables as these data were available for nearly every day a microcystin sample was collected. Equations for the best models for each site are listed in supplemental materials S5.
Microcystin thresholds were determined based on potential action levels at each site. For MBSP Beach and Put-in-Bay recreational sites, threshold microcystin concentrations were set at 4 and 1 μg/L, respectively. The threshold concentration at MBSP Beach was based on the USEPA recommended primary contact recreational advisory of 4 μg/L (USEPA 2016); a lower threshold was used at Put-in-Bay because no samples had microcystin concentrations that exceeded 4 μg/L. At WTP sites, a 1 μg/L threshold was used, except at Marblehead WTP, where a 0.30 μg/L (minimum reporting limit) threshold was used. The 1 μg/L threshold was used as a practical action level for WTP managers to adjust treatment; the lower threshold was used at Marblehead WTP because only one sample exceeded 1 μg/L.
Real-time models
Real-time models are presented in two categories: (1) models with continuous water-quality data and (2) models without continuous data, having only on-site measurements (Table 4). Adjusted R2 values for real-time models with continuous monitor data (range 0.53-0.88) were higher than those with on-site measurements (range 0.49-0.62). Threshold exceedance probabilities established for each model ranged from 40 to 65%. Sensitivities > 90% were achieved in models for the four Lake Erie WTPs (Carroll, Marblehead, Oregon, and Ottawa), 80-90% for the inland lake WTPs (Alliance and Cadiz), and < 80% for the recreational site models. At the Alliance and Cadiz WTPs, models were developed with only on-site measurements because only 1 year of continuous monitor data were available. Specificity of the Cadiz WTP model (73%) was lower than models for the other sites because a large percentage of the samples (21 out of 32) were above the 1 μg/L action threshold at Cadiz WTP. At MBSP Beach, data were available to develop both types of models. The real-time on-site model for MBSP Beach had a slightly higher adjusted R2 than the real-time continuous monitor model (0.60 and 0.53, respectively); however, specificity, sensitivity, and accuracy were equivalent for both models. Environmental data were used in all real-time models except for the Oregon WTP and Put-in-Bay models, and continuous monitor data were used in all models except for the Alliance WTP model. Phycocyanin fluorescence was used in six models, pH and streamflow or gage height in four models, and rainfall, season, and specific conductance in three models. Other variables were used in one or two site-specific models. The MBSP Beach and Put-in-Bay continuous monitor models included weather variables available from the Toledo Crib buoy (wind speed and dew point) and Gibraltar Island buoy (wind speed), respectively (Table S2).
Comprehensive models
Comprehensive models (those with laboratory measurements) are presented in three categories (Table 5): (1) models with continuous monitor data and lagged comprehensive variables, (2) models with no continuous monitor data and lagged comprehensive variables, and (3) a model with continuous monitor data and same-day (unlagged) comprehensive variables. The adjusted R2 values for comprehensive models with continuous monitor data (range 0.65-0.94) were generally higher than those for models with no continuous monitor data (range 0.56-0.72). Threshold probabilities ranged from 20 to 60%. Sensitivities > 90% were achieved in five WTP models (Oregon, Ottawa (2), Cadiz, and Carroll), 80-90% for three models (Put-in-Bay, Alliance WTP, and Marblehead WTP), and < 80% for the MBSP Beach model.
Three comprehensive models were developed that had both continuous monitor and comprehensive variables. The comprehensive variables included general microcystin mcyE genes for Oregon WTP, nitrogen to phosphorus ratio for Ottawa WTP, and total fluorescence at Put-in-Bay (Table S3). Among these three comprehensive models, the Oregon WTP model had the highest R2 value (0.94) and the Ottawa WTP model had the highest sensitivity (100%). The models with no continuous monitor data were those without 2 years of data available (Alliance WTP and Cadiz WTP) or those
Discussion
Factors that could be used in models

In an earlier study, factors that could be used in models to estimate microcystin levels at recreational sites were identified and models were developed for one site, MBSP Beach, using a small dataset (n = 24). In the current study, WTP sites, larger datasets, and modeling at eight sites expanded on this earlier work. The sites included the recreational beach investigated during the earlier study, a boater swim area of a Lake Erie island, four WTP sites in the Western Lake Erie Basin, and two inland lake WTP sites. The percentages of microcystin detections ranged from 26 to 83% at the sampling sites during 2016-2017 (MBSP Beach included data from 2013-2014). Data were collected or compiled on factors to be used in real-time and comprehensive models. Factors for real-time models are those that are available in real-time for management decisions, including those from hand-held and continuous water-quality measurements and environmental and seasonal data. Factors for comprehensive models include those that require that a sample be collected and analyzed in a laboratory. As a first step in model development, correlation coefficients (Spearman's rho) were computed between microcystin concentrations and real-time and comprehensive factors. This exploratory data analysis provides insights into factors that may potentially be used in multiple linear regression models. The significance and direction (positive or negative) of significant correlations for some key variables were different between Lake Erie and inland lake sites. These included phycocyanin, pH, specific conductance, and nutrient concentrations. The best site-specific real-time and comprehensive models were then developed when at least 2 years of data were available. In addition to model statistics, it was important to be able to reasonably explain how each explanatory variable could potentially affect the observed variation in microcystin concentrations.
Among real-time factors, phycocyanin fluorescence, pH, specific conductance, streamflow or gage height, and a seasonal factor (sine of day of the year) were significantly correlated to microcystin concentrations at most sites (Table 2). These were the variables commonly used in real-time models, with phycocyanin used most often. Phycocyanin is a light-harvesting pigment protein produced by cyanobacteria, and the concentration of phycocyanin is often used as a proxy for cyanobacterial biomass (Humbert and Törökné 2016). However, because phycocyanin is a measurement of both non-toxin- and toxin-producing cyanobacteria and phycocyanin has high intracellular variability (Stumpf et al. 2016), caution should be taken in using phycocyanin alone to estimate microcystin concentrations. After phycocyanin, pH and streamflow or gage height were most often used in real-time models. High pH (pH > 9) is partially caused by cyanobacteria while it also enhances their dominance (Jacoby et al. 2000). High biomasses of cyanobacteria use bicarbonate (HCO3-) as a carbon source, which releases hydroxide (OH-) and increases pH. Specific conductance had a positive effect on one of the inland lake models. During the summertime, elevated specific conductance is associated with low lake levels and stagnation, which are conditions that favor cyanobacterial growth (Andres et al. 2019). The inclusion of specific conductance as a negative variable in Lake Erie models is harder to explain, but it could simply reflect the temporal patterns of cyanobacteria (highest late summer) and specific conductance (lowest late summer). Specific conductance is proportional to major ion concentration (Wetzel 2001), and the seasonal decline of specific conductance may be due to decreases in calcium and carbonate as the result of growth of invasive Dreissena mussels.
Streamflow was often significantly negatively correlated to microcystin concentrations. Lower streamflows occur during periods with high evapotranspiration and less turbulent waters, conditions conducive to cyanobacterial dominance. Bertani et al. (2017) found a negative relation between bloom size and streamflow and suggested this was consistent with cyanobacterial growth being favored by lower summer flushing rates and higher residence time. In the current study, a seasonal variable was used in three models. Graham et al. (2017) also included a seasonal variable (cosine day of the year) in models for microcystin occurrence in a Kansas reservoir. The seasonal variable reflects the consistent seasonal pattern in microcystin occurrence. The further importance of a seasonal variable was shown in the current study, where microcystin was negatively correlated to water temperature at an inland lake site because microcystin concentrations were higher in the fall when temperatures were low. Although wind speed and lake-level change were inconsistently correlated among sites and different timeframes for the same site, these variables were used in two models. The influence of wind direction and lake-level change on microcystin concentrations is complex. Microcystis growth is favored during periods with high water stability; however, short wind-induced mixing events may have a positive effect on cyanobacterial growth by enhancing resuspension of nutrients (Bertani et al. 2017).
Many of the real-time factors were averages over time periods antecedent to the time the microcystin sample was collected. Average conditions over larger time periods may be more important to the development of current microcystin concentrations than are conditions at the time of sampling. Bertani et al. (2017) used three time-lagged variables (2, 8, and 30 days for wind velocity and stress, irradiance, temperature, and streamflow) in models to estimate cyanobacterial bloom size in the Western Lake Erie Basin. They stated that different time scales help to minimize collinearity among variables. In another study in the Western Lake Erie Basin, Chaffin et al. (2018b) indicated that one should average continuous monitor data over 1 h or 24 h to get a better correlation with water-quality data than one measurement made at the time a sample is collected. They stated that one measurement of phycocyanin, for example, could be influenced by large spikes of Microcystis colonies drifting past the sonde.
Comprehensive factors investigated included nutrient and cyanobacterial gene concentrations and algal pigment laboratory measurements. Comprehensive data for nutrients and cyanobacterial genes were lagged up to 2 weeks to provide an advanced warning of elevated microcystin concentrations.
Some lagged dissolved nutrients showed significant but weak correlations to microcystin concentrations whereas lagged total nitrogen or phosphorus were seldom significantly correlated to microcystin concentrations (Table 3). In Lake Erie, the highest concentrations of nutrients occur during the spring or early summer when water temperatures are too low to support cyanobacterial blooms (Chaffin et al. 2011). Total N to P mass ratios were significantly correlated to microcystin concentrations at one Lake Erie site, but used in models at several sites. Other investigators (Jacoby et al. 2015) found that the best predictor of microcystin concentration categories in nine lakes in Washington, USA, was N to P ratio. The authors stated that low N to P ratios favor cyanobacterial dominance because many cyanobacteria genera have lower cellular N to P requirements than other phytoplankton. However, a 10-year data set of 246 lakes showed that middle N to P ratios had the highest probability of microcystins present and the highest concentrations of microcystins (Scott et al. 2013). Some sites had correlations (either positive or negative) between ambient concentrations of nitrate, nitrite, ammonia, and orthophosphate (either on the 2-week lag or unlagged) with microcystins, but there was no consistent pattern of these correlations. Because snapshot grab samples may not adequately account for complex biogeochemical processes, one option may be to collect continuous nutrient data. In the current study, unlagged nitrate plus nitrite and unlagged orthophosphate concentrations were significantly correlated to microcystin concentrations at four sites and one site, respectively. This is important for future management decisions, because it is possible to measure these constituents in situ continuously in real-time and obtain more than a snapshot by use of a probe or automatic analyzer. While nitrate concentration by itself was not a good predictor of microcystin concentration in this study, nitrate concentrations > 0.2 mg N/L indicate the potential for microcystins to be present because cyanobacteria cannot produce high levels of microcystins in nitrogen-limited waters (Chaffin et al. 2018a). The same-day orthophosphate concentration was used as a negative variable in one model.
Microcystis has been shown to grow well under low orthophosphate conditions, facilitated by a high-affinity orthophosphate uptake system (Gobler et al. 2016). In addition, low orthophosphate concentrations may result from increased metabolic activity during warmer months from cyanobacteria and other organisms. It is challenging, however, to interpret orthophosphate concentrations because cyanobacteria can store enough P intracellularly for several cellular division cycles (Baldia et al. 2007; Gobler et al. 2016). Because the roles of nutrients in development of cyanobacterial blooms and toxins are complex (Srivastava et al. 2016), the use of nutrients and the interaction between nutrients and other factors in models to estimate toxin concentrations need to be further investigated.
Significant Spearman's correlations between cyanobacterial genes (lagged) and microcystin concentrations were found at all sites and were used in several models. Microcystin concentrations have been found to correlate with copy numbers of toxin genes mcyA (Srivastava et al. 2012) and mcyE (Otten et al. 2012), and mcyE and mcyA (Conradie and Barnard 2012), genes required for microcystin toxin production. In laboratory studies, higher nitrogen concentrations resulted in increased microcystin concentrations and increased expression of the mcy genes (Chaffin et al. 2018a; Harke and Gobler 2015; Srivastava et al. 2016).

Towards an operational system using models for management decisions

Because these models can be used for management decisions, important measures of model performance are sensitivity, specificity, and accuracy in terms of estimates above and below an action threshold. Sensitivity is especially important in that managers want to err on the side of caution to predict exceedance of the action threshold. Providing an exact estimate of the microcystin concentration is not as important, as the models are only one tool for assessing current water-quality conditions. On that note, sensitivities > 90% at four Lake Erie WTPs indicated that models with continuous monitor data were especially promising (Table 4). Servicing sondes to obtain reliable data will continue to be important. The USGS recommends that continuous monitors be calibrated and cleaned for fouling as often as needed based on site conditions and data quality objectives (Wagner et al. 2006). In Western Lake Erie Basin waters, maintenance functions on sondes are typically performed once a month or more often if needed (Erin Bertke, U.S. Geological Survey, oral commun.).
At several sites, continuous monitor data had to be intentionally excluded for lagged comprehensive data to be included in the best models. This means that real-time factors may be sufficient at some sites and that comprehensive factors would only be needed in the event continuous monitor data are not available. At one site, a comprehensive model with orthophosphate measured at the time of sampling and continuous monitor data provided 100% sensitivity and specificity. With further research and data collection, employing an automatic analyzer for measuring orthophosphate and nitrate concentrations may prove to be useful. Collecting data on comprehensive factors, however, may be worth the extra time and effort at some sites to provide an advanced warning of a microcystin toxin event. This includes continuing to collect samples for analysis of total nutrients, calculating N to P ratios at some sites, and measuring algal pigment fluorescence in a local laboratory. Assays targeting the microcystin toxin gene representing the dominant strain or general microcystin production may be useful predictors of future toxic blooms.
It should be noted that the relations between explanatory variables and microcystin concentrations were sometimes different among the 2 years investigated during this study (3 or 4 years at MBSP Beach). Indeed, biovolumes and community profiles (including the genera of microcystin producers) were different in 2016 and 2017 at the three sites where phytoplankton community analysis samples were collected (Fig. 3). Microcystin concentrations were lower in 2016 than in 2017 at most sites. Nevertheless, consistent significant correlations among sites, years, and time periods for some variables and high model performance statistics (sensitivities, specificities, and accuracies) show promise in using models for management decisions.
If the models prove to be valuable management tools after validation, they can be used to trigger sample collection, adjust treatment options at WTPs, and provide real-time advisories at recreational sites. The models developed during this study are not intended for immediate use by beach and WTP managers but are rather exploratory work to demonstrate how models could be used for future management decisions. More data need to be collected to build site-specific datasets and validate models before they can be practically applied. Indeed, explanatory variables and microcystin concentrations vary site by site and indicate there are complexities that still need to be understood. The model results are not intended to be used as a surrogate for microcystin concentrations; direct measurement of the toxin is still required. Finally, a system for compiling data and running the models daily would be a valuable tool for water-resource managers. The Great Lakes NowCast (USGS 2019b) has been providing real-time estimates of Escherichia coli based on models since 2014. The NowCast system provides speed and efficiency for managers to manage data and develop and validate models. It can be easily modified to include cyanotoxins.
Conclusions
The ability to quickly estimate the probability of exceeding an action threshold for microcystin concentrations is valuable to recreational site and WTP managers. In this study, we showed that site-specific multiple linear regression models with accuracies > 80% could be developed for Great Lakes and inland lake sites that use a variety of water-quality and environmental factors related to microcystin concentrations. Real-time models commonly included variables such as phycocyanin, pH, specific conductance, and streamflow or gage height. Many of the real-time factors were averages over time periods antecedent to the time the microcystin sample was collected, including water-quality data compiled from continuous monitors. Sensitivities > 90% at four Lake Erie WTPs indicated that models with continuous monitor data were especially promising. Comprehensive models (those which have data from discrete samples analyzed in a laboratory) were useful at some sites with lagged variables for cyanobacterial toxin genes, dissolved nutrients, and (or) N to P ratios. More work needs to be done to validate models before they can be applied for management decisions.

Acknowledgements

provided data for two sites and significantly contributed to the writing of this article. Funding was provided by the Ohio Water Development Authority and the U.S. Geological Survey Cooperative Water Program. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Optimal Strategy on Radiation Estimation for Calculating Universal Thermal Climate Index in Tourism Cities of China
The Universal Thermal Climate Index (UTCI) is believed to be a very powerful tool for providing information on human thermal perception in the domain of public health, but the solar radiation needed as an input variable is difficult to access. Thus, this study aimed to explore the optimal strategy for estimating solar radiation to increase the accuracy of UTCI calculation, and to identify the spatial and temporal variation in UTCI over China. With daily meteorological data collected in 35 tourism cities in China from 1961 to 2020, two sunshine-based models (Angstrom and Ogelman) and two temperature-based models (Bristow and Hargreaves), together with neural network and support vector machine-learning methods, were tested against radiation measurements. The results indicated that the temperature-based models performed the worst, with the lowest NSE and highest RMSE. The machine-learning methods performed better in calibration, but their predictive ability decreased significantly in validation due to their large data requirements. In contrast, the sunshine-based Angstrom model performed best, with a high NSE (Nash–Sutcliffe Efficiency) of 0.84 and a low RMSE (Root Mean Square Error) of 35.4 J/(m2 s) in validation, which resulted in a small RMSE of about 1.2 °C in UTCI calculation. Thus, the Angstrom model was selected as the optimal strategy for radiation estimation in UTCI calculation over China. The spatial distribution of UTCI showed that days under no thermal stress were numerous in tourism cities in central China, within a range from 135 to 225 days, while the largest values occurred in Kunming and Lijiang in southwest China. In addition, days under no thermal stress during a year have increased in most tourism cities of China, which could be attributed to asymmetric changes: a significant decrease in frost days and a slight increase in hot days. However, days under no thermal stress in summertime have indeed decreased, accompanied by increasing days under strong stress, especially in developed regions such as the Yangtze River Delta and Zhujiang River Delta. Based on this study, we conclude that UTCI can successfully depict the overall spatial distribution and temporal change of the thermal environments in the tourism cities over China, and can be recommended as an efficient index in operational services for assessing and predicting thermal perception for public health. However, extreme cold and heat stress in the tourism cities of China were not revealed by UTCI due to the mismatch between daily UTCI values and stress categories defined at the hourly scale, which makes it an urgent task to redefine the categories at the daily scale in future research.
Introduction
There is a close relationship between human thermal perception and the atmospheric environment [1], but humans do not have receptors to sense the air temperature directly [2]. Rather, what humans feel in daily experience is actually a comprehensive summary of the thermal environmental conditions, such as skin temperature and their demand for heating or cooling [3], which are influenced directly or indirectly by atmospheric elements such as solar radiation, air temperature, humidity, and wind speed [3][4][5][6]. Thus, a thermal comfort index, rather than simple air temperature, has drawn increased attention in recent decades, in order to provide better services for assessing and predicting thermal perception in the domains of tourism, public health, and climate impact assessment [7][8][9][10].
Initially, many two-parameter indices were developed to represent the human thermal environment, including the effective temperature [11], the wind chill index [12], and the temperature-humidity index [13]. Though these empirical indices can be easily calculated with simple algorithms, they neglect significant variables and fluxes influencing thermal perception, which would inevitably lead to misrepresentation of the thermal environment [7]. In recent decades, many heat budget models have been developed in the field of thermal biometeorology, including representatives such as the MEMI model [14], the Klima-Michel Model [1], and the MENEX model [15]. All of these heat budget models can be used in the assessment of the thermal environment, but none is accepted as the fundamental standard due to persistent shortcomings in the underlying theory on heat exchange and thermo-physiology [7]. Finally, through the cooperation of scientists from many countries, the Universal Thermal Climate Index (UTCI) was established under the commission supported by the International Society of Biometeorology [3,7,16].
Based on the achievements in many previous heat budget models, especially the Fiala model, UTCI has fully considered the comprehensive influence of the atmospheric environment on human perception [3,17,18]. Up to now, UTCI has been validated extensively with measured data from climate chamber or wind tunnel experiments, together with data collected from outdoor surveys [3,19]. It has been identified that UTCI is sensitive to small variations in the atmospheric environment [20], and it is believed to be suitable for assessing thermal environments under all climate conditions [7,[21][22][23][24][25][26].
However, everything has two sides. While UTCI has a great advantage over simple empirical indices in describing thermal perception due to its full consideration of the atmospheric environment, this comprehensiveness inevitably hampers its application because more input meteorological variables are required [3,4,16,27], among which solar radiation is certainly more difficult to access than other meteorological items such as temperature and humidity. Under this condition, cloudiness was often used as a proxy of solar radiation as the input variable [9,23,24,28], but the reliability of the UTCI results may be questionable, as estimating radiation from cloudiness can result in high discrepancy [9].
In fact, solar radiation can be accurately estimated by many methods, including robust numerical models [29,30], remote sensing [31,32], and empirical models [33][34][35]. In recent decades, empirical models, including sunshine- and temperature-based models, have become popular in radiation estimation due to their simple operation and readily available variables [36,37]. The sunshine-based models are more preferable to temperature-based ones because of their high accuracy in radiation estimation [33,38], even in extreme climate regions such as the Tibetan Plateau [39]. Recently, many machine-learning methods have also been used to estimate solar radiation, and have shown great promise with their high accuracy [40,41]. Based on a sensitivity analysis, Weihs et al. [4] argued that the uncertainty in the calculated UTCI might be less than ±2 °C if radiation was reasonably estimated with synoptic observations, so we can envisage that accurate UTCI can be obtained based on an optimal strategy of estimating solar radiation with empirical models or machine-learning methods. However, to our knowledge, no research has yet been conducted to test this hypothesis.
China covers a huge area with complex topography and diverse climate [42], which highlights the importance of the assessment of its thermal environment for public services [9]. In recent years, some studies on UTCI have already been performed to provide information on the thermal environment over China [9,10,28,42,43]. However, like the research conducted in the other countries mentioned above [21][22][23][24][25], UTCI also cannot be easily calculated in China due to the paucity of solar radiation observations, as only about four percent (100 out of 2500) of weather stations routinely observe solar radiation over China, owing to the scarcity of radiation instruments and their high costs of maintenance [38,41]. Under this circumstance, UTCI in China was often calculated from readily available items such as cloudiness [9,28,43], or estimated by simple spatial interpolation of solar radiation [44], or even analyzed by avoiding the input requirement of solar radiation with the assumption that radiative temperature just equals air temperature [10].
In this study, the sunshine-based Angstrom and Ogelman models, and the temperature-based Bristow and Hargreaves models, together with neural network and support vector machine-learning methods, were used to estimate solar radiation for calculating UTCI in 35 tourism cities of China. The objectives of this study were: (1) to investigate the influence of different strategies of radiation estimation on the accuracy of UTCI calculation, based on which the optimal strategy could be identified; (2) to provide the spatial distribution of UTCI in tourism cities over China; and (3) to reveal the temporal trend in thermal stress in these cities in the recent 60 years, which would be beneficial for local governments to make relevant policies on assessing and predicting thermal perception for public health.
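As background for the comparison, the sunshine-based Angstrom(-Prescott) model expresses the daily clearness index as H/H0 = a + b(n/N), with H the daily global radiation, H0 the extraterrestrial radiation, n the sunshine hours, and N the day length; a and b are fitted by least squares. A minimal sketch with hypothetical values follows (in practice H0 and N come from standard solar-geometry formulas).

```python
import numpy as np

# Hypothetical calibration data.
n_over_N = np.array([0.10, 0.35, 0.55, 0.72, 0.90])   # relative sunshine n/N
H_obs = np.array([8.2, 14.1, 18.5, 22.0, 26.3])       # daily radiation, MJ/m2
H0 = np.array([30.0, 31.0, 32.0, 32.5, 33.0])         # extraterrestrial, MJ/m2

# Fit H/H0 = a + b * (n/N) by least squares.
A = np.column_stack([np.ones_like(n_over_N), n_over_N])
(a, b), *_ = np.linalg.lstsq(A, H_obs / H0, rcond=None)

H_est = H0 * (a + b * n_over_N)

# Nash-Sutcliffe efficiency, one of the skill scores used in this study.
nse = 1.0 - np.sum((H_obs - H_est) ** 2) / np.sum((H_obs - H_obs.mean()) ** 2)
print(f"a = {a:.3f}, b = {b:.3f}, NSE = {nse:.3f}")
```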
Database
Chinese main tourism destinations, including 35 cities distributed in different climate zones (Figure 1), were selected according to the classification based on comprehensive development indices of tourism industry, urbanization, and ecological environment [45]. Detailed information on these tourism cities can be seen in Table 1.

Figure 1. The cities in red are used for exploring optimal methods of solar radiation estimation, and the cities in black are the other tourism cities in this study. Roman numerals indicate the climate zones of China: I denotes the temperate and warm-temperate deserts of northwest China, II Inner Mongolia, III the temperate humid and sub-humid northeastern China, IV the temperate humid and sub-humid northern China, V the subtropical humid central and southern China, VI the Qinghai-Tibetan Plateau, and VII the tropical humid southern China. Full names of the cities are given in Table 1.

For this research, access to the fundamental database of NMIC (National Meteorological Information Center) was granted by CMA (China Meteorological Administration). The daily meteorological data, including sunshine hours, mean temperature, maximum temperature, minimum temperature, vapor pressure, relative humidity, wind speed, precipitation, and cloudiness, were collected from 1961 to 2020 for all of these locations. There were very few missing values in the dataset; when a missing value was identified, it was substituted by the average of the values observed on the preceding and following days [10]. Solar radiation is not a routinely observed meteorological item at most weather stations in China, and daily solar radiation data were only available at six stations, i.e., Harbin, Beijing, Wuhan, Chongqing, Hangzhou, and Guangzhou. Daily solar radiation data from 1991 to 2020 at these locations were considered reliable after the strict data quality control performed by NMIC, so the datasets from 1991 to 2010 were used for model calibration, while the datasets from 2011 to 2020 were used for model validation in this study.
Calculation of UTCI
UTCI is defined as the isothermal air temperature that would elicit the same physiological response under a set of reference conditions [7]. UTCI is calculated by solving Fiala's heat balance model [3], which simulates the human physiological response to meteorological conditions; the model is based on a thermoregulation model consisting of 12 human body elements and 187 tissue nodes [3]. A rapid calculation of UTCI can be achieved by a polynomial approximation procedure that computes the offset of UTCI from T_a (UTCI − T_a) as follows [16]:

UTCI = T_a + Offset(T_a, T_mrt, V, e),

where the offset is a polynomial function of the four inputs.
where T_a denotes the air temperature, V the wind speed, e the vapor pressure, and T_mrt the mean radiant temperature. V is obtained from its components as V = (V_u^2 + V_v^2)^0.5, where V_u and V_v are the wind speeds in the meridional and longitudinal directions, respectively; e is computed from the dew point temperature T_d; and T_mrt is computed from the ground temperature T_g and from R_p, the solar radiation received by a nude person, which can be estimated by the SolAlt model [9,44]. UTCI is divided into 10 categories ranging from extreme cold stress to extreme heat stress [16] (Table 2). This categorization has been extensively validated by both climate chamber and wind tunnel experiments [3,19], and is widely accepted by both the International Society on Biometeorology [3,7,16] and researchers in this domain [21][22][23][24][25][26]. Currently, the calculation of UTCI can be performed with the Bioklima 2.6 software package using four meteorological input variables: solar radiation, air temperature, vapor pressure or humidity, and wind speed [9,23]. A detailed description of the input variables can be found in the relevant procedures and processing steps [44].
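To illustrate how the four inputs are assembled for this calculation, the following minimal Python sketch derives V from its components and the vapor pressure from the dew point with a Magnus-type formula, then evaluates UTCI through the open-source pythermalcomfort package. The package, the Magnus constants, and the example values are illustrative assumptions; the paper itself used the Bioklima 2.6 tool, and the sketch is not the paper's implementation.

```python
import numpy as np
from pythermalcomfort.models import utci

# Hypothetical example values, not taken from the paper's dataset
v_u, v_v = 2.1, -1.3                  # meridional / longitudinal wind components (m/s)
t_a, t_d, t_mrt = 27.0, 18.0, 45.0    # air temperature, dew point, mean radiant temperature (degC)

v = float(np.hypot(v_u, v_v))         # V = sqrt(Vu^2 + Vv^2)
e_sat = lambda t: 6.105 * np.exp(17.27 * t / (237.7 + t))   # Magnus-type saturation vapor pressure (hPa)
e = e_sat(t_d)                        # actual vapor pressure, from the dew point
rh = 100.0 * e / e_sat(t_a)           # relative humidity (%), the input form pythermalcomfort expects

# The polynomial offset itself is not reproduced here; pythermalcomfort (an assumption,
# not the Bioklima tool used in the paper) implements the published approximation.
print(utci(tdb=t_a, tr=t_mrt, v=v, rh=rh))
```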
Estimation of Solar Radiation
Two sunshine-based and two temperature-based empirical models, together with two machine-learning methods, were used to estimate solar radiation in this study.
Angstrom Model
The Angstrom formula is the most widely used empirical sunshine-based model [46,47]; it calculates solar radiation from sunshine hours as follows.
R_a = R_e (a + b S/S_0),

where R_a is the observed daily solar radiation, R_e the extra-terrestrial solar radiation, S the observed sunshine hours, S_0 the potential sunshine hours, and a and b are empirical coefficients. R_e and S_0 can be calculated with the method recommended by FAO [48].
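A minimal least-squares calibration of the two Angstrom coefficients could look like the sketch below. The data here are synthetic placeholders standing in for one station's 1991-2010 daily records, and R_e and S_0 are assumed to be precomputed with the FAO method; only the calibration step itself is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic placeholder data standing in for 1991-2010 daily records at one station
S_0 = rng.uniform(9.0, 15.0, 3650)                      # potential sunshine hours (FAO-56)
S = S_0 * rng.uniform(0.0, 1.0, 3650)                   # observed sunshine hours
R_e = rng.uniform(150.0, 420.0, 3650)                   # extra-terrestrial radiation
R_obs = R_e * (0.16 + 0.53 * S / S_0) + rng.normal(0.0, 15.0, 3650)  # "observed" radiation

# Angstrom: R = R_e * (a + b * S/S_0)  ->  linear in x = S/S_0 after dividing by R_e
x = S / S_0
y = R_obs / R_e
b, a = np.polyfit(x, y, 1)                              # slope b, intercept a
print(f"a = {a:.3f}, b = {b:.3f}")

# Radiation estimate with the fitted coefficients (applied to a validation period in practice)
R_hat = R_e * (a + b * x)
```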
Ogelman Model
The Ogelman model is also an empirical sunshine-based model, which can be expressed as [49]

R_a = R_e [a + b (S/S_0) + c (S/S_0)^2],

where c is an additional empirical coefficient.
Bristow Model
The Bristow model is an empirical temperature-based model that uses temperature as the input variable to predict solar radiation [50]:

R_a = a R_e [1 − exp(−b ∆T^c)],
where ∆T is the difference between daily maximum and minimum temperature.
Hargreaves Model
The Hargreaves model is also an empirical temperature-based model, estimating solar radiation from the diurnal range of temperature [51]:

R_a = a R_e (∆T)^0.5.
BP Neural Network
A BP (back-propagation) neural network has one or more hidden layers and one output layer. The data are propagated from the input layer to the output layer through the hidden layers, while the error is transmitted in the opposite direction, so the connection weights of the network can be corrected to decrease the final error. Recently, the BP neural network has been argued to be efficient in solar radiation estimation [52].
Support Vector Machine
The support vector machine (SVM) is a supervised machine-learning method for data analysis and pattern recognition. The SVM aims to find a hyperplane that best separates the data points of one class from those of another, where "best" means the hyperplane with the largest margin between the two classes, the margin being the widest slab parallel to the hyperplane that contains no interior data points. Based on the principle of structural risk minimization, this method can handle nonlinear and high-dimensional problems well. Up to now, SVM has been widely employed for radiation estimation due to its high accuracy [41].
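The paper implements these two methods with the R packages nnet and e1071; an equivalent sketch in Python (an assumption, using scikit-learn rather than the R packages) with the retained input features and the hyperparameter values reported in the calibration section below might look like this. The feature matrix and targets here are random placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Placeholder feature matrix: [S_p, R_e, S_a, Cl, T_a, T_d, H_u, W_s, P_r, P_t]
X = rng.random((3650, 10))
y = rng.random(3650) * 400.0           # placeholder daily radiation target

X_cal, y_cal = X[:2600], y[:2600]      # stands in for the 1991-2010 calibration set
X_val, y_val = X[2600:], y[2600:]      # stands in for the 2011-2020 validation set

# BP-style network: one hidden layer with 10 units, L2 weight decay of 0.01
bp = MLPRegressor(hidden_layer_sizes=(10,), alpha=0.01, max_iter=2000, random_state=0)
bp.fit(X_cal, y_cal)

# SVM regression: radial basis kernel, cost C = 1, gamma = 0.125
svm = SVR(kernel="rbf", C=1.0, gamma=0.125)
svm.fit(X_cal, y_cal)

for name, model in [("BP", bp), ("SVM", svm)]:
    rmse = mean_squared_error(y_val, model.predict(X_val)) ** 0.5
    print(name, round(rmse, 2))
```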
Statistical Analysis
The Nash-Sutcliffe Efficiency (NSE), the Mean Absolute Percentage Error (MAPE), and the Root Mean Square Error (RMSE) were used as criteria to evaluate model performance [39,53]. NSE is analogous to the coefficient of determination, with the exception that NSE ranges from negative infinity to 1, and it is used to indicate model efficiency; a negative value of NSE indicates that the mean of the observations is a better predictor than the simulated values. MAPE identifies the relative bias in simulated values compared with observations, while RMSE indicates the squared difference between simulated and measured values.
NSE = 1 − Σ_i (O_i − S_i)^2 / Σ_i (O_i − Ō)^2,
MAPE = (100/n) Σ_i |S_i − O_i| / O_i,
RMSE = [ (1/n) Σ_i (S_i − O_i)^2 ]^0.5,

where O_i is the observed value, S_i the estimated value, Ō the average of the observed values, and n the number of observations. Higher NSE and lower MAPE and RMSE mean better model performance.
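The three criteria can be computed directly from paired observed and simulated series; a small helper, assuming MAPE is expressed in percent, is sketched below.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus residual variance over observed variance."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mape(obs, sim):
    """Mean absolute percentage error (%)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.mean(np.abs((sim - obs) / obs))

def rmse(obs, sim):
    """Root mean square error, in the units of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

print(nse([1, 2, 3], [1.1, 1.9, 3.2]), mape([1, 2, 3], [1.1, 1.9, 3.2]), rmse([1, 2, 3], [1.1, 1.9, 3.2]))
```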
Trends in time series of UTCI were estimated by the nonparametric Theil-Sen's estimator [54].
β = median[ (X_j − X_i) / (j − i) ], for all i < j,

where X_i and X_j are the UTCI values for years i and j, respectively. A positive β denotes an increasing trend, while a negative β indicates a decrease in the time series. The significance of the trend was tested by the Mann-Kendall (MK) method; detailed information on calculating the standardized test statistic (z) of the MK test can be found in the relevant descriptions [10,55,56].
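The Theil-Sen slope is simply the median of all pairwise slopes; a minimal sketch is given below (scipy.stats.theilslopes offers an equivalent, tested implementation), and the yearly UTCI series used here is a placeholder.

```python
import numpy as np

def sen_slope(years, values):
    """Theil-Sen estimator: median of slopes over all pairs i < j."""
    years, values = np.asarray(years, float), np.asarray(values, float)
    slopes = [(values[j] - values[i]) / (years[j] - years[i])
              for i in range(len(years)) for j in range(i + 1, len(years))]
    return float(np.median(slopes))

# Example: a placeholder yearly UTCI series; positive beta indicates a warming trend
print(sen_slope(range(1961, 1971), [3.1, 3.0, 3.4, 3.2, 3.5, 3.6, 3.4, 3.8, 3.7, 3.9]))
```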
The empirical coefficients a, b, and c in Equations (7)-(10) were fitted by numerical iteration methods [53]. The BP and SVM methods were implemented through the "nnet" and "e1071" packages in the R language, respectively. A regression task was involved in this study, and both the empirical models and the machine-learning methods were calibrated and validated before application. The machine-learning methods are believed to be more accurate in radiation estimation than the empirical models [41], but they require large datasets for model training. In contrast, the empirical models are easy to operate and require less data for calibration than the machine-learning methods.
Comparison of Model Performance in Radiation Estimation and UTCI Calculation
Coefficients of the empirical models were calibrated with the dataset from 1991 to 2010, and the calibration results are shown in Table 3. For the sunshine-based Angstrom model, the coefficient a ranged from 0.130 to 0.240 with an average value of 0.16, while the coefficient b ranged from 0.456 to 0.587 with an average of 0.528. The values of NSE were between 0.861 and 0.930 with an average of 0.886, indicating that the Angstrom model had a high efficiency in radiation estimation. The average MAPE and RMSE were 16.795 and 27.452, respectively. The calibration performance of the Ogelman model was very similar to that of the Angstrom model: the average NSE was 0.895, almost equal to that of the Angstrom model, and the values of MAPE and RMSE were also very close to those of the Angstrom model, either for each location or as a whole. For the temperature-based models, the average value of NSE was 0.673 for the Bristow model.

Correlation analysis identified many meteorological factors that could be used as input variables for training the machine-learning methods, including extra-terrestrial solar radiation (R_e), sunshine hours (S_a), potential sunshine hours (S_p), cloudiness (Cl), mean air temperature (T_a), maximum temperature (T_m), minimum temperature (T_n), diurnal variation of temperature (T_d), vapor pressure (P_e), humidity (H_u), wind speed (W_s), precipitation (P_r), and precipitation events (P_t) (Figure 2). However, the machine-learning models could be over-trained when highly correlated features were used as input variables simultaneously. In this study, when the correlation coefficients among several features were higher than 0.8, only one of those features was used as an input variable for the machine-learning models. According to this rule, together with the consideration of data availability, T_a was kept as an input variable for the machine-learning models, while T_m, T_n, and P_e were removed from the dataset. However, all three sunshine-related features, including S_p, R_e, and S_a, were used as input variables despite their high correlation coefficients, as removal of any one of these features would lead to poor performance of the machine-learning models.

The machine-learning models were also trained with the dataset from 1991 to 2010 for comparison with the results of the empirical models (Table 3). A trial-and-error method was used to tune the machine-learning models and determine the best parameters. For the BP model, the number of units in the hidden layer and the parameter for weight decay were set as 10 and 0.01, respectively. For the SVM model, the kernel used in training and prediction was set as the radial basis function, the cost of constraint violation was tuned as 1, and the value of gamma was determined as 0.125. The two machine-learning methods performed better in calibration, with higher NSE and lower MAPE and RMSE: the average values of NSE, MAPE, and RMSE were 0.938, 14.210, and 20.127, respectively, for the BP neural network, while the support vector machine further improved the calibration performance with a slightly higher NSE of 0.940 and lower MAPE and RMSE of 13.711 and 19.781, respectively.
The calibrated empirical models and the trained machine-learning models were then validated against the observed radiation from 2011 to 2020. The validation results are shown in Table 4. For each empirical model, the overall performance in validation was quite similar to that in calibration. However, compared with their performance in calibration, the machine-learning methods performed worse in validation: the average value of NSE was 0.878 for the BP neural network, much lower than the 0.938 obtained in calibration, and the values of MAPE and RMSE also became larger in validation for the machine-learning methods.
The radiation estimated for 2011-2020 was used to calculate UTCI, and the obtained UTCI was compared with that calculated from observed radiation. The UTCI validation results are shown in Table 5. As a whole, the error in radiation estimation was not amplified, but rather reduced, in the UTCI calculation process. The sunshine-based Angstrom model had a high average NSE value of 0.990 and a low RMSE value of 1.236 (Table 5). Referring to the validation of radiation (Table 4), an average error of 35.4 J/(m²·s) in radiation estimation led to an average error of 1.2 °C in UTCI calculation, which is well within the error range of 2.1 °C identified by sensitivity analysis [4]. The Ogelman model showed very similar performance to the Angstrom model. However, the temperature-based Bristow and Hargreaves models both presented lower NSE and higher MAPE and RMSE than the sunshine-based models. In contrast, the NSE values of the machine-learning methods were 0.992, slightly higher than those of the sunshine-based models; the average RMSE for both machine-learning methods was about 1.1 °C, showing very limited advantage over the sunshine-based models. Considering both accuracy and applicability, the Angstrom model was selected to estimate solar radiation for the UTCI calculation in the regional analysis below, due to its easy calibration and readily available input data. The accuracy of the Angstrom model in calculating UTCI and the day number within each category can be seen in Figures 3 and 4, respectively.
Spatial Analysis of UTCI and Day Number within Each Category
The calibrated model was used to calculate the UTCI in all of the tourism cities from 1961 to 2020. The spatial distribution of the average yearly UTCI and of the days within each category in Chinese tourism cities is shown in Figure 5, and the detailed information is given in Table 6. As a whole, the UTCI increased gradually from north to south in the tourism cities of China, with the exception of Huangshan due to its high altitude (see Table 1). UTCI in the tourism cities of northeast China was lower than 5 °C, while the UTCI reached its highest value of around 28 °C in Sanya, the southernmost part of China. The numbers of days within each category are shown in Figure 5b-h. In contrast with the spatial distribution of days under no thermal stress and slight cold stress, more days under heat stress were identified in the tourism cities of south China (Figure 5b,c), while more days under cold stress were found in the tourism cities of north China (Figure 5f-h). According to category 3, many tourism cities in the lower reach of the Yangtze River, including Wuhan, Nanjing, Hangzhou, and Shanghai, had about one month under strong heat stress, while even longer periods of strong heat stress were found in the tourism cities of the Zhujiang River Delta.
Temporal Trend in UTCI and Day Number within Each Category
The time analysis was further conducted with the calculated UTCI in all of the tourism cities from 1961 to 2020, and the trend analysis of the UTCI and the day numbers within each category is presented in Figure 6. On the whole, the UTCI showed an increasing trend for most of the large tourism cities in China (Figure 6a), and most of these trends were significant at the very high statistical significance level of 99% (z > 2.58). Only 2 out of 35 tourism cities showed negative trends in UTCI: Huhehaote had a negative trend with a very small value of −0.009 °C/a, and the negative trend in Chengdu was negligible, with an even smaller value of −0.001 °C/a. In addition, neither trend was statistically significant (z < 1.96); in other words, the very slight decreases in both locations were not significant in terms of statistical analysis. Generally speaking, the days under no thermal stress increased in most of the tourism cities over China (Figure 6d). The days under slight cold stress increased in most locations in the northern parts of China, while they decreased in most locations in the southern parts of China (Figure 6e). The days under moderate heat stress and strong heat stress generally increased over China from 1961 to 2020 (Figure 6b,c). In contrast with these increasing trends, the days under moderate cold, strong cold, and very strong cold stress showed an overall decreasing trend in most locations over China (Figure 6f-h); in particular, the days under very strong cold stress decreased in all locations of China (Figure 6h). In short, the UTCI increased in most locations of China, accompanied by an overall increase in days under no thermal stress and heat stress, and an overall decrease in days under cold stress (Figure 6a-h).

Figure 6. Trend analysis of the changes in yearly day number within each category from 1961 to 2020. Plus (+), multiplication (×), and asterisk (*) signs denote confidence levels of 90%, 95%, and 99%, respectively.
Optimal Strategy on Estimating Solar Radiation for UTCI Calculation
The sunshine-based Angstrom model was selected as the best choice for estimating solar radiation for UTCI calculation. In this study, the sunshine-based models performed better than the temperature-based models in terms of higher NSE and lower MAPE and RMSE (Tables 3 and 4), which is in agreement with the results of previous reports [33,38]. The machine-learning method is believed to be a promising choice for accurate estimation of solar radiation [41], but its strict requirement of many input variables for model training cannot always be met, which inevitably leads to better performance in calibration but worse performance in validation [57]. This inherent defect was again identified by the higher NSE in calibration and the lower NSE in validation in this study (Tables 3 and 4). In fact, the machine-learning methods only slightly improved the accuracy of radiation estimation compared with the sunshine-based models, but required many more input variables and larger datasets for model training [41]. Due to length limitations, only the BP and SVM methods were implemented in this study. Recently, the Random Forest method has also been identified as an effective algorithm for radiation estimation [58], and it may outperform the BP and SVM methods; involving more machine-learning methods to further improve the accuracy of radiation estimation is therefore necessary in future research.
Based on a sensitivity analysis, Weihs et al. [4] concluded that the maximum uncertainty in solar radiation estimation would be 15% at conventional sites, which might contribute a maximum uncertainty of 2.1 °C to UTCI. In this study, the RMSE of the radiation estimated by the Angstrom model was 32 J/(m²·s) in Beijing, while the average solar radiation was around 300 J/(m²·s) on clear days (Figure 3). This error corresponds to an uncertainty of about 11% in radiation estimation, which further led to an uncertainty of about 1.2 °C in UTCI calculation (Figure 3), just within the error range given by the sensitivity analysis [4]. Therefore, it can be reasonably concluded that accurate UTCI can be obtained through an optimal strategy for radiation estimation. Based on the results and analysis above, the sunshine-based Angstrom model is recommended as the optimal strategy due to its excellent model performance, easy operation, and readily available input variables.
Increase in Yearly Day Number under No Thermal Stress, Accompanied by More Heat Stress Risk in Summer in China
Global warming has been well acknowledged in modern society [59], with its great negative effects in many aspects such as the economy [60], agriculture [61], and global food security [62]. In addition, climate change is also believed to give rise to more extreme weather events, which would pose great risks to public health [63][64][65][66][67]. Considering so many impressive negative impacts, people would naturally envisage that days under no thermal stress would decrease in the context of climate change. However, this study showed the opposite result: the yearly day number under no thermal stress increased in most tourism cities over China in the recent six decades (Figure 6d). To investigate its reliability, the probability of days under each category in the periods 1961-1990 and 1991-2020 was analyzed. Figure 7 clearly indicates that the distribution probability curve slightly right-shifted in the tourism cities of China. This shift can be attributed to the decrease in cold days and the increase in warm days caused by climate warming. For most locations, more days under cold stress changed to days under no thermal stress, while fewer days under no thermal stress shifted to days under heat stress. This might be the exact reason for the increasing days under no thermal stress in most tourism cities of China. Only in very few tourism cities, such as Shenzhen, was this shift not significant, due to the high temperatures in the southern part of the tropical humid zone. In short, it is the asymmetric changes in hot days and cold days that led to the increase in days under no thermal stress within a year in the tourism cities of China. This argument is strongly supported by the previous report of Zhai et al. [68], who identified that the number of hot days displayed a slight increasing trend, while the number of frost days exhibited a significant decreasing trend, in China in recent decades.

The increase in days under no thermal stress mainly occurred in spring and autumn. In summer, the days under no thermal stress did decrease (Figure 8a). Without consideration of the radiation effect on UTCI, Yan et al. [10] also believed that days under no thermal stress have been decreasing in most locations of China from 1961 to 2016 in summer. In contrast to the decrease in days under no thermal stress, the days under strong heat stress showed an increasing trend in most tourism cities in summer (Figure 8b), which would inevitably cause more potential heat threats to public health and makes further investigation of thermal perception essential and urgent [28].
Certainties and Uncertainties in UTCI Estimation over China
It is certain that UTCI deserves its name, as the term "universal" means that UTCI is appropriate for all assessments of outdoor thermal conditions in the major human bio-meteorological applications [7]. Previously, many simple thermal indices were suggested for different regions in China [69][70][71], but none of them could be used for spatial analysis at the national scale over China due to their limited regional adaptability. In contrast, UTCI successfully revealed the overall distribution of the thermal environments in the tourism cities of China (Figure 5), and the results are highly consistent with public cognition. For example, Kunming is called the "spring city" due to its long period of thermal comfort, and it was identified by UTCI as the city with the most days under no thermal stress (Figure 5d). The general distribution of more cold stress in the tourism cities of north China and more heat stress in the tourism cities of south China also agrees well with basic public cognition. However, it should be noted that Huangshan is an exception to this general spatial distribution of UTCI (Figure 5a). Though located in the central part of China, the yearly UTCI of Huangshan is much lower than that of the surrounding cities due to its high altitude (Table 1). This unique UTCI makes Huangshan one of the most attractive tourism destinations in the summertime. Thus, the effect of topographic features on UTCI should be explored in future research on the distribution of thermal environments in China.
The general spatial distribution of UTCI in the tourism cities of China is comparable to previous reports [9,44], with quantitative differences due to different strategies for dealing with the input variables. In the future, the error in UTCI calculation caused by using cloudiness [9] or simple interpolation [44] should be examined deliberately. In addition, the UTCI stress categories should be redefined at the daily scale. Currently, the UTCI category is defined at the hourly scale [16]. However, air temperature has obvious seasonal and daily changes, especially in monsoon climate regions such as China [72], and as air temperature is the dominant element of UTCI, UTCI also changes at daily and seasonal scales. Therefore, both heat stress and cold stress would be underestimated if the daily UTCI were classified according to the current UTCI category defined at the hourly scale. For example, a daily UTCI of 26 °C is defined as no thermal stress (9-26 °C) according to the current hourly category (Table 2). However, there is a great possibility that UTCI is higher than 26 °C in many hours during such a day, so the day should actually be classified as moderate heat stress rather than no thermal stress. Vice versa, a daily UTCI of 9 °C should actually be referred to as a slight cold day, as UTCI would be lower than 9 °C in many hours of the day. Likewise, daily UTCI values within category 3 or category 9 might actually belong to category 2 or category 10. This might be the exact reason that no very strong heat stress (category 10) and no extreme heat stress (category 2) were identified in either this study or any of the previous reports, which obviously contradicts the public cognition that there are many "ice" and "furnace" cities in China. Thus, it is urgent to redefine the UTCI category at the daily scale through comparison of daily UTCI with hourly values in future work.
Conclusions
The sunshine-based Angstrom model performed well in the estimation of solar radiation in the tourism cities of China, with a high NSE of 0.864 and a low RMSE of 35.4 J/(m²·s), which resulted in a high efficiency in UTCI calculation with an NSE of 0.99. The uncertainty in UTCI calculated with solar radiation estimated by the Angstrom model was 1.2 °C, just within the error range obtained by sensitivity analysis. The Angstrom model is therefore proposed as the optimal strategy for solar radiation estimation in UTCI calculation, due to its high accuracy and easy operation.
The spatial distribution of UTCI indicated that the day number under no thermal stress was higher in the tourism cities of central China, within a range from 135 to 225 days, and the largest number of days under no thermal stress occurred in Kunming and Lijiang in southwest China. Very strong cold stress mainly occurred in Harbin and Changchun in northeast China, for between 15 and 25 days in a year. Strong heat stress mainly occurred in the tourism cities of south China, especially southernmost Sanya, for between 40 and 70 days in a year.
Contrary to popular belief, the days under no thermal stress during a year have increased in most tourism cities of China in recent decades in the context of global warming, which can be attributed to the asymmetric changes of a significant decrease in frost days and a slight increase in hot days over China. However, in summer, days under no thermal stress have decreased in most regions of China, accompanied by increasing trends in days under very strong heat stress, especially in developed regions such as the Yangtze River Delta and the Zhujiang River Delta, which would pose great risks to thermal perception and public health in these regions. UTCI can successfully identify the general spatial distribution and temporal trend of thermal environments in most tourism cities of China. However, up to now, all reports on Chinese UTCI have classified daily UTCI values according to the category obtained at the hourly scale, which results in the underestimation of days under extreme hot or cold stress conditions. This is exactly the weakness of UTCI performance in China, i.e., no extreme cold or extreme hot days have been identified from daily UTCI values. Thus, it is urgent to redefine the UTCI category at the daily scale by comparing daily UTCI with the corresponding hourly values in future research.
Design of an Input-Parallel Output-Parallel LLC Resonant DC-DC Converter System for DC Microgrids
Compared with a centralized power system, a distributed modularized power system is composed of several power modules with lower power capacity that together provide sufficient total capacity for the load demand. The current stress of the power components in each module can then be reduced, and the flexibility of system setup is also enhanced. However, the parallel-connected power modules in a conventional system are usually controlled to share the power flow equally, which results in lower efficiency under light-load conditions. In this study, a modular power conversion system for a DC micro grid is developed with a 48 V DC low-voltage input and a 380 V DC high-voltage output. In the developed control strategy, the number of power modules enabled to share the power flow is decided according to the output power under low load demand. Finally, three 350 W power modules are constructed and parallel-connected to set up a modular power conversion system. From the experimental results, compared with the conventional system, the efficiency of the developed power system under light-load conditions is greatly improved. The modularized design of the power system can also decrease the ratio of power loss to system capacity.
1.Introduction
Facing the depletion of energy resources, many countries have conducted research on renewable energy in recent years, hoping to reduce environmental pollution and damage. There are different types of renewable energy, such as wind power and solar power, so micro-grid systems combining different renewable energies have become one of the directions of research [1], [2]. A micro-grid is a system consisting of renewable energy sources, user loads, and energy storage systems, and it can strengthen the security and reliability of the regional power supply. In modern society, most household appliances, solar power generation, and power storage systems use DC power, and supplying them with DC directly can reduce the number of conversion stages. Therefore, the development of decentralized DC micro-grids will become more and more popular.
Because the micro-grids in different areas require different output power, the distributed power system is preferable [3]. This system utilizes a number of small power modules in parallel to supply power to the load, which can effectively reduce the current stress on the power components and reduce the difficulties in the selection of components when designing the circuit.
2.System Structure
The distributed power system presented in this paper has been developed for a distributed micro-power system; its structure is shown in Figure 1. The system architecture consists of full bridge-half bridge LLC resonant DC power converters [4][5][6][7], module micro-controllers, voltage feedback circuits, and a master micro-controller. The system uses the 48 V DC bus as the input and utilizes the master micro-controller to coordinate the power supply of the decentralized power system, so as to provide a stable voltage on the 380 V DC bus. As shown in the figure, the modular power conversion system is run by multiple power modules in parallel. Figure 2 shows the circuit architecture of a full bridge LLC resonant DC power converter. This circuit consists of a full bridge converter, an LLC resonant circuit, an isolation transformer, and a voltage-doubler circuit. The architecture uses phase-shift pulse width modulation (PSPWM) [8] to drive switching elements Q_1 to Q_4. Energy is transferred through the resonant tank formed by the resonant inductor L_r, the resonant capacitor C_r, and the transformer magnetizing inductor L_m; the isolation transformer with a turns ratio of N_P : N_S provides a voltage boost and isolates the input from the output; finally, a voltage-doubler circuit composed of diodes D_1, D_2 and capacitors C_1, C_2 rectifies the voltage and boosts it to 380 V DC.
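The core of the coordination performed by the master micro-controller is deciding how many modules to enable for the present load. The paper does not spell out the exact decision logic, so the following minimal sketch uses a simple threshold rule based on the 350 W module rating as an illustrative assumption.

```python
import math

MODULE_RATING_W = 350            # rated power of each module, from the paper
N_MODULES = 3                    # three parallel modules in the prototype

def modules_to_enable(p_out_w):
    """Enable only as many modules as the present load demands (illustrative threshold rule)."""
    return min(N_MODULES, max(1, math.ceil(p_out_w / MODULE_RATING_W)))

print([modules_to_enable(p) for p in (100, 400, 900)])   # -> [1, 2, 3]
```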
3.Circuit Analysis
The circuit shown in Figure 3 is the equivalent circuit of the LLC resonant circuit. Its input is the square wave V_inac output from the full-bridge converter, which is resonated by the LLC tank and delivered to the equivalent load R_ac. The relation for R_ac is given as equation (1), where R_o is the secondary-side output load. The resonant frequency f_r is given by equation (2). The ratio F_x between the operating frequency f_s and the resonant frequency f_r is given by equation (3). The quality factor Q is given by equation (4). In addition, the ratio K between the total primary inductance and the resonant inductance is given by equation (5). Conducting the impedance analysis based on the equivalent circuit in Figure 3, the voltage gain is found as equation (6):

G ≡ G(j2πf_s) = F_x^2 (K − 1) / [ (K F_x^2 − 1)^2 + F_x^2 (F_x^2 − 1)^2 (K − 1)^2 Q^2 ]^0.5     (6)

Using equation (6) for a numerical analysis, the relation between the voltage gain and the frequency ratio F_x for different quality factors can be obtained, as shown in Figure 4. When the operating frequency is equal to the resonant frequency, the voltage gain is 1.

Figure 4. The relationship of the gain and F_x.

4.Experiment result

A decentralized power system was built for this paper; the physical circuit is shown in Figure 5. The system demonstrates the feasibility of the approach using three converters in parallel. The specifications of each converter are 48 V at the input and 380 V/0.92 A (350 W) at the output. The turns ratio of the primary side to the secondary side of the isolation transformer is 1:4.7, and the switching frequency of the power switching elements is 50 kHz. The three power converters designed for this system are made to the same specification. They transmit their current feedback values over the communication connection required by the master controller, are connected to one another in parallel at the power input and power output, and can operate independently.

Figure 5. Physical system.

Figure 6 shows the comparison between the operating efficiency obtained with the proposed system strategy and that of three power modules always operating in parallel. As shown in the figure, at an output power of 175 W the efficiency reaches up to 91%, 7% higher than that of three modules operating in parallel. When the output power is 350 W, the efficiency of the proposed system is 90.6%, 2.1% higher than that of three modules operating in parallel. The feasibility of the system is confirmed by the experimental waveforms.

Figure 6. Efficiency diagram of the proposed system and the system with three modules in parallel (efficiency η versus output power from 35 W to 1050 W).
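As a quick numerical check of the gain characteristic discussed in the circuit analysis above, the sketch below evaluates the reconstructed gain expression over a sweep of F_x for several quality factors. The tank values are illustrative assumptions (chosen so that f_r lands near the 50 kHz switching frequency quoted in the experiment), not the paper's components, and the standard definitions f_r = 1/(2π√(L_r C_r)) and K = (L_r + L_m)/L_r are assumed.

```python
import numpy as np

def llc_gain(fx, k, q):
    """Normalized LLC voltage gain versus frequency ratio Fx = fs/fr (equation (6) above)."""
    num = fx ** 2 * (k - 1)
    den = np.sqrt((k * fx ** 2 - 1) ** 2 + fx ** 2 * (fx ** 2 - 1) ** 2 * (k - 1) ** 2 * q ** 2)
    return num / den

l_r, c_r, l_m = 12e-6, 840e-9, 48e-6           # illustrative tank values, not from the paper
f_r = 1.0 / (2 * np.pi * np.sqrt(l_r * c_r))    # standard resonant frequency definition
k = (l_r + l_m) / l_r                           # ratio of total primary inductance to L_r

fx = np.linspace(0.3, 2.0, 200)
for q in (0.2, 0.4, 0.8):                       # a few quality factors, as in Figure 4
    peak = llc_gain(fx, k, q).max()
    print(f"f_r = {f_r/1e3:.1f} kHz, Q = {q}: gain at Fx=1 is {llc_gain(1.0, k, q):.2f}, peak gain {peak:.2f}")
```

Note that the gain evaluates to exactly 1 at F_x = 1 regardless of Q, matching the observation in the text that the voltage gain is 1 when the operating frequency equals the resonant frequency.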
5.Conclusion
This paper presents a decentralized power system that can be applied to the DC micro grid. The system architecture modularizes phase-shift full bridge LLC resonant converters and connects their inputs and outputs to the two DC buses in parallel. The power switching elements of the converter have the advantage of zero-voltage switching. In addition, voltage feedback, current feedback, and the control strategy are used to increase or decrease the number of enabled modules under different output power, to reduce switching losses when the load is light and further enhance the overall system efficiency. On the other hand, the system is able to share the current stress among the power modules to achieve the highest efficiency. Three 350 W power modules connected in parallel are developed to emulate a large-capacity distributed power system, and the system can be tested and evaluated under different loads.
Research on Multi-robot Scheduling Algorithm in Intelligent Storage System
As an indispensable core part of the modern industrial system, the intelligent storage system is developing from mechanical automation to robotic intelligence. Intelligent storage systems widely use multiple robots, which can replace human pickers, work efficiently, and effectively connect the operation links. How to exploit the cooperative capability of multiple robots is a key research topic in this field. Based on an improved genetic algorithm, this paper designs a multi-robot scheduling algorithm for the intelligent storage system. The simulation results show that the scheduling efficiency of this method is improved by about 10% compared with the common genetic algorithm, and that it has good stability. The multi-robot scheduling algorithm for the intelligent warehouse system based on the improved genetic algorithm generalizes well.
Introduction
With the development and updating of robot technology, multi-robot technology is gradually becoming a research hotspot in the robotics field. The multi-mobile-robot system is one of the most challenging research areas in robotics. Through the cooperation of multiple robots, it can complete tasks that a traditional single robot cannot. Compared with a single-robot system, a multi-mobile-robot system has more advantages, such as high robustness, the ability to handle complex tasks, high efficiency, high accuracy, rapidity, and stability, so it has attracted more and more attention. Each robot in the multi-robot system performs its assigned tasks, starting from the starting point, executing the related tasks at the designated positions, and returning along a unified return path after the tasks are completed. Nowadays, many industries, such as manufacturing, logistics, and security, rely on the use of multi-robot systems.

The market vitality of emerging industries such as e-commerce, express delivery, and new energy has been continuously released. These industries were the first to start the process of intelligent warehousing, which greatly promotes the demand for intelligent storage systems. Warehouse logistics technology is no longer traditional warehouse management, but is moving toward a mode of mass orders with more varieties, shorter cycles, smaller batches, and more batches. The warehouse intelligent robot is an important part of intelligent logistics equipment with the highest level of automation. It can effectively liberate the labor force and has obvious advantages in saving storage space and improving logistics efficiency. In the intelligent logistics system, intelligent storage is an extremely important link that affects the overall efficiency of logistics. An intelligent storage system uses many intelligent robots, which raises the problem of scheduling these robots, so a multi-robot scheduling system needs to be developed. The quality of the task scheduling strategy directly determines the overall efficiency of the system. The inventory of the whole warehouse is much larger than that of a traditional warehouse, and the requirements for the accuracy and real-time performance of the operation are much higher. Moreover, the orders in the e-commerce warehouse system arrive in real time and have the characteristics of large quantity but few types of goods in a single order. This makes the picking operation of a logistics warehouse in the e-commerce scenario more complex than that in a traditional environment, so it is necessary to design an appropriate warehouse picking system to improve the picking efficiency of the whole warehouse.
Problem description
The core of scheduling research is the problem model and the algorithm. The scheduling problem of multi-robot production and transportation often involves many tasks, many path conflicts, and some sudden failures. The workshop scheduling environment is therefore dynamic, and some system information is uncertain and may change with time, so the robot scheduling problem is a kind of dynamic scheduling problem. For this kind of problem, the algorithm usually needs to solve quickly and efficiently and to meet the real-time requirements of task scheduling in production and transportation. The common scheduling principles are the shortest-path-planning principle, the shortest-waiting-time principle, and the optimal robot work queue principle; in addition, there are composite-index scheduling principles based on a variety of scheduling indicators. Under the shortest-path-planning principle, when scheduling tasks, the scheduling system plans the corresponding path scheme for all assigned tasks, calculates the total travel path of all robots in each scheme, and selects the planning scheme with the shortest total path. This method adopts the shortest-path-first principle to ensure that each robot's travel path is the shortest over all tasks, making the overall vehicle travel distance the shortest. However, this scheduling method ignores possible conflicts between robots, so the task execution time can become longer. The shortest-waiting-time principle means that, in task scheduling, a robot executing tasks may wait due to conflicts or stay idle while waiting for a task to be issued; the purpose of this principle is to minimize the total waiting time of all tasks. Waiting time is the sum of conflict waiting time and idle waiting time. Conflict waiting time refers to the time a robot waits when it encounters conflicts while executing tasks. Idle waiting time refers to the time a robot waits for the task scheduling system to issue the next task.
Theoretical basis
In this paper, an improved genetic algorithm is used to schedule multiple robots in the intelligent storage system. According to the principle of survival of the fittest, the genetic algorithm establishes a general framework for solving complex problems: after the complex problem is abstracted, it is solved by a series of selection, crossover, and mutation operations. In this paper, the traditional genetic algorithm is improved by introducing elite retention and an adaptive parameter adjustment mechanism to solve the combinatorial optimization problem of scheduling tasks. The basic flow chart is shown in Figure 1, and the specific operations are as follows. First, initialize the relevant configuration parameters and update the multi-robot task costs. Then, randomly generate the parent chromosome population, and calculate and sort the normalized fitness according to the task cost. If the convergence condition is satisfied, output the optimal multi-robot task allocation combination to complete the task scheduling and stop; otherwise, continue with the following steps. Copy a certain proportion of the best parent chromosomes directly to the offspring population, and select the remaining offspring chromosomes from the whole parent population, so that the offspring population and the parent population are of equal size. Only the single best parent chromosome does not participate in crossover; the offspring chromosomes selected adaptively for crossover exchange genes with each other. Likewise, only the single best parent chromosome is exempt from mutation, and the remaining offspring chromosomes are mutated adaptively. Finally, calculate the normalized fitness of the new offspring population, rank the individuals by fitness to form the latest parent population, and repeat the genetic process until the convergence condition is met, then output the scheduling scheme.
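A minimal sketch of this loop is given below. The chromosome encoding (5 × 20 matrices of integers in [1,100], as described in the following subsections), the placeholder cost function, and the fixed crossover and mutation rates of 0.8 and 0.1 (the values reported later in the parameter-control subsection) stand in for the paper's exact implementation; the adaptive adjustment of the rates is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
POP, TASKS, ROBOTS, GENS = 50, 5, 20, 200
PC, PM = 0.8, 0.1                                        # crossover and mutation rates from the text

def cost(ind):
    """Placeholder task cost: the largest per-robot column sum (lower is better)."""
    return ind.sum(axis=0).max()

pop = rng.integers(1, 101, size=(POP, TASKS, ROBOTS))    # 50 chromosomes of shape 5 x 20
for gen in range(GENS):
    costs = np.array([cost(ind) for ind in pop])
    elite = pop[costs.argmin()].copy()                   # elite retention
    fit = (1.0 / costs) ** 3                             # reciprocal cost, cubed
    prob = fit / fit.sum()                               # normalized fitness

    idx = rng.choice(POP, size=POP, p=prob)              # roulette-wheel selection
    pop = pop[idx]

    for i in range(0, POP - 1, 2):                       # single-point crossover
        if rng.random() < PC:
            cut = rng.integers(1, TASKS)
            pop[i, cut:], pop[i + 1, cut:] = pop[i + 1, cut:].copy(), pop[i, cut:].copy()

    mask = rng.random(pop.shape) < PM                    # random-reset mutation
    pop[mask] = rng.integers(1, 101, size=int(mask.sum()))

    pop[0] = elite                                       # keep the best chromosome unchanged

print("best cost:", min(cost(ind) for ind in pop))
```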
Initial population
In general, the initial population is randomly generated so that the selected chromosomes are evenly distributed. In some cases, solutions obtained from other optimization algorithms are used for the initial population; although there is a risk that the optimization process will be misled into a local optimum, this seeding method has been proved very effective in some cases. After the coding method is determined, the initial population can be formed according to the generated tasks. Here, each individual in the population is represented by a 5 × 20 matrix, and each population contains 50 individuals. The first-generation population is generated by random numbers in the range 1-100. The optimal and worst individuals in the initial population are recorded by calculating the fitness, and the optimal individual of the first generation is recorded as the best individual so far. The individuals in the initial population are thus randomly composed 5 × 20 matrices with entries in [1,100].
In each initial individual, the column number represents the robot number, and the row number represents the task number to be performed by each robot. The primary population contains 50 individuals, so the primary population forms a 3-D matrix of size 5 × 20 × 50.
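A one-line initialization of this 5 × 20 × 50 population with integer entries in [1,100] could look like the following sketch; the random seed is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)                         # illustrative seed
population = rng.integers(1, 101, size=(5, 20, 50))     # tasks x robots x individuals
print(population.shape)                                 # (5, 20, 50)
```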
Evaluation mechanism
The evaluation mechanism is the fitness function, which evaluates all chromosomes in the population. The fitness function of a chromosome reflects the adaptability of the chromosome to the environment, that is, the survival and reproduction ability of the chromosome in the next generation. In this paper, the objective function deals with the time required for the robots to visit the n task points and return to the starting point, and the goal is to minimize this time. The process of obtaining the objective function is described in detail below. In the genetic algorithm, the fitness function is used to evaluate the quality of individuals in the population: the larger the fitness, the better the individual. We first calculate the value of an indicator and then take its reciprocal, so that a smaller indicator yields a larger value, and finally raise it to a power to strengthen the differences. After normalization, the fitness ratio of each individual in the population can be obtained; the larger the proportion, the higher the probability of being selected for population hybridization. The relevant quantity is the distance cost of the robot with number r in the nth generation, and D_gn is taken as the highest distance cost in each individual.
Reproduction and variation
The purpose of reproduction is to retain the good characteristics of good individuals in the population and transmit them to the next generation at a higher proportion; therefore, reproduction does not produce new individuals. In the genetic algorithm, chromosomes are copied from the previous generation to the next generation according to the normalized value of the fitness function. The proportional selection is based on the roulette strategy, that is, selection is proportional to fitness. This means that the probability of selecting chromosomes with high fitness for reproduction is higher than that for chromosomes with low fitness. Based on the fitness calculated for each individual, the proportion of each individual's fitness in each generation is used as its selection probability. Mutation behaviour is accidental and has no definite result: a good mutation will make the chromosome better, but a bad mutation will destroy excellent genes. Generally, there are two kinds of mutation behaviour: internal mutation and external mutation. External mutation introduces innovation; it breaks through the traditional behaviour, stimulates global search, and replaces an original gene with a new gene. Internal mutation occurs within the chromosome and is not introduced externally; strictly speaking, it is less like mutation and more like internal gene exchange. In the multi-robot multi-task assignment problem, the introduction of external genes may destroy the original genes and cause gene conflicts within the chromosome, which is not suitable for practical application. Roulette selection ensures that excellent individuals are more likely to be selected while inferior individuals still have some probability of being selected, and the selected individuals carry out the next operation. This not only ensures the diversity of the population but also contributes to the convergence of the algorithm. The selection operation is as follows: we take the reciprocal of the distance cost and raise it to the third power to strengthen the differences,

fit_r = (1000 / D_r)^3,   P_r = fit_r / Σ_{i=1}^{50} fit_i,

so the fitness of each individual is obtained; the fitness is then normalized to facilitate the selection operation and to obtain the selection probability of each individual in the population.
Parameters control
Crossover is a recombination operator that follows the copy operation. Its function is to combine parts of parent chromosomes to produce new individuals for the next generation. Individuals are randomly selected according to a predefined probability. The crossover operator used for the first part of the chromosome is order crossover; for the second part of the chromosome, single-point crossover is used. Mutation randomly modifies some individuals with a small predefined probability. The mutation operator used for the first part of the chromosome, which is composed of decimal digits, is inversion; inversion operates within a chromosome and ensures that the offspring produced represent "legal" tours. For the second part of the chromosome, mutation flips a randomly chosen gene: a value of "0" is changed to "1", and vice versa. The control parameters affect the course of the genetic algorithm, so they must be defined with the convergence rate and the final result in mind. The most important control parameters are the population size and the crossover rate. The population size determines the number of chromosomes, and therefore how much genetic material is available during the genetic search. The crossover rate determines how often the crossover operator is applied to the population's chromosomes, thus generating new populations: the higher the crossover rate, the more new individuals are introduced into the population. The crossover rate is usually between 0.6 and 1.0; here it is chosen as 0.8. The mutation rate determines the probability of a gene's value changing on the chromosome. Mutation opens up regions of the search space that have not yet been explored; however, the mutation rate should not be too high, because that would increase the randomness of the search. The mutation rate is usually less than 0.4; in this case, after several trial runs, it is selected as 0.1. A sketch of the route-part operators follows below.
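A minimal sketch of the two route-part operators named above (order crossover and inversion mutation), assuming the first chromosome part is a permutation of task indices; the rates 0.8 and 0.1 are the values chosen in this paper, while the helper names are ours:

```python
import random

CROSSOVER_RATE = 0.8
MUTATION_RATE = 0.1

def order_crossover(p1, p2, rng=random):
    """Order crossover (OX): copy a slice from p1, fill the rest in p2's order.

    The child is always a valid permutation, i.e. a "legal" tour.
    """
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child[a:b]]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def inversion_mutation(route, rng=random):
    """Reverse a random segment; the result is still a valid permutation."""
    route = route[:]
    if rng.random() < MUTATION_RATE:
        a, b = sorted(rng.sample(range(len(route)), 2))
        route[a:b] = reversed(route[a:b])
    return route

p1, p2 = [0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 4, 2]
child = order_crossover(p1, p2) if random.random() < CROSSOVER_RATE else p1[:]
child = inversion_mutation(child)
```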
Convergence property
In many cases, convergence is handled by predefining the maximum number of iterations (generations). However, fixing the maximum number of iterations in advance means that the duration of the genetic search is fixed regardless of its success, and it is difficult to determine in advance how many iterations are needed to find a near-optimal solution. Therefore, the quality of the genetic algorithm should be evaluated online: the condition for terminating the evolution is that the same best solution persists for a predefined number of generations. Regarding the initial population size, it is clear that the population size significantly affects both the convergence speed and the quality of the solution. With a large population, individuals are distributed more evenly over the solution space of the problem and the probability of finding the optimal solution increases; however, a population that is too large imposes heavy computational pressure and slows convergence. Conversely, with a small population the computational pressure is reduced and convergence is fast, but because the solution space is covered incompletely, the result may be only a local optimum, which is not ideal. In this paper, the population size is adjusted dynamically with the size of the scheduling problem, that is, with the number of commodity scheduling tasks. Therefore, the algorithm is terminated by defining in advance the number of consecutive generations for which the same optimal chromosome must appear, as sketched below.
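A minimal sketch of this online stopping rule; the stall limit of 30 generations is an illustrative value, not one given in the paper, and the function names are ours:

```python
STALL_LIMIT = 30  # stop after this many generations without improvement (illustrative)

def evolve(population, step, best_cost_of):
    """Run generations until the best solution is unchanged for STALL_LIMIT steps.

    `step` produces the next generation; `best_cost_of` returns the best cost found.
    """
    best, stall = float("inf"), 0
    while stall < STALL_LIMIT:
        population = step(population)
        cost = best_cost_of(population)
        if cost < best:
            best, stall = cost, 0  # improvement: reset the stall counter
        else:
            stall += 1             # the same best solution appeared again
    return population, best
```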
Simulation experiment
In order to verify the effectiveness of the algorithm, the experiment is simulated in MATLAB. The experimental environment is set with n = 100 order tasks, 100 shelves, 1 picking table and 5 mobile robots, where each grid cell represents 1 m × 1 m and the mobile robots are located at the picking table at the initial moment. To evaluate the performance of the order task scheduling algorithm based on order fitness and discrete particle swarm optimization, the algorithm is implemented and simulated on a large-scale intelligent-warehouse simulation and demonstration platform. The main simulation variable is the number of robots, from 50 to 140 in increments of 10. Orders arrive according to a Poisson process, and the total number of orders processed is set to 1000. The quantities simulated are the time cost for the whole intelligent warehouse system to complete these orders under the improved order scheduling algorithm, as well as the overall distance travelled by the mobile robots. As can be seen from Table 1, the average waiting time of the traditional genetic algorithm is 15.28 seconds, while that of the improved genetic algorithm is 13.83 seconds; in terms of waiting time, the efficiency of the improved genetic algorithm is improved by 9.61%.
In order to verify the stability of the simulation system in this paper, 100 orders are generated randomly in a single simulation run, and each module of the simulation software executes these 100 orders in a cycle. The total time is 1078.2 seconds, and the average time from generation to completion of each order is 10.782 seconds; every order is completed well and efficiently, which shows that the software simulation system in this paper has a certain stability.
Conclusions
Based on the principles of multi-robot scheduling, combined with an online scheduling strategy, a task scheduling strategy with sub-task queues and priorities is designed. This paper defines the scheduling task in the multi-robot scheduling system and derives the state-machine model of the task. The simulation results show that the improved genetic algorithm obtains better solutions: the total cost of the multiple mobile robots is small and the task allocation is balanced. At the same time, the convergence of the improved genetic algorithm is better than that of the traditional genetic algorithm. The improved genetic algorithm therefore has a clear advantage in solving the cooperative scheduling problem of multiple mobile robots in an intelligent warehouse.
Prediction of Dry Matter Intake in Lactating Holstein Dairy Cows Offered High Levels of Concentrate
: Accurate estimation of dry matter intake (DMI) is a prerequisite to meet animal performance targets without penalizing animal health and the environment. The objective of the current study was to evaluate some of the existing models for predicting DMI when lactating dairy cows were offered a total mixed ration containing a high level of concentrates and locally produced agricultural by-products. Six popular models were chosen for DMI prediction (Brown et al., 1977; Rayburn and Fox, 1993; Agriculture Forestry and Fisheries Research Council Secretariat, 1999; National Research Council (NRC), 2001; Cornell Net Carbohydrate and Protein System (CNCPS), Fox et al., 2003; Fuentes-Pila et al., 2003). Databases for DMI comparison were constructed from two different sources: i) 12 commercial farm investigations and ii) a controlled dairy cow experiment. The model evaluation was performed using two different methods: i) linear regression analysis and ii) mean square error prediction analysis. In the commercial farm investigation, DMI predicted by Fuentes-Pila et al. (2003) was the most accurate when compared with the actual mean DMI, whilst the CNCPS prediction showed a larger mean bias (the difference between mean predicted and mean observed values). Similar results were observed in the controlled dairy cow experiment, where the mean bias of Fuentes-Pila et al. (2003) was the smallest of all six chosen models. The more accurate prediction by Fuentes-Pila et al. (2003) could be attributed to the inclusion of dietary factors, particularly fiber, as these factors were not considered in some models (i.e. NRC, 2001; CNCPS (Fox et al., 2003)). Linear regression analysis had little meaningful biological significance when evaluating models for prediction of DMI in this study. Further research is required to improve the accuracy of the models; more mechanistic approaches that investigate feedstuffs (common to the Asian region), animal genotype, environmental conditions and their interactions may be recommended, as the majority of the models employed are based on empirical approaches.
INTRODUCTION
Ruminants need to be fed according to their nutrient requirements to achieve their optimum performance in terms of milk and meat production. However, providing an adequate amount of nutrients in terms of energy and protein to dairy cows is a challenging task due to many complex factors. Accurate estimation of dry matter intake (DMI) is a prerequisite for the formulation of diets to optimize milk production without compromising animal welfare (National Research Council, 2001). As such, this will contribute towards more sustainable production methods, and in particular, efficient use of nutrients, e.g. nitrogen, which has an impact on the environmental footprint.
In the past, most intake prediction equations relied upon the live weight of the animal and its current level of productivity. These approaches are of concern as they do not take into consideration forage composition or the nature of any concentrate fed (Beever, 1993). This is particularly the case with the equations proposed by the Agricultural Research Council (ARC, 1980), National Research Council (NRC, 2001) and the Agriculture Forestry and Fisheries Research Council Secretariat (AFFRCS, 1999). Further considerations such as environmental factors were included in the model suggested by the Cornell Net Carbohydrate and Protein System (CNCPS; Fox et al., 2003). The composition of feedstuffs was taken into account in the equations proposed by others (Brown et al., 1977; Rayburn and Fox, 1993; Fuentes-Pila et al., 2003). Roseler et al. (1997) partitioned factors that could mediate effects into live weight (17%), milk yield (45%), feed offered and herd management (22%), body condition score (5%) and climatic factors (10%). [Table 1 appears here in the original layout; its footnote defines: MY = milk yield (kg/d); FY = milk fat yield (kg/d); PY = milk protein yield (kg/d); MOL = month of lactation (equal to 1, 2 or 3 for the first three months and 4 for the remaining months of lactation); RADF = ration acid-detergent fiber (% of DM); RNDF = ration neutral-detergent fiber (% of DM); RCP = ration crude protein (% of DM); WIM = weeks in milk; FBW = full body weight (kg); FCM = 4% fat-corrected milk (kg/d); TEMP = temperature adjustment factor for DMI; MUD = mud adjustment factor for DMI; Lag = adjustment factor for DMI during early lactation; WOL = week of lactation.] With advances in computerized models, the accuracy of the prediction has been improved by recognizing factors such as: types of feed offered, level of feeding, ration formulation, quality of feed, body condition, stage of lactation, reproduction and climate (Mazumder and Kumagai, 2006). Dairy production systems in some Asian countries are often intensive and rely heavily on imported feedstuffs. The Korean Feeding Standard for Dairy Cattle (Ministry of Agriculture and Forestry, 2002) reported that more than 60% of diets offered to dairy cows are based on imported concentrates and agricultural by-products in order to improve the overall efficiency of economic productivity. In part, this situation is attributable to poor production of good quality forage per unit area and the costs of production of such forage. Hence, the supply of good quality forage, vital for ruminant production, is not as stable as in Western European or North American countries. Therefore, the aim of this study was to evaluate existing models in order to predict DMI when lactating cows were offered high concentrate-based diets containing agricultural by-products.
Dry matter intake prediction models
Six equations to predict DMI were chosen and are presented in Table 1. Both AFFRCS (1999) and NRC (2001) models estimate DMI using only animal factors such as body weight, milk yield and days in milk, whilst CNCPS (Fox et al., 2003) includes environmental factors such as ambient temperature. The chemical composition of feedstuffs, such as crude protein (CP), neutral detergent fiber (NDF) and acid detergent fiber (ADF), as well as animal productivity, were particularly important for the other equations employed (Brown et al., 1977;Rayburn and Fox, 1993;Fuentes-Pila et al., 2003) (Table 1).
Data collection from commercial dairy farms
The data collected covered a total of 430 lactating Holstein cows from 12 different commercial dairy farms in Gyeonggi Province, Republic of Korea. Collection was conducted by visiting the farms regularly once a month for 6 months, from December 2003 to May 2004. Commercial dairy cow concentrates were supplied by a computer-based concentrate feeder according to the production level of each cow. Home-made total mixed rations (TMR) consisting primarily of roughage, or a commercial TMR, were offered to cows twice a day (between 8:00 and 10:00 and between 16:00 and 18:00, depending on the individual farm). Cows were all housed in freestalls with fermented sawdust bedding. Within the survey group, cows were excluded from data collection if they were more than 500 days in milk (DIM), yielded on average less than 10 kg milk/day, or had a somatic cell count (SCC) of more than 500,000 cells/ml. In the surveyed farms, commercial concentrate, commercial TMR (dry or wet), cracked corn, beet pulp, brewer's grain, cottonseed, corn silage, alfalfa hay, Klein grass hay, Bermuda grass hay, tall fescue grass hay, timothy grass hay, orchard grass hay, oat hay and rice straw were used to formulate the feed (see Table 2 for the chemical composition of individual feed ingredients), and the chemical analysis of nutrients in the diets of the 12 farms is shown in Table 3.
Body weight, body condition score (BCS), the average amount of TMR supplied, and the amount of concentrate feeds fed to individual cows were monitored by regular monthly visits. Parity, days in milk, monthly average milk yield (MY), milk fat, milk protein, total solid (TS), milk urea nitrogen content (MUN) and SCC were available from the database of the Korean Animal Improvement Association (Seoul, Republic of Korea).
Animal experiment for the model evaluation
To examine the DMI of Holstein lactating cows under more controlled conditions, 24 lactating Holstein cows were used. For 2 weeks prior to the initiation of the experiment, cows were adapted to individual electronic feeding gates (American Calan Inc., North-wood, NH, USA) and to the experimental diets. Animal feed was supplied as a TMR containing commercial dairy concentrates, cracked corn, wheat bran, cottonseed, alfalfa hay, rye straw and Sudan grass silage. The chemical composition of the diets is presented in Table 4. The body weights of the cows were measured on the first and last day of each experimental period and mean values are presented (Table 5). Milking was performed using a tandem parlor system (DeLaval, Sweden) twice a day at 04:00 and 16:00, and milk yields were recorded by in-parlor milk meters (Alfa-Laval, DeLaval, Sweden). The diets were offered at 10:00 and 17:00 in two equal portions at 110% of the previous day's intake level; refusals were measured the next day at 09:00 and individual feed intake assessed. Observed mean milk yield and milk composition, body weight, parity, BCS, DIM and days pregnant are presented in Table 5.
Environmental conditions
During the collection of commercial farm and animal experimental data, information on average daily temperature, maximum and minimum temperatures, relative humidity and wind speed were available from the Korean Meteorological Administration (Seoul, Republic of Korea). These are presented in Table 3 for the commercial farm survey period and in Table 5 for the controlled experiment.
Chemical analysis
The DM, ash, CP, ether extract (EE), ADF and acid detergent lignin (ADL) contents of the feed samples from both the commercial farm data and the animal experiment were analyzed by Association of Official Analytical Chemists (1990) methods. NDF was analyzed according to the method of Mertens (2002) and neutral detergent insoluble crude protein (NDICP) and acid detergent insoluble crude protein (ADICP) were analyzed according to the methods of Licitra et al. (1996). Milk composition was analyzed using an automatic milk composition analyzer (MilkoScan, System 4300, Foss Electric, Denmark).
Model evaluation, calculation and statistical analysis
The differences between observed and predicted DMI values from the commercial farm survey and the animal experiment were validated by the paired t-test procedure using SAS Software version 8.02 (SAS Institute, 2001). Model evaluation included a rigorous statistical component, and in this study two different methods were used to evaluate the accuracy of predicted values. The first was linear regression analysis, which is often used to evaluate predictions by regressing actual values on predicted responses. The second was root mean square prediction error analysis, as advocated by several authors (Kohn et al., 1998; Dhanoa et al., 1999; Chaves et al., 2006) who have shown that a measure of how well model predictions fit observed data can be calculated as the root mean square prediction error: RMSPE = √(Σ(Oi − Pi)²/n), where Oi and Pi are the observed and predicted values and n is the number of observations. This term is the square root of the estimate of the variance of observed values about the predicted values. The RMSPE can be partitioned in many different ways to identify systematic problems with models (Kohn et al., 1998) and was divided into two terms in this study: the mean bias and the residual error. The mean bias represents the average inaccuracy of model predictions across all data, and the residual error was defined as the remaining error in model prediction after accounting for the mean bias. The residual error is also referred to as the prediction error excluding mean bias. [A footnote to Table 4 listing the concentrate ingredients used in the dairy cow experiment (corn gluten meal, coconut meal, palm meal, wheat bran, limestone, dicalcium phosphate, salt and vitamin mixture, with their proportions) and noting that the values were estimated by the NRC nutrient requirements of dairy cattle program (2001, version 1.0) appears here in the original layout.]
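A minimal sketch of this decomposition, with hypothetical DMI arrays for illustration; `rmspe_decomposition` is our own name, and the identity MSPE = (mean bias)² + residual error follows the partition described above:

```python
import numpy as np

def rmspe_decomposition(observed, predicted):
    """Split mean square prediction error into mean bias and residual error.

    MSPE = (mean bias)^2 + residual error; RMSPE is its square root.
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = observed - predicted
    mspe = np.mean(errors ** 2)
    mean_bias = np.mean(errors)        # average over/under-prediction
    residual = mspe - mean_bias ** 2   # error remaining after the mean bias
    return np.sqrt(mspe), mean_bias, residual

# Hypothetical DMI values (kg/d), for illustration only
obs = [22.1, 24.3, 23.0, 25.6, 21.8]
pred = [21.0, 23.5, 24.1, 24.0, 22.5]
rmspe, bias, resid = rmspe_decomposition(obs, pred)
```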
DMI prediction from the commercial farm investigation
Mean predictions of DMI from the six model simulations, compared with the actual observed values from the commercial farm surveys of 430 lactating dairy cows, are presented in Table 6. The actual mean DMI of the 12 farms is given in Table 6. The CNCPS model (Fox et al., 2003) estimated the DMI at 13% lower than the observed value, whilst Fuentes-Pila et al. (2003) most closely predicted the actual DMI when compared with the observed mean value. When the model evaluation was conducted with the linear regression method, predicted values were significantly correlated with actual values for all models (p<0.001, Table 6), although R² values varied substantially and were particularly low for the models by Rayburn and Fox (1993) (R² = 0.07) and Fuentes-Pila et al. (2003) (R² = 0.10). However, the slopes of the regression lines from all six models were significantly different from the theoretical value of 1.0, and unexplained sources of variation (i.e. high values of RSD) were observed (Table 6). It is common in the literature for models relevant to feeding lactating dairy cows to be evaluated by regression of observed values against predicted responses (i.e. DMI in the current study; Ingvartsen, 1994). However, the data provided by simple regression analysis can be ambiguous in testing the null hypothesis and lack sensitivity (Mitchell, 1997; Dhanoa et al., 1999; St-Pierre, 2001), and thus are not able to provide a reliable interpretation of these relationships (Chaves et al., 2006). Indeed, regression equations (Table 6) with various ranges of slopes and intercepts larger than zero have little biological meaning, even though they all appeared to be statistically significant.
When model predictions were tested using measures of deviation, mean bias was significantly different from zero for DMI for all models proposed (Table 6), suggesting that model predictions were not as accurate as expected. In Table 6, the residual error terms represent the error in prediction after accounting for the mean bias (see Materials and Methods); this residual error was highest for the model of Fuentes-Pila et al. (2003), even though the mean bias was the smallest for this model among the six models employed. Conversely, despite a larger mean bias, the residual error was relatively small in the prediction of the CNCPS model (Fox et al., 2003).

DMI prediction from the dairy cow experiment

Table 7 shows the observed and predicted DMI values from the dairy cow experiment conducted at the Konkuk University Research Farm. The actual mean DMI was 25.7 kg/d. The prediction by Fuentes-Pila et al. (2003) was the closest to the actual DMI value, whilst there was some 22% difference between the observed value and that predicted by the CNCPS model (Fox et al., 2003). Although linear regression analyses of observed against predicted values were all significant (p<0.001), interpretation of the results was rather unreliable due to the low R² values, which explained only limited variation in DMI (Table 7). The RMSPE analysis provided a more accurate interpretation of the predicted values; for example, as a component of the RMSPE analysis, the mean bias from all six models was statistically different from zero, again indicating the inaccuracy of all of the models (Table 7). Interestingly, unlike in the commercial farm survey, the CNCPS model (Fox et al., 2003) showed the greatest residual error (4.53) among the six models, indicating some remaining error even after accounting for mean bias. Apart from those used in the current study, numerous other models have been recommended in the literature for the accurate prediction of DMI in the ruminant animal (see review by Ingvartsen, 1994). Many of these models were developed based on single or multiple regression techniques using empirical data, which makes it difficult to improve their accuracy unless a new set of data becomes available (Ingvartsen, 1994; Forbes, 1995). Ingvartsen (1994), in a substantial review of DMI prediction, concluded that animal and food factors (especially parity, stage of lactation, and an expression of live weight, i.e. metabolic live weight) should be more carefully considered to make better predictions. Some reported energy-corrected milk yield rather than milk yield as the primary factor in their DMI prediction equation (Mazumder and Kumagai, 2006), whilst others suggested that introducing lipostatic feedback mechanisms into the prediction equation should improve body weight and DMI prediction (Ellis et al., 2006). Without doubt, the major reason for this interest is the impact that feed intake has on animal performance.
Of all six models, the model proposed by Fuentes-Pila et al. (2003) predicted DMI most closely in both the commercial farm investigation and the dairy cow experiment. One possible reason could be its consideration of the function of individual feedstuffs with regard to effective NDF and/or ADF consumption by dairy cows, as also seen in the equation by Brown et al. (1977). Many researchers have suggested that NDF could be the most important factor for estimating the range of DMI, as it is a major factor in gut-fill (Waldo, 1986; Mertens, 1994). Mertens (1997) also summarized the importance of NDF in the dairy ration in relation to animal health and performance, especially where forage:concentrate ratios are concerned. Thus, inclusion of NDF (and/or ADF) as a factor to estimate DMI may improve the accuracy of the model. It should be noted that the larger residual error of this model's prediction compared with those of the other models (Tables 6 and 7) suggests that accurate prediction of mean values does not necessarily demonstrate good predictability in the current study (Chaves et al., 2006).
On the other hand, the poor prediction in terms of mean bias by the CNCPS model (Fox et al., 2003) was notable, and perhaps unexpected, as this equation has been evaluated frequently and robustly and widely adopted in many other countries (i.e. Chiou et al., 2006) as a feeding standard for animal production. Chaves et al. (2006) discussed that a model such as CNCPS may predict DMI inaccurately because it is a requirement system, not a response system; this distinction was further reviewed in the study of St-Pierre and Thraen (1999). For instance, DMI prediction is based on the input of milk production, whose level and composition are used to calculate the energy and nutrients required. However, factors that affect production responses, such as feeding value and animal responses, are not accounted for in the model (i.e. Sunagawa et al., 2007), and the inability to explain nutrient partitioning between the various productive processes can contribute to poor DMI prediction (Chaves et al., 2006). Lanzas et al. (2007) proposed that inclusion of dietary factors in DMI prediction is necessary in their recent revision of the CNCPS feed carbohydrate fractionation scheme for formulating rations for ruminants. By contrast, the results predicted by the AFFRCS (1999) model in Table 6 showed the least residual error, indicating better predictability of DMI, although the mean bias from both evaluations was relatively large.
In the present study, we did not compare high concentrate-based diets with forage-based ones for DMI prediction. However, feeding more than 65% concentrate is not common, especially in Western European countries where forage-based feeding systems, either fresh or conserved, play an important role in dairy production. Hence, we suggest that high concentrate-based feeding systems might have contributed to the bias in predicting DMI with the models chosen in this study.
Further research is needed to address the issues raised above, and much attention has to be paid to developing a modified or new model to predict DMI more accurately for lactating dairy cows reared in Asian countries. As a consequence of variation in individual feedstuffs, often caused by constraints in cereal trading and the use of locally produced agricultural by-products, accurate estimation of DMI will always be a challenging task. To achieve this goal, more mechanistic approaches, rather than simple empirical associations, are recommended for investigating diet and animal interactions under non-standard environmental conditions, animals or feeds (Kohn et al., 1998; Martin and Sauvant, 2007). Any improvement will help producers to achieve productivity and profitability goals and, in the end, will contribute to the overall efficiency and sustainability of the ruminant agricultural industry, especially in Asian countries.
An Improved Protocol for Establishment of AML Patient-Derived Xenograft Models
Summary Patient-derived xenografts (PDXs) are the most valuable tool for preclinical drug testing because they retain the genetic diversity and phenotypic heterogeneity of the original tumor. Acute myeloid leukemia (AML) remains difficult to engraft in immunodeficient mice. This is particularly true for long-term frozen patient specimens. This protocol is designed to establish PDXs of human AML with improved engraftment rates. The optimized approach increases the viability of patient cells before implantation, efficiently monitors in vivo engraftment, and maximizes bone marrow collection. For complete details on the use and execution of this protocol, please refer to Salik et al. (2020) and Lynch et al. (2019).
Patient samples frozen in DMEM supplemented with 20% FBS and 10% DMSO have been stored in liquid nitrogen for 10-30 years. Thus, the thawing process is one of the most critical steps because it determines the viability of AML patient cells and the ultimate success of cell engraftment in mice.
1. Thaw frozen patient cells rapidly in a water bath at 37°C (<1 min).
2. Immediately transfer the thawed cells dropwise into a large volume of pre-warmed 20% FBS-containing RPMI-1640 medium in a 50 mL centrifuge tube.
3. Filter the cell suspension through a 40 μm cell strainer to remove cell clumps and debris.
4. Centrifuge the tubes at 250 × g for 5 min at 4°C.
5. Discard the supernatant and resuspend the cell pellet in 500 μL of 0.25% FBS-containing PBS.
6. Count the number of cells in a hemocytometer.
Note: Perform a 1:10 dilution cell count by resuspending 5 μL of cell suspension in 45 μL trypan blue.
7. Determine the number of viable patient cells to be injected into NSG (NOD.Cg-Prkdc^scid Il2rg^tm1Wjl/SzJ) mice.
Note: Divide the total number of viable cells by the number of mice to be injected to calculate how many cells will be injected into each mouse.
The mouse number largely depends on the total number of viable cells. If possible, inject more than 1 × 10⁶ viable cells per mouse, into at least three individual NSG mice. Otherwise, a minimum of 5 × 10⁵ cells will need to be injected per mouse (a minimal calculation sketch follows after the injection steps below).
8. Transfer the cell suspension to an Eppendorf tube.
9. Keep the Eppendorf tubes on ice for injections.
10. Perform intravenous (IV) injections in NSG mice.
Note: Prior to injection, warm mice under a heat lamp till tail vein dilation (5-10 min).
Place mouse cages in the front of a heat lamp and no closer than 15 cm from the lamp. Monitor all mice being heated frequently. If mice are observed to be inactive, panting, sweaty, or have red extremities, remove cages from the heat before mice may experience heat exhaustion.
Note: Load a syringe with patient cells and inject the cells into the lateral tail vein using a 27- or 28-gauge sterile needle.
Ensure there are no air bubbles in the syringe. The maximum recommended injection volume is up to 5 mL/kg. IV injections should be performed by experienced researchers. After mice are injected with patient cells, assess the engraftment of human leukemic cells in mouse peripheral blood starting from 3-4 weeks after implantation. Peripheral blood will be collected and analyzed once per week over a period of 6 months.
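A minimal sketch of the per-mouse cell-number arithmetic from steps 6-7 above, assuming the standard hemocytometer conversion of 10⁴ cells/mL per counted square; all sample numbers and function names are illustrative:

```python
def viable_cells_total(live_count, squares, dilution, volume_ml):
    """Standard hemocytometer estimate: cells/mL = mean count x dilution x 1e4."""
    cells_per_ml = (live_count / squares) * dilution * 1e4
    return cells_per_ml * volume_ml

def cells_per_mouse(total_viable, n_mice, minimum=5e5):
    """Divide viable cells across mice; flag doses below the protocol minimum."""
    per_mouse = total_viable / n_mice
    if per_mouse < minimum:
        raise ValueError("fewer than 5e5 viable cells per mouse; use fewer mice")
    return per_mouse

# Illustrative: 180 live cells over 4 squares, 1:10 trypan blue dilution, 0.5 mL
total = viable_cells_total(180, 4, 10, 0.5)   # = 2.25e6 cells
dose = cells_per_mouse(total, 3)              # ~7.5e5 cells per mouse
```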
Note: AML patient cells often take 3-6 months to engraft depending on the number and quality of viable cells injected in NSG mice.
11. Prepare 1× RBC lysis buffer.
a. Perform a 1:10 dilution of BD Pharm RBC Lysing solution (10×) with ddH2O.
b. 700 μL of 1× RBC lysis buffer is used for one sample (~20-40 μL of peripheral blood).
c. Store the RBC lysis buffer at room temperature (~20°C).
12. Prepare a cocktail of antibodies for FACS analysis.
d. Add 4 μL of human CD45 (hCD45)-APC and 2 μL of mouse CD45 (mCD45)-FITC to 94 μL PBS (per reaction/sample). mCD45 has a final concentration of 2 ng/mL; hCD45 has a final concentration of 1.25 ng/mL.
e. Keep on ice until further use.
Processing Peripheral Blood to Monitor Engraftment
Timing: 2 h
About one month after injection, patient cell engraftment in mouse peripheral blood is monitored by measuring the profile of hCD45+ and mCD45+ cells using FACS analysis.
Note: The MiniCollect tube prevents coagulation of the blood as it contains EDTA.
Note: Tail vein blood collections are performed by experienced researchers.
14. Add 1 mL of PBS to the MiniCollect tube and then transfer the contents to an Eppendorf tube.
15. Centrifuge the Eppendorf tube for 8 min at 250 × g at 4°C. Discard the supernatant.
16. Add 700 μL of 1× RBC lysis buffer, gently mixing each tube immediately (see Troubleshooting).
17. Incubate in the dark at ~20°C for 15 min.
Harvesting Bone Marrow from AML Patient-Derived Xenograft Models
Timing: 3-6 h
Effective collection of bone marrow is an integral part of the process of PDX mouse model establishment and use. This step ensures maximum bone marrow collection from a PDX mouse by harvesting the spinal cord and femurs/tibias.
CRITICAL: To preserve tissue cell viability, it is important to collect and process the tissue samples within 6 h after surgical resection.
Use scissors and forceps with curved ends.
27. Upon confirmation of patient cell engraftment in peripheral blood, euthanize the mouse with carbon dioxide asphyxiation or an alternative method approved in accordance with institutional animal care and use guidelines.
28. Place the mouse onto a sterile surgical pad in a sterile hood.
29. Pin the mouse in a supine position to a dissection board by putting a pin through each of the four paws. Spray the entire body with 80% ethanol.
30. Open the lower abdominal cavity with sterile scissors and remove the surface muscles to find the pelvic-hip joint and tibia ankle joint (Figure 2).
31. Cut off the hind leg above the pelvic-hip joint and below the tibia ankle joint with sharp sterile scissors.
32. Remove the residual muscle tissues surrounding the femur and tibia with sterile forceps and scissors.
Note: Increased spleen size is an important sign of leukemia. Mice with significant engraftment of leukemic cells often have an enlarged spleen.
33. Femurs and tibias: Cut along the inner side of the bone up to the pelvic-hip joint until the incision cannot be continued, cutting as close to the bone as possible. Then cut along the top of the bone structures. Cut at the hip joint, and then cut the distal tibia ankle joint. Remove the bones, place them on a rediwipe, and remove as much residual muscle as possible. Cut the bony structure at the knee joint and place in PBS. Repeat for the other femur/tibia.
34. Spine: Turn the mouse over into the prone position, re-pin, and spray 80% ethanol on the back. Make an incision in the skin over the base of the spine and cut up to the head and down to the tail. Cut along the left side of the spinal cord until the cut cannot continue any further, then repeat along the right side. Cut underneath the spinal cord, removing as much tissue as possible. Cut at the top of the spinal cord and then at the bottom near the tail. Place the spinal cord on a rediwipe and remove as much residual muscle and connective tissue as possible, cutting in one motion alongside the bone, and then place in PBS.
35. In a biological safety cabinet, cut and wipe away all muscle tissue from the spinal cord and femurs on a rediwipe, and then place the tissue samples in a petri dish submerged in a small amount of PBS.
36. Pour a small amount of fresh PBS into a mortar and pestle.
37. Cut the spinal cord into small pieces and place them into the mortar.
Note: Place scissors and forceps on a petri dish lid to avoid contamination.
38. Crush the spinal cord, then place the femurs and tibias into the mortar and continue crushing until a fibrous texture is achieved.
39. Using a P1000 pipette, transfer 1 mL of PBS from the mortar into a 50 mL centrifuge tube through a 40 μm cell strainer to remove debris. Pipette PBS from the surface, trying not to pipette any bony fibers.
40. Pour some fresh PBS into the mortar and crush until the remains are white and fibrous.
41. Pipette PBS from the mortar into the tube using stripettes.
Note: Using stripette will speed up the process.
Note: If the filter becomes blocked, replace it with a new filter and pipette any remaining liquid from the old filter into the new one.
46. Spin all the tubes together at 250 × g for 5 min at 4°C.
47. Remove the supernatant carefully and resuspend the pellet in 1 mL of 1× RBC lysis buffer.
48. Incubate for 7 min at ~20°C, then inactivate the lysis buffer by topping up the tube to 50 mL with PBS.
49. Centrifuge at 250 × g for 5 min at 4°C.
50. Remove the supernatant and resuspend the pellet in 1 mL freezing media.
Note: Sterile freezing media is composed of 10% DMSO, 50% FBS and 40% RPMI-1640 medium.
51. Pipette 5 μL of the cell suspension into an Eppendorf tube for counting.
a. Dilute the 5 μL of cell suspension with trypan blue.
- Create a 1:100 dilution by adding 495 μL trypan blue to 5 μL of cell suspension.
52. Count cells using a hemocytometer by putting 10 μL of solution under the coverslip.
Note: Make sure to wipe the hemocytometer with ethanol and wipe it dry with a chemwipe.
53. After the cell count, determine the quantity of cells to freeze down for later use (see Troubleshooting).
Note: Do not put more than 10⁷ cells per cryovial, to avoid freeze-thaw cycles and reduced cell viability.
54. Transfer the remaining cells into cryovials (1 mL/vial), freeze at −80°C for 18-24 h in a Nalgene Cryo 1°C Freezing Container, and then transfer to liquid nitrogen on the following day.
CRITICAL: Be aware of the potential presence of human pathogens. Be sure to adhere to appropriate biosafety protocols when handling human tissues. In addition, appropriate antibiotics should be added to the media to prevent bacterial contamination during tissue processing.
EXPECTED OUTCOMES
This protocol describes the establishment of AML patient-derived xenograft (PDX) mouse models for in vivo preclinical drug testing. Using this protocol, our pilot study has achieved a 60% engraftment rate in NSG mice (Table 1). There are three major steps optimized to improve in vivo engraftment of primary AML patient cells. This includes methods for increasing the viability of patient cells in the thawing process, monitoring in vivo engraftment efficiently and maximizing bone marrow collection from PDX mice. AML patient cells often take 3-6 months to engraft depending on the number and quality of viable cells injected in NSG mice. An optimal thawing process determines the quantity and quality of patient cells and increases the success rate of PDX model establishment.
LIMITATIONS
The major limitation of this protocol is that about 40% of primary AML patient samples fail to engraft in NSG mice. This is not surprising as AML has frequently been reported to be a difficult-to-engraft model. In particular, the use of frozen patient cells reduces cell viability and limits availability of viable cells to be transplanted into mice. Fresh patient samples or increased viable cells will substantially improve the success rate of a PDX model in this protocol.
TROUBLESHOOTING Problem
There is low viability of patient cells after thawing.
Potential Solution
This may happen due to the thawing conditions, as patient cells are very sensitive. To avoid this situation, we recommend using 20% FBS and keeping cells on ice during the whole procedure.
Problem
There are no hCD45 + peripheral blood cells observed in FACS analysis.
Potential Solution
There are several potential reasons: 1) not enough primary antibodies are used in flow cytometry; 2) the incubation period with antibodies is not long enough; 3) gating is incorrect and voltages are inappropriate; and/or 4) there is a failure of patient cell engraftment.
Problem
There is red blood cell residue in the FACS samples.
Potential Solution
A major reason for this problem may be that the RBC lysis buffer is not working under optimal conditions. Ammonium chloride (NH4Cl) is the active agent in RBC lysis buffer, and sufficient NH4Cl is required to remove the RBCs. However, the number of red blood cells within blood samples may vary; thus, the volume of RBC lysis buffer may need to be further optimized, depending on the volume of blood collected.
Problem
There is low yield of harvested bone marrow cells.
Potential Solution
This may be due to several factors such as 1) not keeping samples on ice, 2) leaving the samples out for too long thus causing the low viability of PDX cells, and 3) not removing all muscles around the bone and joint prior to grinding, leading to cells being trapped between the muscles. It is therefore important to complete the harvesting procedure as quickly as possible and to sufficiently remove the surrounding tissues in order to increase the yield of harvested bone marrow cells.
Problem
There is a failure of patient cell engraftment in NSG mice.
Potential Solution
The number of viable patient cells injected is critical for achieving engraftment in NSG mice. A cell number below the threshold, e.g., 5 × 10⁵ primary patient cells per mouse, should be taken into account. If no mice engraft with AML patient cells, the number of cells injected will need to be increased. Failure may also be caused by technical factors during injection, such as the speed and force of injection. Furthermore, cells may be lost through leakage during injection, or because the needle is too superficial at the site of injection, leaving a subcutaneous bubble.
RESOURCE AVAILABILITY Lead Contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Jenny Y. Wang, Ph.D., jenny.wang@unsw.edu.au.
Materials Availability
No new mouse lines were generated in this study. NSG mice were obtained from Australian BioResources.
Data and Code Availability
No new data or code were generated for this study.
Vehicle Face Re-identification Based on Nonnegative Matrix Factorization with Time Difference Constraint
Light intensity variation is one of the key factors affecting the accuracy of vehicle face re-identification, so in order to improve the robustness of vehicle face features to light intensity variation, a Nonnegative Matrix Factorization model with a constraint based on the image acquisition time difference is proposed. First, the original feature vectors of all pairs of positive training samples are placed in two original feature matrices, where the same columns of the two matrices represent the same vehicle. Then, the new features obtained after decomposition are divided proportionally into stable and variable features, where constraints of intra-class similarity and inter-class difference are imposed on the stable features, and the constraint of image acquisition time difference is imposed on the variable features. Finally, vehicle face matching is achieved by calculating the cosine distance of the stable features. Experimental results show that the average False Reject Rate and average False Accept Rate of the proposed algorithm can be reduced to 0.14 and 0.11 respectively on five different datasets, and even under large differences in light intensity the vehicle face image can still be recognized accurately, which verifies that the extracted features have good robustness to light variation.
Introduction
Traditionally, vehicles with fake plates are mainly detected by manually viewing videos; however, with the rapid increase in the number of vehicles, manual detection faces great difficulties due to the massive amount of video data. Therefore, it is of great significance to propose an automatic and effective vehicle re-identification algorithm. For vehicle re-identification, the biggest challenge is that, affected by differences in light intensity, there may be large differences between captured images of the same vehicle, as shown in Fig. 1, which brings great difficulty to re-identification; it follows that obtaining stable and effective vehicle face features under various lighting conditions is critical [1]. The remainder of this paper is organized as follows. Section 2 addresses related work. The proposed NMF model for vehicle face recognition is presented in Section 3. In Section 4, a projected gradient algorithm is used to solve the objective function of the proposed NMF model, and the matching method for vehicle face features is given in Section 5. In Section 6, the proposed algorithm is shown to be effective through experiments. Finally, the conclusion is drawn in Section 7.
Related Work
Nowadays, existing vehicle recognition methods mainly fall into two categories: one is based on hand-crafted features, the other on features obtained automatically through deep learning. Hand-crafted features can be divided into low-level and high-level features, where the low-level image features include color [2][3], edge [4], texture [5], and shape [6] features, et al.; scale key-point features [7][8] and 3D model features [9][10][11][12] can be considered high-level features. However, hand-crafted features depend heavily on human experience, and deep image information is not easily mined, so their effectiveness is hard to ensure. Therefore, deep learning based vehicle recognition algorithms have received more attention in recent years, including traditional deep learning models such as the Convolutional Neural Network [13][14][15], Deep Belief Network [16][17], transfer learning [18][19][20], and Restricted Boltzmann Machine [21][22][23], and improved models such as Conv5 [24], the Teacher-Student Network [25], the Parsing-based View-aware Embedding Network [26], the Semantics-guided Part Attention Network [27], models fused from multiple networks [28], and reconstruction-based networks [29], et al. For the supervised vehicle classification problem, these deep learning methods have achieved good results, but for the vehicle face matching problem, where each vehicle is captured only a very limited number of times and the number of training samples is small, these models do not generalize well. Therefore, with a limited number of vehicle face samples, it is very meaningful to propose a vehicle re-identification algorithm with good robustness and universality.
Nonnegative Matrix Factorization (NMF) can obtain effective basis feature images for image classification, and it has also achieved good results in vehicle face recognition in recent years [1,30]. Therefore, in this paper we propose a new vehicle face re-identification method based on improved NMF, which takes into account the image differences caused by the capture times. In the proposed algorithm, variable features, which are easily affected by light variation, and stable features, which are not, are obtained after model training; the stable features are then used to judge whether two vehicle face images match.
Proposed Model
After the vehicle image is captured by the surveillance cameras on the highway, in order to remove the useless information in the image and obtain effective vehicle face features, it is necessary first to segment the vehicle face region in the captured image. Yolo models have proven very effective in target detection, and since the Yolov5 model has advantages such as a small network structure and fast processing speed, it is selected to segment the vehicle face region in the proposed algorithm, where the code of the Yolov5 model and the weight files can be downloaded from https://github.com/ultralytics/yolov5 and https://github.com/ultralytics/yolov5/releases/tag/v5.0 respectively; the segmentation results are shown in Fig. 2. As we know, obtaining effective feature basis vectors through dimensionality reduction is very important for object recognition, where common dimensionality reduction methods include principal component analysis (PCA), linear discriminant analysis (LDA), and locality preserving projection (LPP), et al. From the principles of these methods, the elements in the feature basis vectors and coefficient vectors can be either positive or negative, and negative elements are reasonable in terms of mathematical operations. However, for image processing, negative elements are difficult to explain reasonably; for example, the pixels in the basis images and the weights can both not be negative. Therefore, we use the NMF model to ensure the non-negativity of the decomposed matrices, where the original NMF model is shown as (1): F ≈ UV, with F of size m × n, U of size m × r, V of size r × n, and U_ik ≥ 0, V_kj ≥ 0, where the columns of F, U and V represent the original feature vectors, basis vectors and coefficient vectors respectively, and each coefficient vector is usually regarded as the new feature vector [31].
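A minimal sketch of the plain NMF model in (1), using the standard Lee-Seung multiplicative updates; this illustrates only the baseline decomposition, not the constrained model proposed below, and all names are ours:

```python
import numpy as np

def nmf(F, r, iters=200, eps=1e-9, seed=0):
    """Factor a nonnegative matrix F (m x n) into U (m x r) and V (r x n).

    Lee-Seung multiplicative updates keep all entries nonnegative.
    """
    rng = np.random.default_rng(seed)
    m, n = F.shape
    U = rng.random((m, r))
    V = rng.random((r, n))
    for _ in range(iters):
        V *= (U.T @ F) / (U.T @ U @ V + eps)  # update coefficient vectors
        U *= (F @ V.T) / (U @ V @ V.T + eps)  # update basis vectors
    return U, V

# Example: decompose 100 random "images" of 60 pixels into 10 basis vectors
F = np.abs(np.random.default_rng(1).random((60, 100)))
U, V = nmf(F, r=10)
print(np.linalg.norm(F - U @ V))  # reconstruction error
```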
In the proposed algorithm, two images which represent the same vehicle are treated as a pair of training samples, and all pairs of training samples are placed in two original feature matrices, F1 and F2, where the column vectors at the same position in F1 and F2 represent the same vehicle. The two matrices are decomposed with a shared basis U, i.e., F1 ≈ UV1 and F2 ≈ UV2, and the whole decomposition error can be obtained by fusing the decomposition errors of F1 and F2 as shown in (2): J_err = ||F1 − UV1||² + ||F2 − UV2||², where the smaller the decomposition error, the more accurate the vehicle face feature. From the perspective of feature stability, each vehicle face image can be considered to contain two types of features, as shown in Fig. 3. One is the stable features, which are robust to illumination variation, such as the shape of the vehicle windows and grille; the other is the variable features, such as vehicle color and vehicle light brightness, which change with the light intensity. From the above analysis, it is very important for vehicle re-identification to distinguish stable features from variable features in the vehicle face image. Therefore, in addition to ensuring the non-negativity of the decomposition results, some constraints conducive to accurate identification should be imposed on the stable and variable features, as follows: 1) The orthogonal constraint of stable features. After matrix factorization, the first k dimensions of the jth column of Vi can be regarded as the stable feature of the jth vehicle. The stable feature of a vehicle face image should have the following characteristics: even if there is a large illumination difference between two captures of the same vehicle, the stable features of the two captured images should remain strongly similar; conversely, the stable features of different vehicles should differ from each other, i.e., they should be as orthogonal as possible, as shown in (5), where m is the number of columns in Fi, i.e., the number of pairs of training samples. From the above analysis, the function measuring the orthogonality of stable features, (8), can be obtained.
2) The similarity constraint of weighted variable features. Since the feature vector Vi is composed of stable features and variable features after dimensionality reduction, the last r − k dimensions of Vi can be regarded as the variable features, whose solution is shown in (9). The light intensity changes continuously with time, so the larger the interval between the times at which two images are captured, the more uncertain the difference between their variable features will be; conversely, if the time interval is small, the variable features will be relatively similar. Therefore, if the interval between captures of the same vehicle is large, the negative effect of variable features on re-identification will be obvious. In order to reduce the impact of variable features, we assign different weights to the variable features according to the time difference of image acquisition, as shown in (10), where the capture time is obtained from the surveillance camera; the weighted variable features are given in (11), and the differences between the variable features of all pairs of vehicle face images are obtained through (12).
When the capture times of two images which represent the same vehicle are close enough, the variable features of the two images should be similar, i.e., the time-difference weight Tj approaches 1; on the contrary, if the difference between the capture times is large, the similarity between the variable features of the two images is uncertain, i.e., Tj approaches 0. From the above analysis, the greater the difference measure function J_var, the more helpful it is for re-identification. In summary, the objective function of the proposed model, shown in (15), combines the decomposition error J_err, the orthogonality measure of the stable features and the weighted difference measure J_var of the variable features, balanced by the coefficients α and β, and the optimal parameters U*, V1*, V2* are obtained by (16).
Finally, we optimize the parameters U, V1 and V2 according to their iterative rules, where the optimization process is as follows:
Step 1: Given the training data F1 and F2, the balance coefficients α and β, the error threshold ξ, and the maximum number of iterations N_max;
Step 2: Initialize the parameters U0, V1_0, V2_0 and set the iteration counter t = 0;
Step 3: Set t = t + 1, and update the parameters U, V1 and V2 according to (23), (24) and (25);
Step 4: If the decomposition error falls below ξ or t reaches N_max, go to Step 5; else, go to Step 3;
Step 5: Output the optimal parameters U*, V1*, V2*.
Feature Matching Based on Cosine Distance
When judging whether two images represent the same vehicle, we measure the similarity of the two vehicle face features using the cosine distance of their stable features, as shown in (27): d = (s1 · s2) / (||s1|| ||s2||), where s1 and s2 are the stable features of the two images. If d > η, the two images are considered to represent the same vehicle; otherwise, they represent different vehicles, where η is the similarity threshold.
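A minimal sketch of this matching step, assuming the stable features have already been extracted; the threshold 0.92 is the value selected in the parameter experiments below, and the function name is ours:

```python
import numpy as np

ETA = 0.92  # similarity threshold selected in the experiments

def same_vehicle(s1, s2, eta=ETA):
    """Return True if the cosine similarity of two stable features exceeds eta."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    d = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return d > eta

# Illustrative stable feature vectors (first k dimensions of coefficient vectors)
a = np.array([0.9, 0.1, 0.4, 0.0])
b = np.array([0.8, 0.2, 0.5, 0.1])
print(same_vehicle(a, b))
```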
1) "BITVehicle" dataset. There are 9850 vehicle images in the "BITVehicle" dataset, but not all vehicles have been captured more than twice, therefore, we selected 1500 pairs of vehicle images as positive samples from the dataset, where each pair of images represent the same vehicle; at the same time, we selected 2000 pairs of vehicle images as negative samples, where each pair of images represent the different vehicle. Some pairs of positive and negative samples are shown in Fig. 4. In addition, in the experiment for the algorithms based on NMF, we use the PC machine whose configuration is Intel i5-10300H CPU, 16G RAM and Matlab 2017b; and for the algorithms based on deep learning, we use the server whose configuration is 16G RAM, two Geforce 1080Ti GPUs, and Tensorflow 1.0.
Parameter Setting
In the proposed algorithm, some parameters need to be set appropriately, including the dimension m of the original feature F, the number of training samples n, the dimension of the coefficient vector r, the dimension of the stable feature k, the balance factors α and β, and the similarity threshold η; some of these parameters are set according to experience, and the others are obtained through experiment. 1) Among the above parameters, m, n and r can be set according to experience. The vehicle face image is resized to 160 × 120 × 3 after normalization, and the original feature vector F is obtained by stacking all columns, i.e., m = 57600. In addition, n and r are set to 3000 and 500 respectively.
2) From clustering theory in pattern recognition, if the parameters k, α, β and η are appropriate, the positive samples should be very similar, while there should be large differences between the negative samples. Therefore, we propose a measurement function for the clustering property, as shown in (28). With the parameter values obtained from the experiment, the best clustering property can be achieved.
In addition, the similarity threshold η needs to be optimized to achieve the best recognition performance; the goal is to make the False Reject Rate (FRR) and False Accept Rate (FAR) as low as possible, where the recognition performance is measured by (29). The curve of P(η) for different η is shown in Fig. 9, and it can be seen that the best recognition performance of the proposed algorithm is achieved when η = 0.92.
Comparison of different algorithms
The algorithms selected for the comparison experiment fall into two categories: some are similar to the proposed algorithm [1,30] and are based on the NMF model, while the others are based on deep neural network models, i.e., AlexNet, Resnet50 and VGG19. Before the experiment, the training dataset is constructed as shown in Table 2. Except for the images in the training dataset, the other pairs of samples are used as testing samples. Since the recognition principles of the two types of comparison algorithms are different, their training methods also differ. For the algorithms based on the NMF model, the same training method as the proposed algorithm is used; for the algorithms based on deep learning, since the number of vehicle face images per vehicle is very small, traditional training and classification methods may not be suitable. Therefore, their training methods were modified in the experiment: each pair of three-channel samples is superimposed to form a six-channel sample, so that all pairs of positive, negative and testing samples form new positive, negative and testing samples respectively, and the filters of the first layer of these deep neural networks are adjusted accordingly.
In addition, the parameter T_j in the proposed algorithm is derived from the difference between the acquisition times of the two images; however, apart from the self-built dataset, the other datasets carry no acquisition time information. Therefore, during training, T_j is set differently depending on the source dataset: if the training samples come from the self-built dataset, T_j is obtained according to (10); if they come from the other datasets, T_j is set to 1 when both images are captured in the day or both in the night, and to 0 when one image is captured in the day and the other in the night.
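The fallback rule for datasets without timestamps can be stated as a one-line helper; the "day"/"night" string encoding is an assumption made for illustration.

```python
def time_factor(period_a, period_b):
    """T_j for datasets without acquisition times: 1 if both images come
    from the same period (both "day" or both "night"), else 0."""
    return 1 if period_a == period_b else 0
```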
After the models are trained, all the samples from the different datasets are tested; the results are shown in Table 3, where FRR and FAR denote the false reject rate and the false accept rate, respectively. Table 3 shows that the proposed algorithm is slightly better than the other algorithms in terms of FRR and FAR. Since many sample pairs in the self-built dataset are captured under large illumination differences, the performance of the comparison algorithms drops significantly there, but the proposed algorithm still achieves good recognition results. The reason is that during training the proposed algorithm ignores the variable features affected by light intensity and pays more attention to the stable features of the vehicle face images, which gives it good robustness to light variation. This robustness is further verified by the results in Table 4.
Conclusion
To improve the robustness of vehicle face features to the light variation present when images are captured, an NMF model with a time difference constraint is proposed. The innovation of this work is that stable features which are less affected by light intensity variation can be obtained after model training, i.e., even if the same vehicle is captured twice under different light intensities, we can still conclude accurately that the two images show the same vehicle. Although good recognition results have been achieved, some problems remain to be solved; for example, when there is a large difference in capture angle between two vehicle face images of the same vehicle, the recognition accuracy is still not very high. The universality of the proposed algorithm therefore needs to be further improved.
Work Experience and Achievement: Their Influence on Lecturers’ Career
This study aims to analyze the effect of work experience and work performance on lecturer careers. It uses a quantitative method examining three variables: two independent variables, work experience (X1) and work performance (X2), and one dependent variable, the lecturer's career (Y). The research was conducted as quantitative research using a survey method with a path analysis approach, with a total population of 160 lecturers and a sample of 60 people. The instruments were checked with validity and reliability tests, and the analysis used path analysis and hypothesis testing. The normality calculations show that the work experience data (X1), the work performance data (X2), and the lecturer career data (Y) are all normally distributed. The linearity test shows a linear relationship of the work experience variable (X1) and the work performance variable (X2) with the lecturer career variable (Y). The study concludes that work experience affects the careers of lecturers, that work performance affects the careers of lecturers, and that work experience affects the work performance of lecturers.
INTRODUCTION
Lecturers, as stated in the law on higher education, are professional educators and scientists whose main task is to transform, develop and disseminate science, technology, and art through education, research, and community service (Mulyani, 2017; Murcahyanto et al., 2018; RI, 2019). The development of lecturers is a core part of institutional development and includes aspects of personal development (Utami, 2017). Performance appraisal needs to be carried out formally and rationally, applied objectively, and documented systematically. Lecturers' work achievements are not obtained effortlessly but through hard work and a long process, so they deserve recognition, because achievement adds value beyond the work standard (Utami, 2017).
Information and work performance data are obtained through a work performance appraisal process called performance appraisal. A lecturer's work performance is thus the result of work that meets, or even exceeds, the targets and standards assessed by superiors (Lubis & Siregar, 2021). An award for lecturer achievement can therefore be measured by the following indicators: (1) success in fostering outstanding students, (2) producing innovative works, (3) writing books, (4) conducting research, (5) receiving awards, and (6) creating works of art (Khairiah et al., 2021; Sutrisno et al., 2022).
Work experience for a lecturer is closely related to the lecturer's career. From the literature, the researcher derived several theoretical premises: a career consists of the activities and work experiences accumulated over one's life, while the basic capital of career development is achievement. Satisfactory work performance is one of the considerations of superiors in promoting someone to a certain position. Human resource development practitioners rely on work experience and place particular emphasis on experiential learning to improve work performance; experiential learning is a development technique that focuses on the learner integrating his or her experience into the learning process. Experience is thus related to work performance, because the richer a person's learning experience, the higher his or her achievement, including at work. From this theory, the researcher wants to determine and describe the effect of work experience on lecturers' careers, the effect of work performance on lecturers' careers, and the effect of work experience on lecturers' work performance. Similar studies have been carried out, including (Kristola & Adnyani, 2014), who concluded that (1) work experience has a positive and significant effect on the work performance of Denpasar Agricultural Quarantine Center employees, (2) work experience has a positive and significant impact on their career development, and (3) work performance has a positive and significant effect on their career development. Research by (Saraswati & Dewi, 2017) concluded that work experience, education and personality have a positive effect on employee career development at the Nikki Hotel in Denpasar, and (Indrawan, 2019) concluded that work ethics, work experience and work culture have a positive effect on the work performance of South Binjai District employees.
Based on these previous research results, it can be concluded that the career development, work experience, and work performance of lecturers are relevant variables for further research at universities. The explanation above also strengthens the theoretical and practical reasoning that work experience and work performance affect careers, so the researchers are interested in analyzing the effect of work experience and work performance on lecturer careers. Lecturers' careers are thus a feasible and urgent research object, presumed to be influenced by work experience and work performance factors. The results of this study are expected to be useful for interested parties: lecturers may come to realize the importance of work experience and work performance in their career development, and leaders may come to realize the importance of increasing knowledge and skills for the career development of lecturers in their respective universities.
METHODS
This study aims to obtain an overview of the effect of work experience and work performance on the careers of lecturers at Hamzanwadi University. Specifically, its purpose is to determine (a) the direct influence of work experience on lecturers' careers, (b) the direct influence of work performance on lecturers' careers, and (c) the direct influence of work experience on lecturers' work performance. The research was conducted at the Hamzanwadi University Pancor campus in two stages of instrument distribution: phase 1 tested the research instrument, and phase 2 was the actual study. The subjects were permanent and non-permanent lecturers at Hamzanwadi University. The study is quantitative research with a path analysis approach, using the field method through survey activities (Sugiyono, 2019); based on the data obtained, this method is expected to explain the effect of work experience and work performance on the career development of lecturers. The population consisted of all permanent and temporary employees and lecturers for the 2020/2021 academic year, totaling 160 people: 47 employees and 112 permanent and non-permanent lecturers. The sample is part of the total population to be studied; of 87 candidates, 60 permanent lecturers from the foundation who hold an ID number were selected, so that the sampled lecturers have the data relevant to this research. Thus, the sample of this research is 60 lecturers. For testing the instrument, 30 lecturers were drawn from the part of the population not included in the sample.
The data analysis techniques used in this research are descriptive and inferential analysis. Path analysis was used to study the direct and indirect effects among the hypothesized variables. Based on the theoretical framework, three variables are studied: work experience, work performance and career. Before testing the hypotheses, the requirements for path analysis are checked first. There are four requirements: the data for each variable are interval data; the relationship between any two variables is linear and additive; the relationship between any two variables is recursive (one way); and the residual variables are not correlated with each other or with the variables in the system. Thus, the tests performed before hypothesis testing are the normality test and the linearity test, followed by path analysis and statistical hypothesis testing.
Description of Research Data
The first step in analyzing the research data is to describe it. The data description provides an overview of the condition of each research variable: work experience, work performance and lecturer career. The endogenous variable in this study is the lecturer's career, while the exogenous variables are lecturer work experience and lecturer work achievement.
Lecturer Career
Lecturer careers are measured using 43 statement items, so the theoretical score range of 43 to 172 spans 129.
Table 1. Lecturer Career
Based on the results of data collection, the lowest score was 69 and the highest was 166, so the empirical score range of 69 to 166 spans 97. The average career score is 111.20, with a median of 106.77 and a mode of 100.7. Relative to the theoretical maximum score of 172, the average lecturer career score is 64.65% of the maximum. The standard deviation of the career scores is 24.92 and the variance is 62.10. The relatively large standard deviation indicates that lecturers' careers are rather heterogeneous.
Furthermore, if the career scores are grouped into a low category of 69-96, a medium category of 97-138, and a high category of 139-154, the lecturers' career condition can be stated as follows: the average career score of 111.20 falls in the medium category, while 10 lecturers showed careers in the high category. Most lecturers thus have careers in the medium category. This is influenced by age and length of service; as (Daulay & Handayani, 2021; Hutabarat & Gurning, 2017) note, a career can be viewed as the sequence of positions occupied by a person during his lifetime and, from another perspective, consists of the changes in values, attitudes, and motivation that occur as a person grows older.
Work Experience
The work experience variable was measured using 41 statement items, so the theoretical score range of 41 to 164 spans 123. Based on the data collected, the lowest score is 75 and the highest is 154, so the empirical score range of 75 to 154 spans 79. The average work experience score is 113.10, with a median of 109.87 and a mode of 106.21. Relative to the theoretical maximum score of 164, the average work experience score is about 70% of the maximum. The standard deviation of the work experience scores is 17.83, and the variance is 317.73. Furthermore, if the work experience scores are grouped into a low category of 78-98, a medium category of 99-134, and a high category of 135-158, the condition of the work experience of the Hamzanwadi University lecturers can be stated as follows: the average score of 113.10 falls in the medium category, while 11 lecturers showed work experience in the high category. Most lecturers thus have work experience in the medium category, influenced by conditional factors. This accords with the view (Gutek, 2022; Soriano & Castrogiovanni, 2012) that experience is the knowledge and skills possessed and the events or series of events that one follows or passes through; every experience affects the attitudes that determine the quality of all subsequent experiences, and each experience to a certain degree shapes the objective conditions under which later experiences are gained.
Achievement
The work performance variable was measured using 45 statement items, so the theoretical score range of 45 to 180 spans 135.
Table 3. Achievement
Based on the results of data collection, the lowest score is 43 and the highest is 145, so the empirical score range of 43 to 145 spans 102. The average work performance score is 91.50, with a median of 86.43 and a mode of 77.5. Relative to the theoretical maximum score of 180, the average work performance score is 51.03% of the maximum. The standard deviation of the work performance scores is 27.39, and the variance is 750.26. Furthermore, if the work performance scores are grouped into a low category of 43-87, a medium category of 88-132, and a high category of 133-147, the condition of the lecturers' work performance can be stated as follows: the average score of 91.50 falls in the medium category, so most lecturers have moderate work performance. One's work achievements are not obtained effortlessly but through hard work and a long process, so they deserve recognition, because achievement adds value beyond the work standard (Bernardin, 2016; Bratton et al., 2021; Hecklau et al., 2016). Path analysis requires that the analyzed data meet certain statistical tests; therefore, before conducting the path analysis, several statistical requirement tests are carried out.
Normality Test
The statistical test used for normality in this study is the chi-square (X²) test. The hypotheses for the normality test are: H0, the data come from a normally distributed population; H1, the data come from a population that is not normally distributed. The criterion is: if X² count ≤ X² table at the significance level α = 0.05, the data are normally distributed; conversely, if X² count > X² table at α = 0.05, the data are not normally distributed.
Normality Test of the Work Experience Distribution
After the normality test was carried out, the calculated X² was 10.746. The X² table value at the 95% confidence level (α = 0.05) with df = 7 - 1 = 6 is 12.592. Since X² count < X² table, i.e., 10.746 < 12.592, the distribution of the work experience data is normal.
Normality Test of the Work Performance Distribution
After the normality test, the calculated X² is 7.122. The X² table value at the 95% confidence level (α = 0.05) with df = 7 - 1 = 6 is 12.592. Since X² count < X² table, i.e., 7.122 < 12.592, the distribution of the work performance data is declared normal.
Normality Test of Career Variable Distribution
After the normality test based on the X² distribution table, the calculated X² is 7.578. The X² table value at the 95% confidence level (α = 0.05) with df = 7 - 1 = 6 is 12.592. Since X² count < X² table, i.e., 7.578 < 12.592, the distribution of the lecturer career data is declared normal.
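The three tests above follow the same recipe, sketched below with SciPy; the seven equal-width bins and the df = n_bins - 1 convention mirror the calculations in this section (the critical value χ²(6, 0.05) = 12.592), though the exact binning used by the authors is not stated.

```python
import numpy as np
from scipy import stats

def chi2_normality(scores, n_bins=7, alpha=0.05):
    """Chi-square goodness-of-fit check against a normal distribution:
    bin the scores, compare observed and expected class frequencies, and
    test X^2 against the critical value with df = n_bins - 1."""
    scores = np.asarray(scores, dtype=float)
    mu, sigma = scores.mean(), scores.std(ddof=1)
    edges = np.linspace(scores.min(), scores.max(), n_bins + 1)
    observed, _ = np.histogram(scores, bins=edges)
    cdf = stats.norm.cdf(edges, loc=mu, scale=sigma)
    expected = np.diff(cdf) * len(scores)
    chi2 = float(((observed - expected) ** 2 / expected).sum())
    critical = stats.chi2.ppf(1 - alpha, df=n_bins - 1)
    return chi2, critical, chi2 <= critical   # True -> treated as normal
```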
Linearity Test of Career on Work Experience
The estimates of the simple linear regression model of career on work experience were calculated, and the analysis of variance (ANOVA) results for this model show that F count = 1.618 is smaller than F table = 2.025 at the significance level α = 0.05, indicating that the relationship between career and work experience is linear.
Linearity Test of Career on Work Performance
The analysis of variance (ANOVA) results for this model show that, based on the linearity test of career on work performance, F count = 0.783 is smaller than F table = 2.408 at α = 0.05, indicating that the relationship between career and work performance is linear.
Linearity Test of Work Performance on Work Experience
The analysis of variance (ANOVA) results for this model show that, based on the linearity test of work performance on work experience, F count = 0.665 is smaller than F table = 2.000 at α = 0.05, indicating that the relationship between work performance and work experience is linear.
Hypothesis 1.
Work experience has a direct positive effect on work performance. The tested hypotheses are H0: P21 ≤ 0 versus H1: P21 > 0. Based on the calculations, the path coefficient is P21 = 0.963, with t count = 27.163 against t table = 1.671 at α = 0.05 (or 2.390 at α = 0.01). Because t count > t table, H0: P21 ≤ 0 is rejected and H1: P21 > 0 is accepted, and the path coefficient P21 = 0.963 is significant at both α = 0.05 and α = 0.01. Accepting H1 means that work experience has a direct effect on work performance.

Hypothesis 2. Work experience has a direct positive effect on careers, with hypotheses H0: P31 ≤ 0 versus H1: P31 > 0. Based on the calculations, the path coefficient is P31 = 0.899, with t count = 35.661 against t table = 1.684 at α = 0.05 (or 2.423 at α = 0.01). Because t count > t table, H0: P31 ≤ 0 is rejected and H1: P31 > 0 is accepted, and the path coefficient P31 = 0.899 is significant at both α = 0.01 and α = 0.05; work experience therefore has a direct positive effect on careers.

Hypothesis 3. Work performance has a direct positive effect on careers, with hypotheses H0: P32 ≤ 0 versus H1: P32 > 0. Based on the calculations, the path coefficient is P32 = 0.708, with t count = 10.806 against t table = 1.684 at α = 0.05 (or 2.423 at α = 0.01). Because t count > t table, H0: P32 ≤ 0 is rejected and H1: P32 > 0 is accepted, and the path coefficient P32 = 0.708 is significant at both α = 0.01 and α = 0.05. Accepting H1 means that work performance has a direct effect on careers.
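For the simple paths above, the significance test reduces to a t test on a correlation-type coefficient. The sketch below covers that bivariate case (e.g., P21); P31 and P32 in the full model would come from a multiple regression of career on both predictors. The function name is hypothetical.

```python
import numpy as np

def simple_path_test(x, y):
    """Standardized path coefficient between two variables (equal to the
    Pearson correlation in the bivariate case) and its t statistic
    t = r * sqrt((n - 2) / (1 - r^2)), to be compared with the t table."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = (x - x.mean()) / x.std(ddof=1)
    y = (y - y.mean()) / y.std(ddof=1)
    n = len(x)
    r = float(np.dot(x, y) / (n - 1))
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    return r, t
```

With n = 60 as in this study, the critical value at α = 0.05 (one-tailed, df = 58) is close to the t table values of 1.671-1.684 quoted above.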
DISCUSSION
Based on the final model of the positive direct influence of work experience and work performance on lecturer careers, the results of the study support the three hypotheses proposed regarding the effect of work experience and work performance on the careers of lecturers at Hamzanwadi University. Lecturers' careers are closely related to leadership, work experience, work performance, organizational climate, competition, social relations, finance, education, promotions, and even luck.
Work Experience has a Positive Direct Effect on Career
Work experience has a direct positive effect on careers: the path coefficient is 0.899, corresponding to a determination coefficient for the direct influence of 0.867. This shows that variation in work experience directly explains 86.7% of the variation in careers, i.e., 86.7% of the diversity in lecturers' careers is accounted for by variations in work experience such as meeting community needs, making meaningful contributions, being able to solve problems, and carrying out tasks well. This is supported by (Bernardin, 2016), who views career as a person's perception of the attitudes and behaviors associated with activities and work experiences throughout one's life, and by the research results of (Kristola & Adnyani, 2014; Saraswati & Dewi, 2017), which state that work experience has a positive and significant effect on employee career development.
Job Performance has a Direct Positive Effect on Career
Job performance has a direct positive effect on careers: the path coefficient is 0.708, corresponding to a determination coefficient for the direct influence of 0.738. This shows that variation in work performance directly explains 73.8% of the variation in careers, i.e., 73.8% of the diversity in lecturers' careers is accounted for by variations in academic achievement, student development, creating innovative works, writing books, writing articles, research, community service and receiving government awards for achievement. This finding is supported by (Bratton et al., 2021; Mathis et al., 2016), who state that work performance measurement is always used as a basis for personnel decisions, for example placing employees in positions in the organization or demoting them in the order of their work positions. It is also in line with the research results of (Kristola & Adnyani, 2014; Saraswati & Dewi, 2017), which state that work performance has a positive and significant effect on employee career development.
Work Experience has a Direct Positive Effect on Work Performance
Work experience has a direct positive effect on work performance: the path coefficient is 0.963, corresponding to a determination coefficient for the direct influence of 0.859. This shows that variation in work experience directly explains 85.9% of the variation in work performance, i.e., 85.9% of the diversity in lecturers' work performance is accounted for by variations in work experience such as meeting community needs, making meaningful contributions, being able to solve problems, and carrying out tasks well. These findings accord with the views of (Bratton et al., 2021; Stone et al., 2020) and are further strengthened by the research results of (Kristola & Adnyani, 2014), which state that work experience has a positive and significant effect on work performance.
Robustify Transformers with Robust Kernel Density Estimation
Recent advances in Transformer architecture have empowered its empirical success in various tasks across different domains. However, existing works mainly focus on improving standard accuracy and computational cost, without considering robustness to contaminated samples. Existing work [40] has shown that the self-attention mechanism, which is the center of the Transformer architecture, can be viewed as a non-parametric estimator based on the well-known kernel density estimation (KDE). This motivates us to leverage robust kernel density estimation (RKDE) in the self-attention mechanism, to alleviate the issue of data contamination by down-weighting the contribution of corrupted samples in the estimation process. The modified self-attention mechanism can be incorporated into different Transformer variants. Empirical results on language modeling and image classification tasks demonstrate the effectiveness of this approach.
Contribution Despite its appealing performance, the robustness of the conventional attention module remains an open question in the literature. In this paper, to robustify the attention mechanism and transformer models, we first revisit the interpretation of self-attention in the transformer as the Nadaraya-Watson (NW) estimator [37] in a non-parametric regression problem, following the recent work of [40]. In the context of the transformer, the NW estimator is constructed mainly from the kernel density estimators (KDE) of the keys and queries. However, the KDE is not robust to outliers [25], which leads to the robustness issue of the NW estimator and of the self-attention in the transformer when there are outliers in the data. To improve the robustness of the KDE, we first show that the KDE can be viewed as the optimal solution of a kernel regression problem in the reproducing kernel Hilbert space (RKHS). Then, to robustify the KDE, we only need to robustify the loss function of the kernel regression problem via a robust loss function, such as the well-known Huber loss [21]. The robust version of the KDE, named RKDE, can be obtained by minimizing the loss of the robust kernel regression problem and can be used to construct a novel robust attention for the transformer, which also alleviates the robustness issue of the transformer. In summary, our contribution is two-fold: • By connecting the dot-product self-attention mechanism in the transformer with the nonparametric kernel regression problem in the reproducing kernel Hilbert space (RKHS), we propose a novel robust transformer framework, named Transformer-RKDE, based on replacing the dot-product attention with an attention arising from the robust kernel density estimators (RKDE) associated with the robust kernel regression problem. Compared to the standard softmax transformer, Transformer-RKDE only requires computing an extra set of weights, which can be obtained by solving an iteratively re-weighted least-squares problem.
• Extensive experiments on both vision and language modeling tasks demonstrate that Transformer-RKDE has favorable performance under various attacks. Furthermore, the proposed Transformer-RKDE framework is flexible and can be incorporated into different Transformer variants.
Organization The paper is organized as follows. In Section 2, we provide background on the self-attention mechanism in the Transformer and its connection to the Nadaraya-Watson (NW) estimator in the nonparametric regression problem, which can be constructed via the kernel density estimator (KDE). In Section 3, we first connect the KDE to a kernel regression problem in the reproducing kernel Hilbert space (RKHS) and demonstrate that it is not robust to outliers. Then, we propose a robust version of the KDE based on robustifying the kernel regression loss and use the robust KDEs to construct the robust self-attention mechanism for the Transformer. We empirically validate the advantage of the proposed robust transformer, Transformer-RKDE, over the standard softmax transformer on both language modeling and image classification tasks in Section 4. Finally, we discuss related works in Section 5 and conclude the paper in Section 6.
Background: Self-attention Mechanism from a Non-parametric Regression Perspective

In this section, we first provide background on the self-attention mechanism in the transformer in Section 2.1. We then revisit the connection between self-attention and the Nadaraya-Watson estimator in a nonparametric regression problem in Section 2.2.
Self-attention Mechanism
Given an input sequence X = [x_1, ..., x_N]^T ∈ R^{N×D_x} of N feature vectors, the self-attention transforms it into another sequence H := [h_1, ..., h_N]^T ∈ R^{N×D_v} as follows:

h_i = Σ_j softmax(q_i^T k_j / √D) v_j,   (1)

where the scalar softmax(q_i^T k_j / √D) can be understood as the attention h_i pays to the input feature x_j. The vectors q_i, k_j, and v_j are the query, key, and value vectors, respectively, computed as

q_i = W_Q x_i,  k_j = W_K x_j,  v_j = W_V x_j,   (2)

where W_Q, W_K ∈ R^{D×D_x} and W_V ∈ R^{D_v×D_x} are the weight matrices. Equation (1) can be written in matrix form as

H = softmax(QK^T / √D) V,   (3)

where Q = [q_1, ..., q_N]^T, K = [k_1, ..., k_N]^T, V = [v_1, ..., v_N]^T, and the softmax function is applied to each row of the matrix QK^T / √D. Equation (3) is also called the "softmax attention". For each query vector q_i, i = 1, ..., N, an equivalent form of equation (3) for computing the output vector h_i is

h_i = Σ_j [exp(q_i^T k_j / √D) / Σ_{j'} exp(q_i^T k_{j'} / √D)] v_j.   (4)

In this paper, we call a transformer built with softmax attention the standard transformer, or simply transformer.
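A minimal NumPy sketch of equations (1)-(4); the weight shapes follow equation (2).

```python
import numpy as np

def softmax_attention(X, W_Q, W_K, W_V):
    """Standard dot-product self-attention: H = softmax(Q K^T / sqrt(D)) V,
    with Q = X W_Q^T, K = X W_K^T, V = X W_V^T as in equation (2)."""
    Q, K, V = X @ W_Q.T, X @ W_K.T, X @ W_V.T
    D = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(D)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)            # row-wise softmax
    return A @ V
```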
A Non-parametric Regression Perspective of Self-attention
We now review the connection between the self-attention mechanism in equation (4) and nonparametric regression, as discussed in the recent work [40]. To ease the presentation, we assume that the key vectors {k_j}_{j∈[N]} and the value vectors {v_j}_{j∈[N]} are collected from the following data generating process:

v_j = f(k_j) + ε_j,   (5)

where the ε_j are random noise vectors with E[ε] = 0 and f is the unknown function we want to estimate. We consider a random design setting where the key vectors {k_j}_{j∈[N]} are i.i.d. samples from the distribution p(k), and we use p(v, k) to denote the joint distribution of (v, k) defined by equation (5). Our target is to estimate f(q) for any new query q.
[37] provides a non-parametric approach to estimate the function f, known as the Nadaraya-Watson (NW) estimator, the kernel regression estimator or the local constant estimator. The main idea of the NW estimator is that

f(k) = E[v | k] = ∫ v · p(v | k) dv = ∫ v · p(v, k) / p(k) dv,   (6)

where the first equality comes from the fact that E[ε] = 0, the second from the definition of conditional expectation and the last from the definition of the conditional density. With equation (6), to estimate f we only need estimates of both the joint density p(v, k) and the marginal density p(k). One of the most popular approaches for density estimation is kernel density estimation (KDE) [49, 41], which requires a kernel k_σ with bandwidth parameter σ satisfying ∫ k_σ(x - x') dx = 1 for all x', and estimates the densities as

p̂(v, k) = (1/N) Σ_j k_σ([v, k] - [v_j, k_j]),   (7)
p̂(k) = (1/N) Σ_j k_σ(k - k_j),   (8)

where [v, k] denotes the concatenation of v and k; in particular, k_σ can be the isotropic Gaussian kernel. Given the kernel density estimators in equations (7) and (8), as well as the formulation in equation (6), we obtain the NW estimator of the function f:

f̂_σ(q) = Σ_j v_j k_σ(q - k_j) / Σ_j k_σ(q - k_j).   (9)

Now we show how the self-attention mechanism is related to the NW estimator. Note that, with the isotropic Gaussian kernel,

k_σ(q - k_j) ∝ exp(-(||q||² + ||k_j||²) / (2σ²)) exp(q^T k_j / σ²).   (10)

If the keys {k_j}_{j∈[N]} are normalized, the ||k_j||² terms are constant and the ||q||² factor cancels in the ratio of equation (9), so we can further simplify f̂_σ(q_i) to

f̂_σ(q_i) = Σ_j softmax(q_i^T k_j / σ²) v_j.   (11)

The assumption of normalized keys {k_j}_{j∈[N]} is mild, as in practice a normalization step on the keys is commonly used to stabilize the training of the transformer [52]. If we choose σ² = √D, where D is the dimension of q and k_j, then f̂_σ(q_i) = h_i. As a result, the self-attention mechanism in fact performs non-parametric regression with the NW estimator and an isotropic Gaussian kernel when the keys are normalized.
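The equivalence is easy to check numerically; the sketch below implements the NW estimator of equation (9) with the isotropic Gaussian kernel.

```python
import numpy as np

def nw_attention(Q, K, V, sigma2):
    """Nadaraya-Watson estimator of equation (9):
    f(q_i) = sum_j v_j k_sigma(q_i - k_j) / sum_j k_sigma(q_i - k_j)."""
    d2 = ((Q[:, None, :] - K[None, :, :]) ** 2).sum(-1)  # ||q_i - k_j||^2
    W = np.exp(-d2 / (2.0 * sigma2))                     # Gaussian kernel
    return (W @ V) / W.sum(axis=1, keepdims=True)
```

With unit-norm rows of K and sigma2 = sqrt(D), nw_attention(Q, K, V, np.sqrt(D)) coincides with the softmax attention output above up to numerical error.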
Robustify Transformer with Robust Kernel Density Estimation
As we have seen in Section 2, the self-attention mechanism can be interpreted as an NW estimator for the unknown function, where the density is estimated with KDE using the isotropic Gaussian kernel. In this section, we first re-interpret KDE as a regression in the Reproducing Kernel Hilbert Space (RKHS), which shows that the vanilla KDE is sensitive to data corruption. We then observe that a variant of kernel density estimation, termed the robust KDE, can down-weight the importance of potentially corrupted data and yield a robust density estimator. Based on the robust KDE, we derive the corresponding robust version of the NW estimator, show how to use it to replace the self-attention mechanism, and eventually arrive at more robust Transformer variants.

Figure 1: (a) the true density function, (b) KDE (equation (12)) and (c) RKDE (equation (13)) with Huber loss (equation (14)). We draw 1000 samples (gray circles) from a multivariate normal density and 100 outliers (red crosses) from a gamma distribution as the contaminating density. RKDE is less affected by outliers when computing self-attention as nonparametric regression.
KDE as a Regression Problem in RKHS
We start from the formal definition of the RKHS. The space H_k = {f | f : X → R} is called an RKHS associated with the kernel k : X × X → R if it is a Hilbert space with the following two properties: (1) k(x, ·) ∈ H_k for all x ∈ X; (2) the reproducing property: for all f ∈ H_k, f(x) = ⟨f, k(x, ·)⟩_{H_k}, where ⟨·, ·⟩_{H_k} denotes the RKHS inner product. With a slight abuse of notation, we write k_σ(x, x') = k_σ(x - x'). By the definition of the RKHS and the KDE estimator, we know

p̂_σ = (1/N) Σ_j k_σ(x_j, ·).

In fact, p̂_σ is the optimal solution of the following least-square regression problem in the RKHS:

p̂_σ = argmin_{p ∈ H_{k_σ}} (1/N) Σ_j ||k_σ(x_j, ·) - p||²_{H_{k_σ}}.   (12)

Note that in equation (12) we place the same weight 1/N on each of the errors ||k_σ(x_j, ·) - p||²_{H_{k_σ}}. However, when there are outliers (e.g., when there exists some j such that ||k_σ(x_j, ·) - p||_{H_{k_σ}} is large), the error on the outliers will dominate the total error and lead to a substantially worse estimate of the whole density. We illustrate this robustness issue of the KDE in Figure 1. Since the KDE is not robust to outliers, combining this viewpoint with the interpretation of self-attention as a Nadaraya-Watson estimator based on the KDE implies that the Transformer is also not robust when there are outliers in the data. The robustness issue of the Transformer has been studied in other recent works, such as [33, 34, 67]. Therefore, by connecting the Transformer to the kernel regression problem in equation (12), we also offer a new insight into the robustness issue of the Transformer.
Robust KDE
Motivated by robust regression [16], [25] proposed a robust version of KDE, replacing the least-square loss in equation (12) with a robust loss function ρ as follows:

p̂_robust = argmin_{p ∈ H_{k_σ}} (1/N) Σ_j ρ(||k_σ(x_j, ·) - p||_{H_{k_σ}}).   (13)

Examples of robust loss functions ρ include the Huber loss [21], Hampel loss [19], Welsch loss [64] and Tukey loss [16]. In this paper, we focus on the Huber loss, defined as

ρ(x) = x²/2 if x ≤ a, and ρ(x) = a·x - a²/2 if x > a,   (14)

where a is a constant. Kim et al. [25] show (Proposition 1) that the solution of this robust regression problem has the form

p̂_robust = Σ_j ω_j k_σ(x_j, ·),

where ω = (ω_1, ..., ω_N) ∈ Δ_N and ω_j ∝ ψ(||k_σ(x_j, ·) - p̂_robust||_{H_{k_σ}}) / ||k_σ(x_j, ·) - p̂_robust||_{H_{k_σ}}, with ψ = ρ'. Here Δ_N denotes the N-dimensional simplex.
Proof. The proof of Proposition 1 is mainly adapted from the proof in [25]; we provide it here for completeness. For any p ∈ H_{k_σ}, we denote J(p) = (1/N) Σ_j ρ(||k_σ(x_j, ·) - p||_{H_{k_σ}}). We then have the following lemma regarding the Gateaux differential of J and a necessary condition for p̂_robust to be the optimal solution of the robust loss objective in equation (13).
Lemma 1. Given the assumptions on the robust loss function ρ in Proposition 1, the Gateaux differential of J at p ∈ H_{k_σ} with increment h ∈ H_{k_σ}, defined as δJ(p; h), is δJ(p; h) = -⟨V(p), h⟩_{H_{k_σ}}, where the function V : H_{k_σ} → H_{k_σ} is defined as V(p) = (1/N) Σ_j φ(||k_σ(x_j, ·) - p||_{H_{k_σ}}) (k_σ(x_j, ·) - p), with φ(x) = ψ(x)/x. A necessary condition for p̂_robust is V(p̂_robust) = 0.
For the Huber loss function, we have ψ(x) = x for x ≤ a and ψ(x) = a for x > a, so that φ(x) = ψ(x)/x equals 1 for x ≤ a and a/x for x > a. Hence, when the error ||k_σ(x_j, ·) - p̂_robust||_{H_{k_σ}} exceeds the threshold a, the final estimator down-weights the importance of k_σ(x_j, ·). This is in sharp contrast with the standard KDE, which assigns uniform weights to all of the k_σ(x_j, ·). One additional issue is that the estimator in Proposition 1 is circularly defined: p̂_robust is defined via ω, and ω depends on p̂_robust.
To address this issue, [25] proposed to estimate ω with an iterative algorithm termed kernelized iteratively re-weighted least squares (KIRWLS). The algorithm starts from a randomly initialized ω^(0) ∈ Δ_N and performs the iterative updates

ω_j^(k+1) = φ(||k_σ(x_j, ·) - p̂^(k)||_{H_{k_σ}}) / Σ_l φ(||k_σ(x_l, ·) - p̂^(k)||_{H_{k_σ}}),  with p̂^(k) = Σ_j ω_j^(k) k_σ(x_j, ·).   (15)

Note that the optimal p̂_robust is a fixed point of these updates, and [25] shows that the algorithm converges under standard regularity conditions. Furthermore, one can compute the term ||k_σ(x_j, ·) - p̂^(k)||_{H_{k_σ}} directly via the reproducing property:

||k_σ(x_j, ·) - p̂^(k)||²_{H_{k_σ}} = k_σ(x_j, x_j) - 2 Σ_l ω_l^(k) k_σ(x_j, x_l) + Σ_{l,m} ω_l^(k) ω_m^(k) k_σ(x_l, x_m).

Therefore, the weights can be updated without mapping the data to the Hilbert space.
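A sketch of the KIRWLS weight updates, operating purely on the Gram matrix via the reproducing property as in the norm computation above; for the Huber loss, φ(x) = ψ(x)/x is 1 below the threshold a and a/x above it.

```python
import numpy as np

def kirwls_weights(K, a, n_iter=1, eps=1e-12):
    """KIRWLS iterations for the Huber-loss RKDE.

    K is the n x n Gram matrix K[i, j] = k_sigma(x_i, x_j). Each step
    computes d_j = ||k_sigma(x_j, .) - p_hat||_H from K alone, then
    down-weights points whose error exceeds the Huber threshold a.
    """
    n = K.shape[0]
    w = np.full(n, 1.0 / n)              # uniform init = vanilla KDE
    for _ in range(n_iter):
        d2 = np.diag(K) - 2.0 * (K @ w) + w @ K @ w
        d = np.sqrt(np.maximum(d2, 0.0))
        phi = np.where(d <= a, 1.0, a / np.maximum(d, eps))
        w = phi / phi.sum()
    return w
```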
Robust Self-Attention Mechanism
Now we describe the robust self-attention mechanism we use. We consider the density estimators of the joint and marginal distributions from the robust KDE:

p̂_robust(v, k) = Σ_j ω_joint,j k_σ([v, k] - [v_j, k_j]),  p̂_robust(k) = Σ_j ω_marginal,j k_σ(k - k_j).

With a computation similar to that of Section 2.2, the robust self-attention mechanism we use is defined as

ĥ_i = Σ_j ω_joint,j v_j k_σ(q_i - k_j) / Σ_j ω_marginal,j k_σ(q_i - k_j),   (16)

where ω_joint and ω_marginal are obtained via the KIRWLS procedure. For experiments related to language modeling, we can leverage information from the attention mask to initialize the weights on the unmasked part of the sequence. To speed up the computation of Transformer-RKDE, we use a single-step iteration of equation (15) to approximate the optimal set of weights; this strategy proved effective in the empirical evaluation on both image and text data. The procedure for computing the attention vector of Transformer-RKDE is given in Algorithm 1.
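Putting the pieces together, a sketch of the robust attention in equation (16), reusing `kirwls_weights` from the previous sketch; the single KIRWLS step matches the speed-up described above, while the exact masking and initialization details of Algorithm 1 are omitted here.

```python
import numpy as np

def gaussian_gram(X, sigma2):
    """Gram matrix of the isotropic Gaussian kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma2))

def robust_attention(Q, K, V, sigma2, a):
    """Transformer-RKDE attention (equation (16)): NW weights rescaled by
    RKDE weights from the keys (marginal) and key-value pairs (joint)."""
    w_marg = kirwls_weights(gaussian_gram(K, sigma2), a)   # defined above
    w_joint = kirwls_weights(
        gaussian_gram(np.concatenate([V, K], axis=-1), sigma2), a)
    kern = np.exp(-((Q[:, None, :] - K[None, :, :]) ** 2).sum(-1)
                  / (2.0 * sigma2))
    num = (kern * w_joint) @ V
    den = (kern * w_marg).sum(axis=1, keepdims=True)
    return num / den
```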
Experimental Results
In this section, we empirically validate the advantage of our proposed robust transformer (Transformer-RKDE) over the standard softmax transformer and its nonparametric regression variant (Transformer-KDE, equation (9)) on two large-scale datasets: language modeling on WikiText-103 [35] (Section 4.1) and image classification on ImageNet [50, 11] and ImageNet-C [20] (Section 4.2). Our experiments show that: (1) Transformer-RKDE reaches performance competitive with the baseline methods on a variety of tasks with different data modalities, without modifying the model architecture; (2) the advantage of Transformer-RKDE is more prominent when the text or image data are contaminated. All experiments are performed on NVIDIA A100 GPUs, and each experiment compares Transformer-RKDE with the other baselines under the same hyper-parameter configurations.
Robust Language Modeling
Dataset: WikiText-103 is a language modeling dataset containing tokens extracted from good and featured Wikipedia articles, making it suitable for models that can leverage long-term dependencies. It contains around 268K distinct words; its training set consists of about 28K articles with 103M tokens, corresponding to text blocks of about 3600 words. The validation and test sets consist of 60 articles each, with 218K and 246K tokens respectively. We follow the standard configurations in [35, 52] and split the training data into independent segments of L words. During evaluation, we process the text sequence with a sliding window of size L and feed it to the model with a batch size of 1. Only the last position of the sliding window is used for computing perplexity, except in the first segment, where all positions are evaluated as in [1, 52].
Implementation Details: We used the language models developed by [52] in our experiments. The dimensions of the key, value, and query are set to 128, and the training and evaluation context lengths are set to 256. For self-attention, we set the number of heads to 8, the dimension of the feed-forward layer to 2048, and the number of layers to 16. To avoid numerical instability, we apply the log-sum-exp trick when computing the attention probability vector of equation (9) through the Gaussian kernel. We apply similar tricks when computing the weights of the KIRWLS algorithm: we first obtain the weights in log space, then use the log-sum-exp trick to compute the robust self-attention of equation (16).
Results: In Table 1, we report the validation and test PPL of Transformer-RKDE versus the softmax transformer and its nonparametric regression variant. Given the derivation in equation (11), we would expect Transformer-KDE to perform similarly to the softmax transformer. Transformer-RKDE, meanwhile, improves on the baselines' PPL and NLL on both the validation and test sets.
The improvement is more pronounced when the dataset is under a word swap attack, which randomly replaces selected keywords of the input data with a generic token "AAA" during evaluation. Transformer-RKDE achieves much better results by down-weighting rare words, and is therefore more robust to this kind of attack. Our word swap implementation is based on the public TextAttack code by [36]; we use the greedy search method with the constraints on stop-word modification from the TextAttack library.
Image Classification under Adversarial Attack
Dataset: We use the full ImageNet dataset, which contains 1.28M training images and 50K validation images; the model learns to predict the class of the input image among 1000 categories. We report top-1 and top-5 accuracy on all experiments. For robustness to common image corruptions, we use ImageNet-C [20], which consists of 15 types of algorithmically generated corruptions at five levels of severity. ImageNet-C uses the mean corruption error (mCE) as its metric, where a smaller mCE means the model is more robust under corruptions.
Implementation Details: Our method uses the same training configurations as DeiT-Tiny [58]. Since none of the approaches modifies the model architecture, each employed model has 5.7M parameters. To evaluate adversarial robustness, we apply adversarial examples generated by the untargeted white-box attacks FGSM [18] (single-step) and PGD [32] (multi-step), as well as the score-based black-box attack SPSA [60]. The attacks are applied to the entire validation set of ImageNet. These attacks perturb the input image with a perturbation budget of 1/255 under the l∞ norm; the PGD attack uses 20 steps with step size α = 0.15.
Results:
We summarize the results in Table 2. On clean data, DeiT-RKDE can improve the performance of baseline DeiT and DeiT-KDE in both top-1 and top-5 classification accuracy.
Similar to the language modeling experiment, the advantage of DeiT-RKDE is more obvious under adversarial attacks and common image corruptions, which suggests that Transformer-RKDE can improve the baseline dot-product transformer over different data modalities. Furthermore, Figure 2 shows the relationship between accuracy versus perturbation budget using three attack methods.
DeiT-RKDE can improve the accuracy under different perturbation budget and exhibits greater advantage with higher perturbation strength.
Related Works
Robustness of Transformer: Vision Transformer (ViT) models [13, 58] have recently achieved exemplary performance on a variety of vision tasks and can be used as a strong alternative to CNNs.
To ensure their generalization ability on different datasets, many works [54, 42, 3] have studied the robustness of ViT under different types of attacks. [33] empirically shows that ViT is vulnerable to white-box adversarial attacks, but a simple ensemble defense can achieve unprecedented robustness without sacrificing clean accuracy. [34] performs a robustness analysis on the different building blocks of ViT and proposes position-aware attention scaling and patch-wise augmentation, which improve both the robustness and the accuracy of ViT models. More recently, [67] proposed fully attentional networks to improve self-attention and achieved state-of-the-art accuracy on corrupted images. However, these works focus on improving the architectural design of ViT for specific tasks and lack a general framework for improving the robustness of transformers. In addition, most recent works studying the robustness of transformers concentrate on vision-related tasks and do not generalize across data modalities.
Theoretical Frameworks of Attention Mechanisms: Attention mechanisms in transformers have been recently studied from different perspectives. [59] shows that attention can be derived from smoothing the inputs with appropriate kernels. [23,8,62] further linearize the softmax kernel in attention to attain a family of efficient transformers with both linear computational and memory complexity. These linear attentions are proven in [5] to be equivalent to a Petrov-Galerkin projection [48], thereby indicating that the softmax normalization in the dot-product attention is sufficient but not necessary. Other frameworks for analyzing transformers that use ordinary/partial differential equations include [31,51]. In addition, the Gaussian mixture model and graph-structured learning have been utilized to study attentions and transformers [55,17,66,63,53,26,39,38].
Conclusion and Future Works
In this paper, via the connection between the dot-product self-attention mechanism in the transformer and a nonparametric kernel regression problem, we developed Transformer-RKDE, which leverages robust kernel density estimation as a replacement for dot-product attention to alleviate the effect of outliers. We showed that the optimal estimation of the density functions via robust KDE requires computing a set of weights by solving an iteratively re-weighted least-squares problem. Empirical evaluations show that Transformer-RKDE can improve performance on clean data while demonstrating robust results under various attacks on both vision and language modeling tasks. The Transformer-RKDE framework has the merit of generalizing to the whole family of transformer models, which we intend to demonstrate in future work. We will also investigate better and more efficient approaches to estimating the set of weights for the RKDE.
A Question-answer Distance Measure to Investigate QA System Progress
The performance of question answering systems is evaluated through successive evaluation campaigns: a set of questions is given to the participating systems, which must find the correct answers in a collection of documents. The process used to create the questions may change from one evaluation to the next, which may entail an uncontrolled shift in question difficulty. For the QAst 2009 evaluation campaign, a new procedure was adopted to build the questions. Comparing the results of the QAst 2008 and 2009 evaluations, a strong performance loss can be measured in 2009 for French and English, while the Spanish systems globally made progress. The measured loss might be related to this new way of elaborating questions. The general purpose of this paper is to propose a measure to calibrate the difficulty of a question set; in particular, a reasonable measure should output higher values for 2009 than for 2008. The proposed measure relies on a distance between the critical elements of a question and those of the associated correct answer. An increase of the proposed distance measure for the French and English 2009 evaluations compared to 2008 could be established, and this increase correlates with the previously observed degraded performances. We conclude on the potential of this evaluation criterion: such a measure is important for the elaboration of new question corpora for question answering systems, and provides a tool to control the level of difficulty across successive evaluation campaigns.
Introduction
The questions-answering (QA) task consists of providing short, relevant answers to natural language questions. QA research has focused on extracting information from text or spoken sources, providing the shortest relevant text in response to a question. For example, the correct answer to the question "Besides France and Germany, where have we seen cases of mad cow-like disease affecting goats?" is "Belgium" rather than a list of documents. This simple example illustrates the two main advantages of QA over current search engines: first, the input is a natural-language question rather than a keyword query; and second, the answer provides the desired information content and not simply a potentially large set of documents or URLs that the user must plow through.
In the QA domain, progress has been observed via evaluation campaigns (Dang et al., 2007; Mitamura et al., 2008; Forner et al., 2008; Turmo et al., 2008). The QAst (Questions-Answering on Speech Transcriptions) campaigns focus on evaluating QA systems on speech transcriptions. Oral sentences have different features than written ones (long sentences, for instance), and the aim is to evaluate the systems on this type of data. Moreover, the systems are evaluated on three different languages: French, English and Spanish.
In the QAst 2009 evaluation (Turmo et al., 2009), a new procedure for building the question corpus was proposed. In the previous QAst evaluations (Turmo et al., 2008), the questions were created by the evaluators from the documents. In 2009, the objective was to build more spontaneous questions: native speakers were requested to read excerpts of documents and to ask, orally, questions about information related to but not included in these excerpts. Because of this new building procedure, the correct answer to a question can potentially lie far away from the excerpt used to create the question, especially given the long sentences found in oral transcriptions. Thus, we aim to evaluate whether this new building procedure had an impact relative to the results obtained in the QAst 2008 campaign.
In this paper, we propose a new measure, based on the distance between the answer to a question and the question's elements, to evaluate whether the difficulty of the task changed as a result. First, we compare the results obtained in the 2008 and 2009 QAst evaluations. We then motivate and describe our measure, which is applied to the 2008 and 2009 question corpora for each language (French, English and Spanish). We analyze the results and finally conclude on the potential of this measure to assist in the building of new question corpora for evaluation campaigns.
Observations on QAst 2008 and 2009 results
A first observation comes from the general results obtained by all the participants: they all went down (Turmo et al., 2009). This loss is shown more clearly in Table 3. Moreover, all the other participants in both evaluation campaigns observed a general performance loss for their English systems.

Table 4: Results for the other systems on English.
The same important differences between the 2008 and 2009 results are observed for the written modality.
Observing the two question sets (see (Turmo et al., 2009) for details), we noticed that the written questions were corrected versions of the spoken ones. We consequently consider that the way the questions were collected had a more fundamental influence.
Comparison between 2008 and 2009 corpus
To understand these differences in performance, we compared the 2008 and 2009 test corpora. We believe that the performance loss between the 2008 and 2009 evaluations can be explained in part by a greater distance between the answers and the question elements in the 2009 test data. Quantifying the difference required us to design a distance measure between the question elements as found in the documents and the answer. The aim is also to have a measure that can be reused on any question corpus.
A distance measure for question corpora
We aim to evaluate the distance between the elements of a question and its correct answer. In the QAst evaluation campaigns, only the correct answer (there can be several in some cases) is given, along with the document where this answer can be found. As such, we do not know the excerpts of the documents used to create the questions; these excerpts contain the elements of the questions, or transformations of these elements. Also, while we know the document where the answer can be found, there are often several occurrences of the same answer in a document. Because we do not know exactly where the elements used to build the questions are, we need an approach that evaluates the global distribution of the occurrences of each element and of each answer to a question within a document.
For each question of the corpora, we measured the global distance between the elements of the question and the occurrences of the correct answer. The global distance is computed as the average of the distances between the question elements found in the document and the answer. Only question elements considered important by our system are kept: named entities (standard, extended and non-specific) and multi-word expressions. Consider the following question: Which political leader of Palestine died recently? and the passage below.
The death of Arafat means that we will now have a new election in Palestine. The European Union has told Israel that the dialog between the two countries is important to sign a truce. It is necessary to get a new political leader as soon as possible.
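To make the measure concrete, here is a simplified token-level sketch: it matches each element and the answer as single tokens and averages the distance from each element occurrence to the nearest answer occurrence, which is one plausible reading of the description above (the paper's exact matching and averaging details are not given). For the example, the elements could be "Palestine" and "leader", and the answer "Arafat".

```python
def question_answer_distance(doc_tokens, elements, answer):
    """Average token distance between occurrences of the question's
    critical elements (named entities, multi-word expressions, matched
    here as single tokens) and the nearest occurrence of the answer."""
    def positions(target):
        return [i for i, tok in enumerate(doc_tokens) if tok == target]
    answer_pos = positions(answer)
    if not answer_pos:
        return float("inf")
    dists = [min(abs(p - q) for q in answer_pos)
             for el in elements
             for p in positions(el)]
    return sum(dists) / len(dists) if dists else float("inf")
```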
Evaluation of the measure
The proposed distance measure was used to investigate the differences between the test corpora of the French, English, and Spanish tasks of QAst 2008 and 2009. Table 5 shows the results of that analysis, where AD is the average distance obtained for a question corpus and SD the standard deviation. A large gap can be seen between the 2008 and 2009 data on the French and English sets: the mean distance increases strongly in the QAst 2009 test corpora compared to the previous year, especially on the French corpus. However, we see a very strong decrease for the Spanish task. As shown in Table 3, there were almost no differences between the spoken and written modalities in the 2009 data, so those measures do not appear in Table 5. In order to better represent the distribution of the distances, we split the values into nine categories, ranging from questions with a distance of zero to questions with a distance greater than 500. The X axis represents the nine categories and the Y axis the number of questions with a given distance value. The figure thus shows, for each corpus, the number of questions in each category, which allows us to see the evolution of a corpus from 2008 to 2009.
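The nine-way bucketing behind this distribution can be reproduced with a simple counting pass. Only the first category (distance of zero) and the last (distance greater than 500) are fixed by the text, so the intermediate bounds in this sketch are illustrative assumptions:

```python
from collections import Counter

# Upper bounds of the first eight categories; everything above 500 falls
# into the ninth. Only the bounds 0 and 500 are given in the paper.
BOUNDS = [0, 50, 100, 150, 200, 300, 400, 500]

def categorize(distances):
    """Count how many questions fall into each of the nine distance categories."""
    counts = Counter()
    for d in distances:
        category = next((i for i, b in enumerate(BOUNDS) if d <= b), len(BOUNDS))
        counts[category] += 1
    return counts  # keys 0..8, where 8 means a distance greater than 500
```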
Discussion
As stated before, we believe that the way the questions were created for the QAst 2009 evaluation can partially explain the performance loss observed between the 2008 and 2009 evaluations. Because the speakers had to ask questions about information not contained in the text excerpts, we hypothesized that the distance between the correct answer and the elements of the question was different from that in the 2008 evaluation. We built a distance measure to quantify the difference. The proposed distance measure allows us to assess the evolution between the test sets of successive evaluations.
Correlation between the distance results and the evaluation campaign results
Using this distance, we compared the test sets for the French, English, and Spanish tasks of QAst 2008 and 2009. As shown in Table 5, the average distance increases on the French and English tasks, whereas the Spanish task shows a strong decrease. For each of these three tasks, the standard deviation is very high, indicating strong variations among the distances within a corpus. As such, the mean distance value alone is not a good characterization of the distances of a corpus. Figure 1 shows the distribution of the distance values for each test corpus. We can observe that while the mean distances for the English test corpora are relatively similar between 2008 and 2009 compared to the French and Spanish corpora, the distribution indicates a strong dispersion. There is also a strong dispersion of the values for French and Spanish. For instance, the Spanish test corpus of 2008 has many values with a great distance: 8 questions with a distance greater than five hundred, while there are 7 questions with a distance value of zero. On the other hand, the test corpus of 2009 has more values with a small distance: 14 questions have a distance of zero, while no question has a distance greater than five hundred. These distributions of values clearly illustrate the evolution of the three test corpora between 2008 and 2009.
The average distances obtained on the French and English corpora may partially explain the large loss between the QAst 2008 and 2009 evaluations. The distance between the elements of a question and its answer has an important effect on the segmentation into snippets of the documents processed by the QA systems. This segmentation is a fundamental aspect of the way QA systems work; its aim is to simplify the extraction of the answer. Depending on the system, a snippet can be a sentence or a group of lines. When working on oral transcriptions, the snippets are generally built using blocks similar to normal sentences. (Reyes-Barragan et al., 2009) segment the documents into passages of 24 words, with adjacent passages sharing twelve words. (Comas and Turmo, 2009) define the passages as segments where two consecutive keywords are separated by no more than w words. In (Bernard et al., 2009), the documents are selected using a search descriptor which contains the elements of the question critical to finding the correct answer; the snippets are then extracted using a window size fixed for each question type, tuned on the corpora of the previous years. In the (Reyes-Barragan et al., 2009) and (Comas and Turmo, 2009) approaches, the segmentation of the documents requires the question elements to be relatively close to each other, or the sentences to have a fixed size. In (Bernard et al., 2009), the segmentation requires the development corpus to be similar to the test corpus. For the 2009 campaign, the development data came from the corpus of the 2008 campaign. Alas, the questions of 2008 and 2009 were created differently. A sketch of the fixed-size segmentation scheme is given below.
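For concreteness, the fixed-size overlapping segmentation of (Reyes-Barragan et al., 2009), with 24-word passages sharing twelve words with their neighbours, can be sketched as a sliding window. The function below is our reading of that scheme, not the authors' code:

```python
def sliding_passages(tokens, size=24, overlap=12):
    """Cut a token list into fixed-size passages in which adjacent
    passages share `overlap` tokens (here, 24-word passages overlapping by 12)."""
    step = size - overlap
    return [tokens[i:i + size] for i in range(0, max(len(tokens) - overlap, 1), step)]

tokens = ("the death of arafat means that we will now have "
          "a new election in palestine very soon now").split() * 5
passages = sliding_passages(tokens)
assert all(len(p) == 24 for p in passages[:-1])  # all but the last passage are full-size
```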
As such, if the average distance of the development data differs from the average distance of the test data, the window-size parameters will not be adapted to the test data. If the parameters are too low, silence increases: the window is too small, so there are fewer snippets with an answer close to the elements of the question. On the other hand, if the parameters are too high, noise increases: there are many more candidate answers and it becomes more difficult to evaluate each one.
The window size of the 2009 system was fixed using the corpus of the 2008 campaign, for which the balance between noise and silence is good. Figure 1 shows that on the 2009 evaluation for Spanish the distances are very low. Because the window size is too large, there are many candidate answers for the system to process, making it more difficult to determine which one is correct. This could explain why the results were poor on the 2009 test corpus; the window-size parameters would need to be set to a relatively low value in order to decrease the noise. Conversely, the distance values on the French and English 2009 corpora are much higher. This time the window size is too small, so there are fewer snippets to evaluate. This phenomenon might also explain the loss observed in the 2009 campaign.
Finally, it seems that while the new way of building the question corpora can explain the loss in the results obtained by each system in the 2009 evaluation, it is not the only criterion. For instance, the type of data processed for each language could be another criterion: the French task is based on journalistic speech (Broadcast News), while the English and Spanish tasks are based on parliamentary talks (EPPS). Features of a language can also be strong criteria to explain these differences in results (Bernard et al., 2010).
Usability for future evaluations
This measure was used to evaluate the impact of the new way of building the question corpora for the QAst 2009 evaluation campaign. A strong loss between the 2008 and 2009 evaluations was observed, and our main hypothesis was that the new approach was at least one of the criteria explaining this loss. Because of the building procedure for the question corpora, we supposed that the distance between the elements of a question and its answer would increase; higher distance values could explain the drop in results between 2008 and 2009. The average distance of each question corpus was therefore evaluated with our distance measure.
As discussed in 4.1., the results of this measure show that this new way of building questions does not always imply a greater distance between the elements of a question and its answer. While the average distance does increase on the French and English 2009 corpora, we observe a surprisingly strong decrease on the Spanish 2009 corpus. Moreover, the results show that for each language there is a difference between the 2008 and 2009 average distances, and this difference is very strong for the French and Spanish tasks. This implies that the question corpora of 2008 do not evaluate the systems on the same criteria as those of 2009. The measure can thus be used as a criterion to evaluate the evolution of an evaluation campaign when building a new question corpus. If the aim of a campaign is to evaluate the systems on the same features as the previous iteration in order to analyze the progress made, this measure can provide interesting data on the average distance between the elements of a question and its answer.
This approach was developed using the critical-elements representation of the LIMSI system, but it can clearly be generalized to other system outputs. The measure only needs to be adapted to another representation of the critical elements of a question.
Conclusion and perspectives
There was a large loss in system results between the QAst 2008 and QAst 2009 test corpora. One reason for this difference could be rooted in the new methodology used to build the question corpora. To evaluate this hypothesis, a new measure was built to compute, for each question of an evaluation corpus, the average distance between the elements of the question and its answer. This measure was applied to the three common tasks of the 2008 and 2009 QAst evaluations, which featured three languages: French, English, and Spanish. As stated in section 3., the methodological difference resulted in questions where the distances between the elements of the question found in the documents and the answer are much greater than in the 2008 evaluation, but only for the French and English tasks. On the contrary, the measure on the Spanish task shows a strong decrease of the average distance.
As such, while it might be supposed that this new way of building questions implies an increase in the distance between the elements of a question and its answer, this is not always the case. The decrease in system performance on the QAst 2009 evaluation therefore cannot be explained solely by a greater distance, and other measures are needed to identify the problems encountered in this evaluation. For instance, it could be interesting to evaluate the presence of referential expressions. Evaluating the features of the different languages could also explain the differences between Spanish on the one hand and French and English on the other.
Finally, this measure shows great potential for evaluating the differences between several iterations of an evaluation campaign. For instance, it can be used to evaluate the evolution of a campaign from one edition to another. This point is particularly important if the aim of the evaluation is only to measure the progression of the candidate systems, without adding new features. As such, it could be interesting to develop other measures to evaluate the evolution of a campaign.
|
2015-07-13T20:20:30.000Z
|
2010-05-01T00:00:00.000
|
{
"year": 2010,
"sha1": "1c40af1607b2de20997a11514099c5769da6374c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "6096ac98c11d4d500979b5f2703c316c72990812",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
267440574
|
pes2o/s2orc
|
v3-fos-license
|
Accuracy of the 10 μg desmopressin test for differential diagnosis of Cushing syndrome: a systematic review and meta-analysis
We evaluated the accuracy of the 10 μg desmopressin test in differentiating Cushing disease (CD) from non-neoplastic hypercortisolism (NNH) and ectopic ACTH syndrome (EAS). A systematic review of studies on diagnostic test accuracy in patients with CD, NNH, or EAS subjected to the desmopressin test obtained from LILACS, PubMed, EMBASE, and CENTRAL databases was performed. Two reviewers independently selected the studies, assessed the risk of bias, and extracted the data. Hierarchical and bivariate models on Stata software were used for meta-analytical summaries. The certainty of evidence was measured using the GRADE (Grading of Recommendations Assessment, Development, and Evaluation Working Group) approach. In total, 14 studies were included: 3 studies on differentiated CD versus NNH and 11 studies on differentiated CD versus EAS. Considering ΔACTH in 8 studies involving 429 patients, the pooled sensitivity for distinguishing CD from EAS was 0.85 (95% confidence interval [CI]: 0.80–0.89, I2 = 17.6%) and specificity was 0.64 (95% CI: 0.49–0.76, I2 = 9.46%). Regarding Δcortisol in 6 studies involving 233 participants, the sensitivity for distinguishing CD from EAS was 0.81 (95% CI: 0.74–0.87, I2 = 7.98%) and specificity was 0.80 (95% CI: 0.61–0.91, I2 = 12.89%). The sensitivity and specificity of the combination of ΔACTH > 35% and Δcortisol > 20% in 5 studies involving 511 participants were 0.88 (95% CI: 0.79–0.93, I2 = 35%) and 0.74 (95% CI: 0.55–0.87, I2 = 27%), respectively. The pooled sensitivity for distinguishing CD from NNH in 3 studies involving 170 participants was 0.88 (95% CI: 0.79–0.93) and the specificity was 0.94 (95% CI: 0.86–0.97). Based on the desmopressin test for differentiating CD from EAS, considering ΔACTH, Δcortisol, or both percent increments, 15%, 19%, or 20% of patients with CD, respectively, would be incorrectly classified as having EAS. For CD versus NNH, 11% of patients with CD would be falsely diagnosed as having NNH, whereas 7% of patients with NNH would be falsely diagnosed as having CD. However, in all hierarchical plots, the prediction intervals were considerably wider than the confidence intervals. This indicates low confidence in the estimated accuracy, and the true accuracy is likely to be different. Systematic review registration https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=85634, identifier CRD42018085634; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=68317, identifier CRD42017068317.
Introduction
Evaluation of patients with suspected hypercortisolism is one of the most challenging investigations in endocrinology (1). This is due to the intermittent activation of the dynamic hypothalamic-pituitary-adrenal (HPA) axis, which results in clinical and biochemical characteristics that are indistinguishable between neoplastic and non-neoplastic forms of hypercortisolism. Furthermore, even in neoplastic cases, it is often difficult to distinguish between the two main differential diagnoses, namely, endogenous neoplastic hypercortisolism and non-neoplastic hypercortisolism (NNH) (1).
In adults, the most frequent etiology of endogenous neoplastic hypercortisolism is Cushing disease (CD), accounting for approximately 70% of Cushing syndrome (CS) cases (2). CD is caused by increased production of adrenocorticotropic hormone (ACTH) due to a pituitary adenoma. It has an incidence and prevalence of 2-3 cases per 1,000,000 inhabitants/year and 40 cases per 1,000,000 inhabitants, respectively (3). The principal differential diagnosis of CD is endogenous neoplastic hypercortisolism secondary to ectopic production of ACTH (ectopic ACTH syndrome [EAS]), which accounts for 10%-20% of the causes of ACTH-dependent CS (4).
When the prevalence of one of the conditions that characterize NNH increases, many patients with endogenous neoplastic hypercortisolism may not develop the most specific signs and symptoms associated with this hormonal disorder (e.g., easy bruising, capillary fragility, proximal weakness, and reddish-purple striae). Thus, there is an urgent need to distinguish these two clinical conditions. Additionally, as pituitary microadenomas may be present in 9.3% (range, 1.5%-26.7%) of pituitary incidentalomas in the general population (10) and in up to 38% of patients with EAS (11), the differential diagnosis between CD and EAS has been recommended (7, 12), especially when a lesion smaller than 6 mm is observed on pituitary magnetic resonance imaging (MRI).
Regarding the differential diagnosis between CD and EAS, the gold standard examination is bilateral and simultaneous inferior petrosal sinus sampling (BIPSS). This method exhibits a diagnostic accuracy of 90%-98% (13-15). However, BIPSS is invasive and should be performed by highly qualified professionals (7); these factors have limited its widespread use. Therefore, some dynamic tests have been developed for the differential diagnosis of endogenous CS.
The corticotropin-releasing hormone (CRH) test, the dexamethasone-suppressed CRH stimulation test (Dexa-CRH test), and the desmopressin stimulation test have been widely used to distinguish neoplastic hypercortisolism from NNH as well as to perform differential diagnosis between CD and EAS (16-18). However, the current lack of availability of CRH for diagnostic purposes, even in countries where it was previously used, has led to increased use of the desmopressin stimulation test to examine HPA axis function (1, 19).
Although these dynamic tests have been studied in detail in CS, no evidence synthesis with meta-analysis has focused on the desmopressin test. Thus, we aimed to evaluate the diagnostic accuracy of the desmopressin test at an intravenous dose of 10 µg in distinguishing neoplastic hypercortisolism from NNH and in the differential diagnosis between CD and EAS.
Methods
A systematic review was conducted according to the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy (20, 21), and the results were reported according to the PRISMA diagnostic test accuracy (DTA) criteria (22). The protocol was registered in the International Prospective Registry of Systematic Reviews (IDs: CRD42018085634 and CRD42017068317).
Eligibility criteria
We included the DTA studies that followed the PIRO structure described below.
Population (P)
Patients with clinical suspicion of endogenous CS who underwent at least two different screening tests for hypercortisolism: 24-h urinary free cortisol (UFC), late-night salivary cortisol, no suppression of serum cortisol after the administration of 1 mg dexamethasone overnight, or no suppression after the administration of 2 mg dexamethasone for 48 h.
Index test (I)
We considered desmopressin administered at an intravenous dose of 10 µg as the index test. Serum cortisol and plasma ACTH levels were measured at 15 and 0 min before and 15, 30, 45, 60, and 90 min after desmopressin administration.
Reference test (R)
Patients diagnosed with an ACTH-secreting pituitary adenoma on pathologic analysis after pituitary surgery were considered to have CD. Patients who did not undergo surgery were considered to have CD if their plasma ACTH level was >10 pg/mL and if they met one of the following criteria: BIPSS with a central-to-peripheral plasma ACTH ratio of ≥2.0 before or ≥3.0 after CRH or desmopressin administration, or the presence of a pituitary adenoma measuring >6 mm on MRI in a patient with concordant results suggestive of CD based on the high-dose dexamethasone suppression test (HDDST) and the CRH or desmopressin stimulation tests (7).
EAS was diagnosed through immunohistochemical analysis of tumor tissues. In the absence of surgery, or with immunohistochemistry negative for ACTH expression, which can be noted in up to 30% of EAS cases (11, 23, 24), the absence of a central ACTH gradient at BIPSS (25) or improvement in hypercortisolism after surgery was considered.
A diagnosis of NNH was made in patients with major depression, obsessive-compulsive disorder, anxiety disorder, chronic alcoholism, or severe obesity, as well as in those who exhibited resolution of hypercortisolism at follow-up after control of the NNH-associated disease (8, 9, 26).
Outcomes (O)
Using a 2 × 2 contingency table, the performance of the desmopressin test was compared with that of the reference test, determining true-positive, false-positive, false-negative, and true-negative cases for CD diagnosis. Based on these data, the accuracy of the index test (sensitivity, specificity, positive likelihood ratio [LR+], and negative likelihood ratio [LR−]) was calculated.
Exclusion criteria
Studies involving patients who were diagnosed with CD without meeting the abovementioned confirmatory criteria were excluded. Moreover, studies involving patients with NNH who did not undergo outpatient follow-up to evaluate hypercortisolism after resolution of the NNH-associated disease were excluded. Studies including patients who were diagnosed with CD or EAS without meeting the abovementioned confirmatory criteria were also excluded.
Search strategies
Four general search strategies were implemented for the EMBASE (1980-10/10/2017), PubMed (1966-10/10/2017), LILACS (1982-10/10/2017), and CENTRAL (Cochrane Collaboration Controlled Trials Registry-10/10/2017) electronic databases (Supplementary File). All databases were searched a second time on September 25, 2023. The index terms "Cushing disease" and "desmopressin" were used to establish each search strategy, with no language or year restrictions. EndNote X9 citation management software was used to download the references and remove duplicate entries. For initial screening of abstracts and titles, the free web application Rayyan QCRI was used (27).
Study selection
Four reviewers, independently and in pairs (RRG, MVGC, EGP, and VSN-N), selected titles and abstracts from the references identified through the bibliographic search. After the selection of potentially eligible studies, the full texts were reviewed. The studies were evaluated for conformance to the proposed PIRO structure. In case of disagreements during the selection process, consensus was achieved through discussion. The reasons for the exclusion of each study were documented.
Data extraction and management
Two reviewers extracted data regarding study characteristics and the corresponding participant-related information for each study. For each comparison between the index and reference tests, the numbers of true-positive, true-negative, false-positive, and false-negative cases were extracted in the form of a 2 × 2 table.
Risk of bias and applicability
The risk of bias associated with the included studies was evaluated using the Quality Assessment of Diagnostic Accuracy Studies tool (28).
Unit of analysis
The unit of analysis was the aggregate data extracted from the journal publications.
Synthesis of results (meta-analysis)
For each study, a 2 × 2 contingency table was constructed. Sensitivity, specificity, and LRs were calculated. When a primary study had a value of 0 in a cell of the 2 × 2 table, the value of 1 was added to facilitate calculations (29); this was observed in two of the included studies.
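A minimal sketch of this per-study computation follows; it assumes, as one reading of the correction above, that 1 is added to all four cells whenever any cell is zero, and the variable names are illustrative:

```python
def accuracy_from_2x2(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table,
    adding 1 to each cell when any cell is zero, as described above."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = tp + 1, fp + 1, fn + 1, tn + 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "LR+": sensitivity / (1 - specificity),
            "LR-": (1 - sensitivity) / specificity}

print(accuracy_from_2x2(tp=30, fp=4, fn=5, tn=12))  # hypothetical study counts
```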
We performed meta-analyses using hierarchical and bivariate models, which account for variability in intrastudy accuracy as well as interstudy variations in test performance with the inclusion of random effects (30).Based on the results of heterogeneity investigations, the bivariate model was used to estimate summary sensitivity and specificity (summary points), and the hierarchical summary receiver operating characteristic (HSROC) model was applied to construct summary ROC curves.
Stata Statistical Software V.18 (StataCorp LLC), with metadta and metandi commands, was used for analyses.
Assessment of heterogeneity
Forest and HSROC plots were visually assessed for heterogeneity. When the data allowed, we evaluated the sources of heterogeneity through subgroup analyses; meta-regression could not be performed because of the limited number of available studies. Variability away from the summary ROC curve is likely to represent greater heterogeneity than variation along the summary ROC curve, which might correspond to simple threshold effects. Had the number of included studies been adequate, we would have assessed the following potential sources of heterogeneity: patient characteristics, test methods, and study design. A separate SROC curve would have been fitted for each subgroup, and the results compared graphically across subgroups (30).
Sensitivity analyses
Had the number of selected studies been adequate, we would have assessed the robustness of our results by conducting sensitivity analyses according to the thresholds of the ACTH and cortisol percent increments after the desmopressin test.
Grading of the quality of evidence
For each outcome, the findings were summarized in tabulated format to determine the effectiveness of the index test. The certainty of evidence was measured using the GRADE (Grading of Recommendations Assessment, Development, and Evaluation Working Group) approach (31, 32).
Study selection
The search strategies yielded 1,940 references. After removing duplicates, 1,838 studies remained (Figure 1). Thirty-three studies potentially eligible for inclusion were selected for full-text review. Of these, 19 studies were excluded for the following reasons. One study was a narrative review article (6), and seven studies did not use the desmopressin test as the index test (33-39). In another study, the authors used the desmopressin test to distinguish patients with CD from those with a clinical and laboratory suspicion of CS; although most patients suspected of CS had undergone at least one positive screening test for hypercortisolism, they were not classified as carriers of NNH (40). Two studies compared the results of the desmopressin test in patients with CD and those with depression; however, the patients with depression showed no clinical or laboratory features of CS (41, 42). Three studies involved patients who were previously included in a published series (43-45). Another study (46) had no patients with EAS in their series. Salgado et al. (47) evaluated the desmopressin test results in patients with EAS, and no patient with CD was included in their series. Sakai et al. and Suda et al. conducted the desmopressin test with 5 and 4 µg of desmopressin, respectively (48, 49). In another study, the criteria used for distinguishing CD from NNH were not described (50).

FIGURE 1
Flowchart of the identification of eligible studies.
Study characteristics
According to our eligibility criteria, we included 14 studies: 3 studies distinguishing CD from NNH (51-53) and 11 studies distinguishing CD from EAS (54-64). These studies included 979 participants (782 with CD, 79 with NNH, and 118 with EAS). Five of the included studies also involved a group of healthy individuals who underwent desmopressin tests. Tables 1 and 2 present the descriptive data of the included studies on the differential diagnosis of CD versus EAS and CD versus NNH, respectively.
All studies conducted intravenous desmopressin tests, wherein a slow bolus of 10 µg desmopressin was injected into the antecubital vein of patients who had fasted overnight. This was followed by the measurement of plasma ACTH and serum cortisol levels at 15 and 0 min before and 10, 20, 30, 45, 60, 90, and 120 min after desmopressin administration. Only Terzolo et al. (62) excluded the 120-min time point from their protocol. Barbot et al. used the following time points: 15 and 0 min before and 15, 30, 45, 60, 90, and 120 min after desmopressin administration (57). The baseline ACTH and cortisol levels were expressed as the means of the respective measurements taken at 15 and 0 min before desmopressin administration. The absolute increase in plasma ACTH levels after desmopressin administration was defined as the difference between the value at 0 min and the highest value attained within 30 min (ΔACTH).
Regarding the differential diagnosis of CD versus NNH, the patients in the included studies were suspected of having endogenous CS, and most of them had mild hypercortisolism. A similar definition of mild hypercortisolism was used in all included studies: Tirabassi et al. defined it as a 24-h UFC level of <771 nmol/day (~2 times the upper limit of the normal range [ULNR]), whereas Moro et al. (51) and Giraldi et al. (53) defined it as a 24-h UFC level of <690 nmol/day (~3 times the ULNR). Regarding the criteria for differentiating CD from NNH, one study defined CD as a ΔACTH of >4 pmol/L along with a baseline serum cortisol … To distinguish CD from EAS, eight studies calculated sensitivities and specificities based on the ΔACTH percent increment, six studies based on the Δcortisol percent increment, and five studies based on both the Δcortisol and ΔACTH percent increments. For CD diagnosis, the most frequently used criteria were a ΔACTH of >35% and a Δcortisol of >20% (Table 1). Most criteria used in these studies were prespecified by the authors.
Risk of bias
Figure 2 summarizes the overall methodological quality of all included studies. These studies retrospectively evaluated series of CS patients requiring differential diagnosis of CD versus EAS or CD versus NNH. However, the studies did not report whether participant recruitment was performed randomly or consecutively; therefore, all included studies were considered to have an unclear risk of bias for patient selection. Barbot et al. (57) did not prespecify the threshold used; therefore, we considered their study to have an unclear applicability concern for the index test. The other studies and domains were judged to have a low risk of bias and low applicability concern.
In all analyses, forest plots revealed greater variability in the estimated specificity than in the estimated sensitivity across the studies. In addition, in the graphical outputs obtained after fitting the hierarchical model, the 95% CIs were extremely wide, and the prediction intervals were wider than the CIs (Figures 3B, 4B, 5B).
Quality of evidence
The quality of evidence regarding the desmopressin test for evaluating CD versus EAS was downgraded by two levels because of the risk of bias and uncertainty (all studies were evaluated as having an unclear risk of bias for patient selection, and the prediction intervals in all pooled analyses were considerably wider than the CIs). For evaluating CD versus NNH, the evidence was downgraded by three levels because of the risk of bias, uncertainty, and imprecision (few participants per study). Publication bias could not be investigated because of the small number of studies included per meta-analysis (<10).
Discussion
Considering the need to differentiate CD from EAS and NNH, we evaluated the accuracy of the desmopressin test in these two clinical scenarios. We conducted a systematic literature review and found 14 studies that met our eligibility criteria. Based on the studies included in this review, 84 of 100 patients with ACTH-dependent CS will have CD (362/429) and 16 will have EAS (67/429). Of the 84 patients with CD, 13 (15%) will be misdiagnosed as not having CD based on the desmopressin test. Of the 16 patients with EAS, 6 (36%) will be falsely considered as having CD. The patients with EAS falsely diagnosed as having CD may have to undergo MRI; in the absence of an adenoma larger than 6 mm, BIPSS will be performed, and the diagnosis may be rectified. Conversely, the patients with CD falsely diagnosed as having EAS would have to undergo an extensive investigation to determine the presence of ectopic ACTH production. For patients with mild hypercortisolism, 48 and 52 of 100 patients will have CD and NNH, respectively. Among the 48 patients with CD, the desmopressin test may misdiagnose 5 (11%); however, these patients can be re-tested. Of the 52 patients with NNH, 4 may be unnecessarily referred for MRI and occasionally for BIPSS.
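The counts in this paragraph follow directly from prevalence, pooled sensitivity, and pooled specificity. A sketch of the arithmetic for the CD-versus-EAS scenario under the ΔACTH criterion, with the figures taken from the text and a helper function of our own, follows:

```python
def expected_counts(n, prevalence, sensitivity, specificity):
    """Expected classification outcomes per n patients tested."""
    diseased = n * prevalence
    healthy = n - diseased
    return {"true_pos": diseased * sensitivity,         # CD correctly identified
            "false_neg": diseased * (1 - sensitivity),  # CD missed
            "true_neg": healthy * specificity,          # EAS correctly identified
            "false_pos": healthy * (1 - specificity)}   # EAS labelled as CD

# CD vs EAS: prevalence 84% (362/429), pooled sensitivity 0.85, specificity 0.64
print(expected_counts(100, 0.84, 0.85, 0.64))
# ~13 of 84 CD patients missed (15%); ~6 of 16 EAS patients labelled CD (36%)
```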
Although the separate meta-analyses of each summary point seem to indicate high accuracy in distinguishing CD from EAS and NNH, we found that specificity decreased as sensitivity increased in all analyses. This occurred because separate pooling overlooks the correlation between sensitivity and specificity (20).

FIGURE 2
Risk of bias and applicability concerns: authors' judgment on each domain for all included studies. (A) Desmopressin test to distinguish Cushing disease from non-neoplastic hypercortisolism. (B) Desmopressin test to distinguish Cushing disease from ectopic ACTH syndrome.
FIGURE 3
(A) Forest plot depicting the sensitivity and specificity considering the ACTH percent increment after the 10 µg desmopressin test to distinguish Cushing disease from ectopic ACTH syndrome. The figure indicates the estimated sensitivity and specificity of each study (black circle) and its 95% confidence interval (black horizontal line). (B) Summary ROC plot from Stata after fitting the hierarchical model to the ACTH percent increment. The circles represent the estimates of individual primary studies, and the square indicates the summary point of sensitivity and specificity. The HSROC curve is plotted as a curvilinear line passing through the summary point. The 95% confidence interval and 95% prediction interval are also provided. HSROC, hierarchical summary receiver operating characteristic.

TABLE 3
Summary of the proposed "PIRO" and the pooled sensitivity and specificity results of the accuracy of the 10 µg desmopressin test to distinguish Cushing disease (CD) from ectopic ACTH syndrome (EAS), and certainty of evidence according to the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) approach.
What is the accuracy of the 10 µg desmopressin test in distinguishing Cushing disease (CD) from ectopic adrenocorticotropic hormone (ACTH) syndrome (EAS)? With a prevalence of 82%, 82 of 100 patients with ACTH-dependent Cushing syndrome will have CD. Of them, 16 will be missed by the desmopressin test (19% of 82). Patients with CD but falsely diagnosed with EAS will be subjected to an extensive investigation to determine the source of ectopic ACTH production. Of the 18 of 100 patients with EAS, four may be unnecessarily referred for MRI and sometimes for IPSS. All studies were evaluated as having an unclear risk of bias for patient selection; the prediction interval was considerably wider than the confidence interval. Low certainty of evidence.

…at 8 am after fasting following ingestion of 8 mg dexamethasone at night) was 50% in patients with CD (23, 72-74). To improve the specificity of the test, some authors have proposed suppression of 80% of cortisol levels as the cutoff value (64, 70). However, this may result in a low level of accuracy (64).
A systematic review evaluating the diagnostic accuracy of the CRH test, the desmopressin test, and the HDDST for establishing a CD or EAS diagnosis revealed that the CRH test had the highest sensitivity for detecting CD on the basis of ΔACTH (87%) and Δcortisol (86%), along with the highest specificity for detecting EAS on the basis of ΔACTH (94%) and Δcortisol (89%). However, I2 values suggested substantial heterogeneity for sensitivity (62% for ACTH and 78% for cortisol), and no HSROC curves were calculated (17).
The Dexa-CRH test (a test combining CRH administration after a 48-h, 2 mg/day low-dose dexamethasone suppression test) has previously been used to distinguish CS from NNH (49). Yanovski first used the Dexa-CRH test to detect CS and proposed that a serum cortisol level of >1.4 µg/dL (absolute value) observed 15 min after the test is suggestive of CS (33). Erickson et al. (16) and Giraldi et al. (53) used this test to distinguish CD from NNH; based on the abovementioned proposed cortisol cutoff, they achieved a sensitivity of 100% and specificities of 76% and 62.5%, respectively. Erickson et al. (16) also reported 95% sensitivity and 97% specificity in the ROC analysis for ACTH values of >27 pg/mL (5.9 pmol/L) at 15 min after the CRH stimulus.
The most crucial limitation of this review was the small number of included studies and the number of patients per study (<100 in most studies) (75). When the number of studies is small, deciding which terms should be included in a model and which model is best may be difficult. For both the bivariate and HSROC models, estimates of the variances of the random effects can be subject to a high level of uncertainty (30). Additionally, because a low number of studies was included per meta-analysis (<10), the presence of publication bias could not be evaluated. Moreover, we could not evaluate the sources of heterogeneity through subgroup analyses or meta-regression. Furthermore, the evaluated outcomes were limited to diagnostic accuracy, and evaluation of other crucial aspects from the patient's viewpoint, such as quality of life, stress, and the costs incurred due to a false-positive diagnosis, was lacking.
Although we did not specify remission of hypercortisolism as a criterion for pituitary or ectopic ACTH overproduction, no study was excluded on this basis, and we included studies in which CD was confirmed by remission of hypercortisolism after trans-sphenoidal surgery. Regarding the diagnostic approach to distinguish ACTH-dependent CS from ACTH-independent CS, persistent ACTH levels of >15 or >20 pg/mL have been used to diagnose ACTH-dependent hypercortisolism, ACTH levels of <5 or <10 pg/mL have been used to diagnose ACTH-independent hypercortisolism, and ACTH levels of 5-15 or 10-20 pg/mL have been reported as indeterminate, indicating that new samples should be ordered (7, 76). Indeterminate ACTH levels usually indicate ACTH-dependent cortisol secretion. Thus, to avoid losing studies that did not order new samples but instead used BIPSS and the presence of a pituitary adenoma measuring >6 mm on MRI to diagnose CD, we used a cutoff value of 10 pg/mL as an indication of ACTH-dependent hypercortisolism. Although some included studies did not use the ACTH value to distinguish these two diagnoses, all of them considered histopathological analyses, remission of hypercortisolism after pituitary surgery, or BIPSS results when diagnosing CD.
While this review was being performed, two other systematic reviews were published on the same topic. However, neither of them summarized sensitivity and specificity using hierarchical and bivariate methods or presented the certainty of evidence according to the GRADE approach (17, 77). Additionally, our review focused on the desmopressin test, an inexpensive test readily available in most countries, which has been used as a substitute for the CRH test.
In conclusion, this evidence synthesis demonstrates that using the desmopressin test to distinguish CD from EAS results in up to 20% of patients with CD being incorrectly diagnosed as having EAS. Additionally, using the desmopressin test to distinguish CD from NNH results in 11% of patients with CD being falsely diagnosed as having NNH and 7% of patients with NNH being falsely diagnosed as having CD. Thus, the use of the desmopressin test alone is not recommended to distinguish CD from EAS or CD from NNH.
FIGURE 4
(A) Forest plot depicting the sensitivity and specificity considering the cortisol percent increment after the 10 µg desmopressin test to distinguish Cushing disease from ectopic ACTH syndrome. Estimated study sensitivity and specificity (black circle); 95% confidence interval (black horizontal line). (B) Summary ROC plot from Stata after fitting the hierarchical model to the cortisol percent increment. Circles represent the estimates of individual primary studies, and squares indicate the summary points of sensitivity and specificity. The HSROC curve is plotted as a curvilinear line passing through the summary point. The 95% confidence interval and 95% prediction interval are also provided. HSROC, hierarchical summary receiver operating characteristic.
FIGURE 6
(A) Forest plot depicting the sensitivity and specificity considering the ACTH percent increment after the 10 µg desmopressin test to distinguish Cushing disease from non-neoplastic hypercortisolism. Estimated study sensitivity and specificity (black circle); 95% confidence interval (black horizontal line). (B) Summary ROC plot from Stata after fitting the hierarchical model to the ACTH percent increment. Circles represent the estimates of individual primary studies, and squares indicate the summary points of sensitivity and specificity. The HSROC curve is plotted as a curvilinear line passing through the summary point. The 95% confidence interval and 95% prediction interval are also provided. HSROC, hierarchical summary receiver operating characteristic.
TABLE 1
Characteristics of the studies included in relation to the "PIRO" and the contingency table for the accuracy of the 10 µg desmopressin test to distinguish Cushing disease (CD) from ectopic ACTH syndrome (EAS).
TABLE 2
Characteristics of the studies included in relation to the "PIRO" and the contingency table for the accuracy of the 10 µg desmopressin test to differentiate Cushing disease from non-neoplastic hypercortisolism.
TABLE 3 Continued
…(16% of 84). Patients with CD but falsely diagnosed with EAS will be subjected to an extensive investigation to determine the source of ectopic ACTH production. Of the 16 patients with EAS, 6 will be misdiagnosed as having CD and may be unnecessarily referred for MRI and sometimes for IPSS.
|
2024-02-06T18:26:01.727Z
|
2024-01-30T00:00:00.000
|
{
"year": 2024,
"sha1": "08a5ddc168423e8ac020a41638d39991d34f826f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2024.1332120/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ccb6772c8a22fa5946e98f133148f99f2600d81",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
91048948
|
pes2o/s2orc
|
v3-fos-license
|
Biological Characterization of the Uterine Malignant Mesenchymal Tumours
Sarcomas are neoplastic malignancies that typically arise in tissues of mesenchymal origin. The identification of novel molecular mechanisms leading to mesenchymal transformation and the establishment of new therapies and biomarkers have been hampered by several critical factors. First, malignant mesenchymal tumours are rarely observed in the clinic, with fewer than 15,000 new cases diagnosed each year in the United States. Another complicating factor is that sarcomas are extremely heterogeneous, as they arise in a multitude of tissues from many different cell lineages. The scarcity of clinical materials, coupled with their inherent heterogeneity, creates a challenging experimental environment for clinicians and scientists. Faced with these challenges, there has been extremely limited advancement in clinical treatment options available to patients as compared to other malignant tumours. In order to glean insight into the pathobiology of sarcomas, scientists are now using mouse models whose genomes have been specifically tailored to carry gene deletions, gene amplifications, and somatic mutations commonly observed in human sarcomas. The use of these model organisms has been successful in increasing our knowledge and understanding of how alterations in relevant oncogenic, tumour suppressive, and signaling pathways directly impact sarcomagenesis. It is the goal of many in the biological community that the use of several mouse models will serve as a powerful in vivo tool for further understanding of sarcomagenesis and potentially identify new diagnostic biomarkers and therapeutic strategies.
Introduction
Sarcomas are rare malignant tumours, with fewer than 15,000 new cases diagnosed each year in the United States. Though rare, sarcomas are highly debilitating malignancies, as they are often associated with significant morbidity and mortality. Sarcomas are biologically very heterogeneous, as evidenced by the fact that mesenchymal tumours arise from a plethora of different tissues and cell types. They are classically defined by their tissue of origin and are additionally stratified by their histopathology or the patient's age at clinical diagnosis [1]. While these classifications have proven useful, modern pathobiological and clinical techniques can further stratify sarcomas based on their genetic profile [2]. Cytogenetic and karyotype analyses have revealed two divergent genetic profiles in sarcomas. The first and simplest genetic profile is the observation of translocation events in sarcomas with an otherwise normal diploid karyotype. On the other hand, most sarcomas display a more complex genetic phenotype, suggesting that genomic instability plays an important role in many sarcomas.
Proteasome beta subunit (PSMB) 9/β1i is encoded in the major histocompatibility complex (MHC) class region and is a subunit of the 20S proteasome, which is part of the 26S complex that degrades ubiquitin-conjugated proteins. A study by Hayashi et al. reported that defective expression of PSMB9/β1i may initiate the development of spontaneous human uterine leiomyosarcoma (Ut-LMS) [3]. As human mesenchymal tumours, including Ut-LMS, are resistant to chemotherapy and radiotherapy, and surgical intervention is thus virtually the only means of treatment, developing an efficient adjuvant therapy is expected to improve the prognosis of the sarcoma. The identification of a risk factor associated with the development of mesenchymal tumours would significantly contribute to the development of diagnostic biomarkers and preventive and therapeutic treatments.
The IFN-γ-inducible factor PSMB9/β1i correlates with uterine mesenchymal transformation
Proteasomal degradation is essential for many cellular processes, including the cell cycle, the regulation of gene expression, and immunological function [4-6]. Interferon (IFN)-γ induces the expression of a large number of responsive genes, including subunits of the proteasome β-ring, i.e., proteasome beta subunit (PSMB)9/β1i, PSMB5/β5i, and PSMB10/multicatalytic endopeptidase complex-like (MECL)-1/β2i [7, 8]. A molecular approach to studying the correlation of IFN-γ with tumour cell growth has drawn attention. Homozygous mice deficient in PSMB9/β1i show tissue- and substrate-dependent abnormalities in the biological functions of the proteasome [7-9]. Ut-LMS reportedly occurred in female PSMB9/β1i-deficient mice at age 6 months or older, and the incidence at 14 months of age was about 40% [3, 10]. Histological studies of PSMB9/β1i-lacking human uterine mesenchymal tumours have revealed characteristic abnormalities of Ut-LMS [3, 10]. In recent studies, experiments with mouse uterine tissues and human clinical materials revealed a defective expression of PSMB9/β1i in human Ut-LMS that was traced to the IFN-γ pathway and to the specific effect on PSMB9/β1i transcriptional activation of somatic mutations in JANUS KINASE 1 (JAK1), a molecule that is also important for transducing signals by type I (IFN-α/β) and type II (IFN-γ) interferons [11]. Furthermore, analysis of several human Ut-LMS cell lines clarified the biological significance of PSMB9/β1i in malignant myometrium transformation, thus implicating PSMB9/β1i as an anti-tumorigenic candidate [10, 11].
Biological significance of TP53 in human sarcomagenesis
The tumour protein 53 (TP53) tumour suppressor pathway is one of the best-characterized pathways in malignant tumours [12]. The TP53 gene encodes a transcription factor required for the activation of numerous DNA damage-dependent checkpoint response and apoptotic genes, and thus its activities are often ablated in many malignant tumours. In addition to loss of TP53 functions via inherited germline mutations, the TP53 pathway is commonly disrupted by somatic mutations in the TP53 gene during sporadic sarcomagenesis [13, 14]. However, even though TP53 gene alterations are widely regarded as having a significant impact on sarcomagenesis, many sarcomas retain wild-type TP53 yet phenotypically display a loss of TP53 function. These findings suggest that changes in other components of the TP53 pathway, such as amplification of the Mouse double minute 2 (MDM2) homolog, a negative regulator of the TP53 pathway, may result in TP53 inactivation [15, 16]. Furthermore, both mice and humans with elevated levels of MDM2 due to a high-frequency single nucleotide polymorphism in the MDM2 promoter (Mdm2SNP309) are more susceptible to sarcoma formation [17]. Additionally, deletion or silencing of p19Arf (P14ARF in humans), an inhibitor of the MDM2-TP53 axis, often results in the development of sarcomas. To increase the incidence of uterine mesenchymal tumours, i.e., Ut-LMS, and to better assess the role of the systemic expression of transformation-related protein 53 (TRP53) in the initiation of mouse Ut-LMS tumorigenesis, Psmb9-deficient mice were bred with Trp53-deficient mice [18]. This breeding created Psmb9−/− Trp53−/− mice and closely matched control Psmb9−/− Trp53+/+ mice [18]. However, no significant differences in Ut-LMS incidence were observed between these genetically modified mouse groups. The relationship between the onset of human Ut-LMS and TP53 was not clarified by the clinical data or the experimental results obtained from these mice. Together, these data indicate that while inactivation of the TP53 pathway is observed in the vast majority of human sarcomas, with the exception of Ut-LMS, the mechanisms leading to disruption of the pathway can vary greatly.
Correlation between the biological function of RB and human sarcomagenesis
RETINOBLASTOMA (RB) is an embryonic malignant neoplasm of retinal origin; it almost always presents in early childhood and is often bilateral. The retinoblastoma gene (RB1) was the first tumour suppressor gene cloned. It is a negative regulator of the cell cycle through its ability to bind the transcription factor E2F and repress the transcription of genes required for the synthesis phase (S phase), the part of the cell cycle in which DNA is replicated, occurring between the G1 and G2 phases [19]. The RB pathway represents a second major tumour suppressor pathway deregulated in many sarcomas. Individuals inheriting a germline RB mutation typically develop malignant tumours of the eye early in life. However, in addition to retinal malignant tumours, these children have a significantly higher propensity to develop sarcomas than the general population [20]. While inheritance of germline RB alterations increases sarcoma risk, there are also numerous examples of sporadic sarcomas harbouring spontaneous mutations and deletions of RB, particularly osteosarcomas and rhabdomyosarcomas [21]. Furthermore, P16INK4A, a negative regulator of the CDK-CYCLIN complexes that phosphorylate and inactivate RB, is often deleted in human sarcomas [22]. Clinical evidence suggests an increased risk of Ut-LMS in hereditary RB patients [23]. Together, these findings illustrate the importance of the RB pathway in sarcomagenesis.
Conclusions
The vast differences in the cellular origins of sarcomas, the limited availability of tumour specimens, and the heterogeneity inherent within individual tumours have impeded our ability to fully understand the biological characteristics of mesenchymal tumours. However, given the availability of numerous genetic knock-outs, knock-ins, and conditional alleles, coupled with the bevy of tissue-specific Cre-recombinase-expressing mouse lines, we now have the ability to systematically and prospectively interrogate how individual genes and mutations impact sarcomagenesis. Going forward, tumour analyses from multiple murine-derived tumour types can be compared and contrasted in order to identify critical changes in specific sarcomas. Molecular approaches have clearly demonstrated that while there are driver mutations/translocations, sarcomagenesis is, in fact, a multi-hit disease. The use of several mouse models mimicking the human disease symptoms may lead to the identification of critical therapeutic approaches that can be taken to lessen the impact of these debilitating diseases [18, 23, 24]. Human mesenchymal tumours, including Ut-LMS, are refractory to chemotherapy and have a poor prognosis. The molecular biological and cytological information obtained from mouse tissues and human clinical materials will contribute remarkably to the development of preventive methods, potential diagnostic biomarkers, and new therapeutic approaches against human mesenchymal tumours.
|
2019-04-02T13:07:18.471Z
|
2015-10-19T00:00:00.000
|
{
"year": 2015,
"sha1": "01aac48f20d41b621259f8421a1a517ef7f438b2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.24947/baojcrt/1/3/00113",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f65e0ffb8e9437fe7f85d4b009d3b705ce895736",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
1582176
|
pes2o/s2orc
|
v3-fos-license
|
Chronic therapeutically refractory angina pectoris
In contrast to the tremendous increase in practical opportunities and theoretical knowledge in the western world, which has resulted in a doubling of our average life expectancy, the number of diseases has not been reduced during the last 150 years. This apparent contradiction, secondary to changes in environmental influences and sociocultural developments such as smoking habits, sedentary lifestyles, fast food meals, stressful jobs, etc, has led to the introduction of newer definitions for illnesses. From this perspective, the illness pattern has changed during the last century and a half from infectious diseases to more culturally influenced diseases, such as coronary artery disease (CAD). However, to date, despite thorough research, no evidence is available that these cultural influences are the sole initiators of the process of atherosclerotic plaque formation. Although plaque formation is the result of a variety of processes culminating in CAD, the search for the initial trigger is ongoing. During the last few years, more and more evidence has become available that an inflammatory response plays a key role in atherosclerotic plaque formation leading to CAD.
These mediators of inflammation interact with nervous signalling transduction pathways arising from the environment of the atherosclerotic plaque. The inflammatory substances released during myocardial ischaemia are relevant to progression of the atherosclerotic process in the narrowed coronary arteries. In contrast, the recruited nervous and neurohumoral pathways during cardiac ischaemic challenges are thought to be involved in maintaining the integrity of the myocytes.
Subsequently, myocardial ischaemia, angina pectoris signalling pathways, and neurohumoral and inflammatory responses are considered to be key players in atherosclerotic heart disease.
This article discusses newer insights into the pathophysiology of chronic (refractory) angina pectoris, resulting from stable atherosclerotic CAD, and suggests some potential additional treatments.
### Atherosclerosis
During the initial phase of the atherosclerotic process, the vascular wall thickens in conjunction with luminal dilatation (that is, vascular remodelling). Although in stable situations many (risk) factors determine the velocity of progression to plaque formation, an occlusion resulting from an unstable plaque may already occur before a critical stenosis in a coronary artery has formed. Given the gradual reduction in the luminal diameter of a coronary artery caused by atherosclerotic plaque formation, the increased oxygen demand of the heart during exercise ultimately results in a perturbation of the balance between myocardial oxygen consumption (O2 demand) and coronary blood flow (O2 supply), the so-called ischaemic threshold.1 Subsequently, during a physical workout, the ischaemic threshold is reached at the moment coronary flow and myocardial oxygen consumption become disproportionate. Pharmacological treatments and revascularisation procedures are meant to improve this ischaemic threshold, by reducing the myocardial oxygen demand or by improving the oxygen supply to the myocardium.
Recruited pathways following myocardial ischaemia
Before the disturbance in oxygen supply and demand balance occurs in the heart, in stable circumstances and predominantly during physical exercise, sensitisation of high threshold nerve endings in the myocardium takes place through a variety of metabolic substances such as potassium, lactate, adenosine, bradykinin, and prostaglandins. 2 These substances induce various kinds of sensations such as fatigue, muscle pain, and shortness of breath. The cardiorespiratory threshold that limits exercise is determined by metabolic excitation-contraction (Na⁺–K⁺) alterations inducing muscle tension, sensation of weakness or fatigue, and finally the responsiveness of motor neurone commands. 3 The responsiveness of motor neurones, depending on impulses relayed by the spinothalamic tract to medullar neurones of the cardiorespiratory centres, is influenced by central (that is, reflexes in the spinal cord) and peripheral factors (that is, receptors in the muscles). Furthermore, exercise in cardiovascular compromised patients initiates stimulation of the β sympathetic nervous system, resulting in partial counteraction of the limited exercise by an increased heart rate and ventilation frequency. However, the adaptive effect to ischaemic challenges in humans appears to be coordinated by α adrenergic receptors. 4 It has been reported that a specific Gi protein, connected to the α2 receptors, induces atherosclerotic related impairment of endothelium dependent relaxation. Furthermore, α receptors, localised on primary afferents, sympathetic postganglionic neurones, and dorsal laminae of the spinal cord and of the brainstem, are involved in analgesia and play a role in vasomotor control. In brief, the role of α adrenergic receptors in myocardial ischaemia, by controlling vasomotor tone, is well established.
The role of the vagus nerve in processing noxious cardiac information remains controversial. While both vagal and sympathetic afferent fibres contribute to the increased activity of spinothalamic tract cells and spinal neurons in the C1-C3 segments of the spinal cord, vagal stimulation is a more potent stimulus. Activation of vagal afferent fibres can modulate the processing of information of the thoracic spinothalamic tract cells receiving afferent input from the heart, by activating supraspinal pathways and nuclei. Contrary to the idea that activation of vagal afferent fibres may lead to visceral pain, except in the neck and jaw regions, the vagal afferents may serve as an important rapid signalling pathway for communicating the immune changes from the periphery to the areas in the brain that respond to infection and inflammation. Infection and inflammation elicit the production of vasoactive and neurohumoral compounds.
Depending on the integrity of the vagal afferent pathway, the release of inflammatory cytokines like interleukin (IL)-1, IL-6, IL-1β, and tumour necrosis factor α triggers several systemic responses. This reaction induces alterations in pain sensitivity and metabolism, hyperthermia, and increased release of adrenocorticotropin, glucocorticoids, and liver acute phase proteins. Furthermore, vagal afferent stimulation activates the hypothalamus-pituitary-adrenal axis. Finally, the activation of this vagal pathway to supraspinal structures, such as the hypothalamus and the amygdala, may activate descending antinociceptive pathways that may provide protection of a visceral organ against local inflammatory reactions.
Based on this information, it is possible that vagal activation resulting from the release of cytokines might produce the inhibition of spinothalamic tract cells and spinal neurones in the thoracic segments. In summary, the vagal afferent pathway to supraspinal structures might be important for eliciting the immune responses resulting from systemic infections and inflammation, and might not be the pathway that contributes to the perception of angina pectoris.
Angina pectoris
The clinical manifestations of angina pectoris are typically provoked through exercise and abate during rest. Usually the patient suffering from effort angina can predict the amount of physical exercise that causes his or her angina. At maximal exercise the faltering blood flow through the diseased coronaries implies a fixed narrowing (stenosis) in the coronary arteries. At rest, the anginal threshold is influenced by, among other factors, emotional stress, exposure to cold weather, superfluous meals, and smoking. These aspects suggest a dynamic stenosis in a coronary artery. As a consequence, the variability of the anginal threshold is determined by the interplay between the fixed and the variable obstruction in the coronary arteries. Angina pectoris is not a very specific indicator for occlusive coronary disease, since it is a relatively late, inconsistent, and non-specific phenomenon. In contrast, in the sequence of events resulting from myocardial ischaemia, angina pectoris is a sensitive parameter. Myocardial ischaemia may occur in the presence of at least 60% narrowing of the diameter of a coronary artery, while anginal complaints may begin when the stenosis is already more than 75%. Only 75 years ago the momentary angina "pain" was linked to myocardial ischaemia. 5 To date, from anatomical, pathophysiological, and neurocardiological viewpoints, angina pectoris is considered to be the symptomatic result of ischaemic atherosclerotic coronary artery disease, associated with an impaired residual coronary blood flow (reserve). In a patient suffering from exercise induced angina pectoris this ischaemic threshold is determined by systolic blood pressure, heart rate, and contraction force. 6 During exercise, patients with advanced CAD often experience a crushing, constrictive, suffocating discomfort, usually in the upper substernal area, sometimes radiating to adjacent areas (predominantly the left side), such as arms, neck, throat, jaw, and teeth. The provoked visceral nociception is characterised by its vaguely distributed, "emotionally" charged aspects, and the influence of emotions on the experience of the anginal pain. The "vaguely" localised and "loaded" nociceptive information of the elicited angina pectoris is conveyed by visceral afferent nerve fibres, following sensitisation of cardiac (C and A delta) nerve endings. 7 Sensitisation of multireceptive nerve endings is believed to be effected through substances such as adenosine and prostaglandins. The latter sensitise cardiac sympathetic afferents for bradykinin. Other vasoactive and neurohumoral substances involved in ischaemic pain are endorphins, vasoactive intestinal peptide, γ-aminobutyric acid, neuropeptide Y, and serotonin. 8 Transmitters are released by pressure (stretch), infection (irritation), nervous and chemical stimuli, and (myocardial) ischaemia.
After induction of the stimulus, a neuro-hierarchic complex of gating at multiple levels, overlapping receptive fields, and ascending and descending nervous pathways modulates the propagation of information to the cortex and hence determines the ultimate nociception. The involvement of the limbic system, and predominantly the (hypo)thalamic area, in the perception of angina pectoris has recently been demonstrated, making use of positron emission tomography. 9 In this perspective it is illustrative that mental stress induced and physical exercise induced myocardial ischaemia produce the same alterations in higher brain centres. Moreover, the hierarchical organisation of the nervous system enables it to settle with compromised balances and so restore the integrity of cardiomyocyte function. Consequently, the afferent and efferent cardiac nervous system may be considered as a hierarchical nervous loop from which one limb is interacting with the other. In addition to this neural feedback pathway, a (neuro)humoral circuit is suggested. In response to stress, the efferent humoral pathway induces the release of glucocorticoids, noradrenaline (norepinephrine), and adrenaline (epinephrine). The humoral afferent limb has only recently been postulated and links the ischaemic heart to cognitive brain centres via cytokines (fig 1). 10
CHARACTERISTICS OF PATIENTS WITH (CHRONIC REFRACTORY) ANGINA PECTORIS
Patients with chronic refractory angina lead severely restricted lives and perform only limited activities.
Moreover, the psychological stress caused by awareness of the increased risk of a myocardial infarction often places an additional burden on the patient and his or her family. These patients are usually characterised by a long history of coronary artery disease. During this part of their life, patients have therefore often experienced numerous hospital admissions, caused by an acute worsening of their coronary artery disease expressed as either a period of unstable angina or a myocardial infarction.
Treatments that reduce these patients' angina not only improve their quality of life but will also ameliorate their psychosocial status. In addition to antianginal medication, they have often been treated with one or more percutaneous transluminal coronary angioplasty (PTCA) procedures or coronary artery bypass graft surgery (CABG). Most patients suffering from chronic refractory angina pectoris are relatively young, predominantly male, with a moderately impaired left ventricular ejection fraction and elevated fibrinogen values 11 12 (table 1). The increased fibrinogen is most likely to be an epiphenomenon, related to chronic inflammation induced by coronary artery disease.
However, an increasing number of patients, surviving various ischaemic events, are suffering from chronic angina pectoris, most likely as a result of sensitisation (that is, a reduced pain threshold), refractory to conventional strategies. The prevalence is estimated to be 100 000 patients in the USA, with an equal number in Europe. 13 Their angina is considered to be refractory when, despite optimal antianginal pharmacological treatment and the presence of persistent reversible myocardial ischaemia, revascularisation is no longer feasible. Patients suffering from angina pectoris, resistant to conventional treatments, may be considered as survivors of their coronary artery disease. Since they are invalided by their anginal pain and no conventional treatment options remain, these patients have unmet medical needs. Subsequently, any additional treatment that relieves their complaints without adversely affecting their chronic disease is worth taking into consideration. The argument for focusing attention on improving these patients' quality of life is particularly valid with respect to the prognosis of those who survive with end stage heart disease for a long period of time, as expressed in the low annual cardiac mortality of about 5%. 11
TREATMENT FOR ANGINA
In addition to improvement in lifestyle, the conventional way to improve myocardial ischaemia is by either reducing the oxygen demand (b blockers, calcium channel blockers) or by improving the supply (nitrates, revascularisation procedures such as PTCA or CABG). Additive measures, such as lipid lowering, inhibition of platelet aggregation, and interference in the renin-angiotensin system have become established treatments for stable angina pectoris. In the vast majority of patients these strategies are sufficient to control the symptoms.
Adjunctive treatments for chronic refractory angina pectoris
If conventional treatments fail to control the patient's condition, many adjunctive therapies are available. In general, four types of additional treatment can be offered to patients with therapeutically refractory angina (fig 2).
First, the application of additional medication, administered either systemically, such as cordarone, chelation, opioids, and (intermittent) urokinase, or locally, such as intrathecally applied anaesthetics or opioids. In general, the use of adjunctive medication for long term treatment is withheld because of its drawbacks (opioids), because it is only suitable for short term application (intrathecally applied anaesthetics), because it is costly (urokinase), or because it has not proven to be effective for this indication (chelation, cordarone). Medications targeting inflammation and thrombosis are considered to be more potent options in the near future.
Second, treatments aimed at improving myocardial perfusion, by means of a rehabilitation programme or by affecting the haemodynamic system. The trade-off of the beneficial effects of cardiac rehabilitation programmes on cardiac performance is the need for continuation of the programme. 14 Angina pectoris may also be treated by enhanced external counterpulsation. This method is directed at diastolic augmentation of blood flow in the coronary arteries through an increase in aortic retrograde blood flow, induced by compression of cuffs that are wrapped around the legs. Recently, enhanced external counterpulsation has been reported to be effective in improving myocardial perfusion during stress in patients with chronic stable angina. 15 However, experience is limited and the equipment costly.
Third, modulation of the nervous system. The nervous system can be modulated through spinal cord stimulation or transcutaneous electrical nerve stimulation. Neuromodulation appears to be one of the most successful adjunctive treatments. It is a reversible therapy and has been reported to be effective, without concealing angina pectoris during an acute myocardial infarction. The beneficial effects of neuromodulation, expressed in a reduction in the number and severity of anginal attacks in conjunction with an improvement in exercise capacity and quality of life, have been reported to last for several years. Evidence that spinal cord stimulation exerts an additional anti-ischaemic effect is provided by studies on exercise testing, ambulatory ECG monitoring, positron emission tomography, and coronary flow measurements. The explanation for the reduction in myocardial ischaemia may be a homogenisation of the myocardial perfusion. 16 Furthermore, there is evidence that electrical stimulation of the dorsal aspect of the spinal cord stabilises the intrinsic cardiac nervous system and may therefore prevent deleterious consequences, such as electrical instability of the ventricles. 17 Finally, research performed by Kanno and colleagues in 1999 may provide new insights into the influence of electrical stimulation on the concentration of vascular endothelial growth factor (VEGF). From their investigations on low intensity (10% of contraction threshold) electrical stimulation in ischaemic hind paw muscles of rabbits and in muscle cells in vitro it may be concluded that the VEGF mRNA concentration after stimulation is increased significantly. Denervation of the heart by endoscopic transthoracic sympathectomy has also been reported in a very limited number of publications over the last decade. The drawback of these destructive experimental treatments is a relatively high mortality and morbidity, ranging from 5-10%. 18 Arbitrarily, these latter adjunctive treatments could also be classified into the next category.
Fourth, treatments aimed at vessel formation through upregulation of vascular endothelial growth factors inducing angiogenesis, making use of stem cells, or applying either transmyocardial laser revascularisation (TMR) or percutaneous myocardial laser revascularisation (PMR). Restoration of function by means of angiogenesis is a stepwise experimental procedure, best studied by making use of gene therapy. Gene therapy can be applied by direct intramyocardial injection of naked DNA encoding for VEGF, a heparin binding glycoprotein, as well as adenoviral transfection with VEGF. Regulation of VEGF mainly takes place via oxygenation of tissues. Ischaemia enhances both the expression and production of VEGF. Furthermore, since an increased concentration of VEGF mRNA has been demonstrated in ischaemic tissues, this suggests a negative feedback system. When oxygen concentration in the tissues increases, VEGF gets down regulated. At the onset of the angiogenesis process, endothelial cells produce metalloproteinases to digest the basement membrane. Next, the endothelial cells may disconnect from the basement membrane, and are able to migrate, proliferate, and form a network of "endothelial tubes". To become functionally important the vessels then need to mature. During the following arteriogenesis, nascent vessels become extensively covered by a muscular coat creating blood vessels with viscoelastic and vasomotor properties. 19 Studies on gene therapy have demonstrated a remarkable improvement in flow to ischaemic areas in peripheral arteries as well as in the heart. 20 Although the clinical results are encouraging, there is a need for further validation in placebo controlled trials. With respect to angiogenesis most concerns relate to the vehicle, usually a genetically manipulated virus, delivering the growth factor. Though the long term effects of these genetic therapies are not yet known, the vehicle issue should be of minor concern in the case of plasmid based delivery. In conclusion, gene therapy induced new blood vessel formation, making use of angiogenetic growth factors, is a recent and promising development.
TMR and PMR are meant to improve the flow through the myocardium by channelling with laser beams. Mirhoseini was the first to advocate direct laser therapy of the myocardium as a treatment for refractory angina pectoris, in 1981. The idea initially was to create transmural channels from the left ventricular cavity into the myocardial muscle to improve myocardial perfusion. Although some animal studies have suggested patency of lasered channels, most recent studies and necropsy reports showed occlusion of the lasered channels within one day, making neo-"revascularisation" as a mechanism of action unlikely. Denervation of the heart, or laser induced angiogenesis with subsequent collateralisation causing improvement of perfusion, is also not proven. Initially the myocardium was lasered from the epicardial side during heart surgery, both as an adjunct to bypass surgery and as a stand alone procedure. Early studies showed a high postoperative mortality. Randomised controlled studies comparing laser therapy with medical treatment reported inconsistent findings. The majority showed a reduction of anginal complaints, some an improvement in exercise capacity, and only one study demonstrated an improved perfusion. Developments in catheter based technology made it possible to deliver the laser energy from the endocardial side. Preliminary data show that efficacy is in the same range as surgical based laser therapy. However, in view of the unknown underlying mechanism of action, to date laser therapy is not recommended for this subset of patients. Finally, heart transplantation is not considered a feasible treatment for this group of patients.
Some of the discussed adjuvant treatments have class 2A or class 2B indications, according to the recent American Heart Association/American College of Cardiology guidelines. 21
CONCLUSIONS
The number of patients suffering from angina pectoris chronically resistant to conventional treatments is increasing. Patients with chronic refractory angina differ from the ordinary angina patient in three ways: first, patients with chronic refractory angina pectoris maintain their left ventricular function despite severe three vessel disease; second, they do not experience severe arrhythmias and therefore their mortality is only about 5%; and third, their angina is debilitating. New and often promising treatments for this condition are worth taking into consideration.
Analysis of design features and test results of fractional grain cleaners
Stable preservation of sown areas for agricultural production in the Russian Federation is crucial for increasing the gross grain yield. Only high-quality seeds with a low level of injury during harvesting and post-harvest processing will provide a significant increase in the yield of crops. Weediness of the grain heap decreases the yield by up to 40–60%. According to long-term data, weediness of the grain heap in the natural and climatic regions of the CIS is 6.0–15.0%. Moisture during threshing is 16.0–20.0%, and in unfavorable years it can reach 22.0–25.0%. A safe storage period for such a heap is fairly limited and may amount to only several hours. Impurities of organic origin with a moisture content of 50.0–80.0% have a negative effect on the safe storage of the grain heap and the quality indicators of seeds. The studies carried out during secondary cleaning show that the OZF-50 and OZF-80 machines provide the required productivity of 10.27 t/h and 20.40 t/h, respectively. With this productivity, the main crop content is 99.22 and 99.61%. The content of weed seeds is 3 and 5 pcs/kg, while the grinding of grain meets the technical specifications and attains 0.12 and 0.15% for the above machines. The results of regular periodic tests of the new generation OZF machines show that all operational, technological, and performance indicators of the machines meet the requirements of technical specifications. The developed new fractional grain cleaners ensure the production of original seeds in accordance with GOST R 52325-2005.
Introduction
The requirements for the varietal, food and sowing qualities of grain and seeds are regulated by the relevant standards. In particular, the national standard of the Russian Federation 'Varietal and Sowing Qualities' (GOST R 52325-2005) applies to seeds of agricultural plants. Table 1 summarizes the standard requirements for the quality of wheat seeds as a reference crop. The seeds are not allowed for sowing if they exhibit:
- weeds (seeds, fruits), pests and pathogens of quarantine significance for the Russian Federation according to the list approved in a prescribed manner;
- live pests and their larvae that damage seeds of the corresponding crop, except for ticks allowed by the standards in the amount not more than 20 pcs/kg;
- seeds of poisonous plants: pubescent heliotrope and the gray-haired trichodesma.
The purchase and consumption of safe, high-quality and natural food products are of main interest to consumers. At the same time, the main goal of producers is to obtain maximum income in the competitive production of products. Food, especially wheat, plays a leading role in the world politics of any state. Grain export imposes new requirements on grain quality. Stable preservation of sown areas for agricultural production in the Russian Federation is the main way to increase the gross yield of grain. Only high-quality seeds with a low level of injury during harvesting and post-harvest processing will provide a significant increase in the yield of crops. Weediness of the grain heap leads to a decrease in the yield by up to 40–60%. According to long-term data, weediness of the grain heap in the natural and climatic regions of the CIS is 6.0–15.0%. Moisture during threshing can be 16.0–20.0%, and in unfavorable years it can reach 22.0–25.0%. The safe storage period for such a heap is only some hours. Impurities of organic origin with a moisture content of 50.0–80.0% have a negative effect on the preservation of the grain heap and the quality indicators of seeds [1][2][3][4][5].
Materials and methods
Untimely preliminary processing of the grain heap coming from the combine harvesters decreases the sowing and commercial qualities of seeds and grain. A high level of injury and insufficient isolation of weeds and biologically defective grain contribute to the rapid development of pathogens [6][7][8][9][10].
Results and discussion
To implement the proposed ideas, the scientists from VASU for the first time proposed a method for fractionating a grain heap at the earliest stage of its post-harvest processing, which was embodied in the design of these machines and protected by the RF patent No. 2264068.
A new sieve arrangement has been proposed that provides a 1.5–2.0-fold increase in the area of the sorting sieves for a given total sieve area, separation of feeble, crushed and biologically defective grain, as well as improved machine productivity and lower energy and metal consumption. The proposed technical solutions are protected by RF patents No. 43798, 63715, 135543, 189918, 189555, 2708970, 185732 and implemented in the OZF-80 and SVS-30 grain cleaning machines.
Improved identification of lightweight impurities is proposed in the utility model patent 'Device for post-sieve cleaning of grain heaps' (patent No. 68373) and implemented in the design of the OZF separators. It is achieved through a two-stage cleaning of the grain heap, in both the unloading and the pneumatic separation channels, at an air flow rate not exceeding the grain hovering rate, which reduces energy costs by preventing air leaks. During secondary cleaning of grain, with separation of biologically defective and crushed grain and difficult-to-separate impurities, a doubled air flow rate and similar air flow rates are used in the separation zones of the discharge and pneumatic separation channels.
The method for pneumatic separation of grain materials and a device for its implementation (patent No. 2457047) improve the quality of separation of the grain heap.
A dual-aspiration system of the universal grain cleaning machine (patent No. 2366518) reduces the energy consumption for the drive of the diametrical fan rotor of the OZF grain cleaning machine and quickly sets the required air flow rates in the pneumatic separation channels of the first and second aspiration during preliminary, primary and secondary cleaning of the grain heap.
The aspiration system of the grain cleaning machine (patent No. 2298441) provides technologically independent air flow rates in the pneumatic separation channels due to a diametrical fan with adjustable rotation frequency and special air intake windows with a regulation mechanism.
The use of a stamped perforated reflective surface and a flat sieve cleaner manufactured according to patent No. 2298440 dated 10.05.2007 improves the quality of sieve cleaning and provides a sieve cross-section ratio of at least 0.95, which increases the completeness of separation.
The use of the original device of a ball cleaner manufactured according to patent No. 2326745 in the design of grain cleaning machines for post-harvest processing of grain heaps will reduce 'dead zones'. This technical solution will improve the quality of cleaning sieves in sieve mills of grain cleaning machines, ease the maintenance and increase the productivity of new machines.
The use of a device for gravitational distribution of bulk materials in the design of grain separators (patent No. 2404864) will increase the efficiency of uniform introduction of the grain heap into the pneumatic separation channel along the width of the grain cleaning machine and reduce the level of grain injury.
A new design of the sieve for the sieve mill of the grain cleaning machine manufactured according to patent No. 139851 will increase the efficiency of separating the grain mixture into fractions and the identification of impurities.
In accordance with the state assignment for compliance with the technical specifications, periodic tests were carried out for grain cleaning machines developed by the scientists from the Department of Agricultural Machines, Tractors and Cars, Voronezh State Agrarian University. The machines were manufactured by Oskolselmash LLC. Data on the main indicators of state periodic tests of OZF machines provided by the CCh MTS are presented in Tables 2 and 3.
The operational and technological assessment of the OZF-80 machine was carried out as part of the ZAV-40 grain cleaning unit on a pile of Augustina winter wheat in the Glinnoye department of Krasnoyaruga grain company, and the OZF-50 machine was tested on a pile of Almera winter wheat at O. V. Kormakov IE in Novooskolsky district, Belgorod region [11,12].
The moisture content of the initial heap for all types of cleaning corresponded to the requirements of technical specifications. The weed content in the initial material corresponded to the regulatory requirements, 1.99% and 0.82% (according to the requirements of technical specifications, it is up to 5.0% for preliminary and up to 3.0% for primary cleaning). The content of weed seeds is 73 pcs/kg (according to technical specifications, up to 100 pcs/kg). No other crops were found in the initial pile of seeds. All other indicators of the tested grain heap meet the corresponding specifications. Data presented in Table 2 show that the OZF-50 and OZF-80 machines for preliminary cleaning of winter wheat, at productivities of 50.66 and 80.50 t/h, respectively, provide good quality of grain heap separation. At the same time, the amount of main crop grain in the unused waste and in the fodder fraction was 0.38% and 1.91% for the OZF-50 and 0.38% and 1.92% for the OZF-80, which meets the requirements of technical specifications. The grinding of grain, 0.16% and 0.18%, respectively, is also below the existing requirement of 2.0%.
The studies carried out during primary cleaning of the winter wheat grain pile confirm the rated (passport) productivity of the OZF-50 separator, with values equal to 25.31 and 40.23 t/h. At this productivity, the grain purity is 97.88 and 98.0%, respectively, the grinding of grain is 0.15 and 0.17%, and the amount of the main crop in the fodder fraction is 2.99 and 3.61%. In the technical specifications, these indicators are higher than those obtained during the periodic tests.
The real-life impact of vaccination on COVID-19 mortality in Europe and Israel
Objectives: This study aimed at estimating the real-life impact of vaccination on COVID-19 mortality, with adjustment for SARS-CoV-2 variants spread and other factors across Europe and Israel.
Study design: Time series analysis.
Methods: Time series analysis of the daily number of COVID-19 deaths was performed using non-linear Poisson mixed regression models. Variables such as variants' frequency, demographics, climate, health, and mobility characteristics of thirty-two countries between January 2020 and April 2021 were considered as potentially relevant adjustment factors.
Results: The analysis revealed that vaccination efficacy in terms of protection against deaths was 72%, with a lower reduction of the number of deaths for B.1.1.7 vs non-B.1.1.7 variants (70% and 78%, respectively). Other factors significantly related to mortality were arrivals at airports, mobility change from the prepandemic level, and temperature.
Conclusions: Our study confirms a strong effectiveness of COVID-19 vaccination based on real-life public data, although lower than expected from clinical trials. This suggests the absence of indirect protection for non-vaccinated individuals. Results also show that vaccination effectiveness against mortality associated with the B.1.1.7 variant is slightly lower than that with other variants. Lastly, this analysis confirms the role of mobility reduction, within and between countries, as an effective way to reduce COVID-19 mortality and suggests the possibility of seasonal variations in COVID-19 incidence.
Introduction
The pandemic of the coronavirus infectious disease 2019 (COVID-19) is continuously evolving, driven by the spread of new variants of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). During the second half of 2020 and early 2021, a variety of new SARS-CoV-2 variants emerged. The EU2 variant (mutation S:S477N), first observed in July 2020 in Western Europe, was found to be capable of increasing virus infectivity. 1,2 Then, several variants of concern (VOCs) were identified, including B.1.1.7, which developed first in the UK in September 2020, 3 B.1.351 in South Africa in December 2020, 4 P.1 in Brazil in January 2021, 5 and the 'Indian' variant B.1.617 reported first in Maharashtra in January 2021. 6 Disease mortality increased in these countries after the new variants developed. 7–10 An increased risk of transmissibility, hospitalization, and death associated with the B.1.1.7 variant was reported by a number of authors. 8,11–16 The B.1.351 variant was found to have increased transmissibility and immune escape 17 and was estimated to be 50% more transmissible than pre-existing variants. 18 A higher incidence of COVID-19 cases in younger age groups was observed in the Amazonas state, suggesting changes in pathogenicity of the P.1 variant. 19 Preliminary findings also suggest a significant increase in the case fatality rate in the young and middle-aged population for the P.1 mutant. 20 The region of Maharashtra, where the B.1.617 variant emerged, experienced a significant rise in the daily infection rate after the new variant appeared. 10
Clinical evidence suggests that newly developed virus variants may affect the protective efficacy of both naturally acquired immunity and vaccinations. Studies on neutralization of convalescent sera against distinct strains showed that VOCs were harder to neutralize than the original strain, an early Wuhan-related strain of SARS-CoV-2. Neutralization titers against the B.1.1.7 variant showed a threefold reduction, 30 a 3.4-fold reduction was observed for the P.1 variant, 31 and a 13.3-fold reduction for the B.1.531 variant. 32 Johnson & Johnson vaccine was found to have 64% efficacy against infection in South Africa and 68% in Brazil after the spread of B.1.135 and P.1 variants, whereas the efficacy against severecritical disease was 82% and 88% in both countries. 28 10 The other concern is the probability of reinfection after recovery or vaccination. Hansen et al. 39 observed an 80.5% protection against reinfection in a population-level observational study on Danish patients previously tested positive for SARS-CoV-2; however, the study was performed before VOCs spread. The probability of reinfection after vaccination is also a big concern. As reported by the US Centers for Disease Control and Prevention (CDC), there were around 9200 infections among vaccinated inhabitants among 95 million of those who have already been vaccinated in the USA (0.01%) as of 26 April 2021. 40 Despite these optimistic preliminary data, experts alarm that additional data are needed to assess the potential impact of VOCs on future vaccine efficacy. 41 Considering all the concerns associated with new VOC spread, the real vaccination effectiveness becomes hard to assess and judge but can be expected to decrease over time. Also, it is likely that vaccination may favor the emergence of new variants by selection of new, better fitted mutants. Some scientists suggest that, similarly as for seasonal flu vaccines, COVID-19 vaccines will need to be redesigned or even updated periodically to protect against new variants. 42,43 Vaccination efficacy and distinct variants spread are the only two factors among numerous other variables affecting COVID-19 infection and death rates across the world. A variety of potential predictors were assessed in the literature, including demographic characteristics, mobility and social-distancing measures, environmental and climate variables, as well as health characteristics. 44e53 This study aims at estimating the real-life impact of vaccination on COVID-19 mortality based on publicly available data from Europe and Israel, using time series analysis with non-linear mixed regression models. Variants frequency, including B.1.1.7 and other variants, as well as country-specific demographic and meteorological characteristics, health indicators, and mobility factors were considered as potentially relevant adjustment factors. Results of the current study should inform policy decision-makers, scientists, and the general public about the role of vaccination and socialdistancing strategies in controlling the COVID-19 pandemic in the face of new VOCs spread.
Data collection
A total of 32 countries were considered in the analysis, including European countries and Israel. The daily number of COVID-19 deaths was the primary outcome of interest. Values were smoothed using 7-day moving average, divided by the number of inhabitants of a given country and reported as daily numbers of deaths per 1 million inhabitants.
The main explanatory variables of interest were the proportion of vaccinated inhabitants (vaccination coverage), as well as average proportions of SARS-CoV-2 variants calculated across strains forming 12 Nextstrain clades. The focus was on the 20A (EU2), 20E (EU1), and 20I (B.1.1.7) variants, with the former two being dominant in Europe during the summer of 2020 and the latter VOC being most frequent in early 2021. Other time-varying covariates were maximum daily temperature, mean daily wind speed, the number of arrivals at the two biggest airports of a country, and change in mobility from the prepandemic level (considering the average across retail/recreation, transit stations, and groceries/pharmacies). Additional fixed covariates were the proportion of population aged 65 years or older, prevalence of diabetes, and rate of cardiovascular deaths.
Data on COVID-19 deaths and vaccination were obtained from Our World in Data on 15 April 2021. 24 Metadata on SARS-CoV-2 virus variants (clades) identified up to mid-April 2021 were downloaded from the Nextstrain platform. 54–56 We assumed that if a strain was observed on a given date, it could be observed in a range of ±14 days from the observation date. Because the data were not reported daily, linear interpolation was used to impute missing observations, assuming zeros a month before the first and after the last (if up to 1 March 2021) reported occurrence of a variant. Finally, data were smoothed with the use of 14-day moving average.
Countries' characteristics were obtained from Our World in Data, Eurostat, the National Centers for Environmental Information, Aviation Intelligence Portal, and Google COVID-19 Community Mobility Reports. 24,57–60 Data on arrival flights and mobility were smoothed using 7-day moving average.
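As an illustration of the preprocessing described above, the sketch below shows how daily deaths could be smoothed with a 7-day moving average and expressed per 1 million inhabitants, and how sparsely reported variant proportions could be linearly interpolated and then smoothed with a 14-day moving average. This is a minimal pandas sketch with made-up numbers; the column names and values are illustrative only and are not the study's actual data.

```python
import numpy as np
import pandas as pd

# Hypothetical daily records for one country: reported deaths, population,
# and sparsely reported variant proportions (all values are illustrative).
df = pd.DataFrame({
    "date": pd.date_range("2021-01-01", periods=28, freq="D"),
    "deaths": [40, 42, 38, 55, 60, 57, 41, 44, 50, 52, 61, 63, 58, 45,
               47, 53, 55, 66, 68, 62, 49, 51, 58, 60, 70, 72, 66, 52],
    "population": 10_000_000,
})

# Variant share is only reported on some days; the rest are left missing.
df["b117_share"] = np.nan
df.loc[[0, 7, 14, 21, 27], "b117_share"] = [0.05, 0.10, 0.18, 0.30, 0.40]

# 7-day moving average of deaths, expressed per 1 million inhabitants.
df["deaths_per_1m_7d"] = (
    df["deaths"].rolling(window=7, min_periods=1).mean()
    / df["population"] * 1_000_000
)

# Linear interpolation of the sparsely reported variant share,
# followed by a 14-day moving average, mirroring the described preprocessing.
df["b117_share_smoothed"] = (
    df["b117_share"].interpolate(method="linear")
    .rolling(window=14, min_periods=1).mean()
)

print(df[["date", "deaths_per_1m_7d", "b117_share_smoothed"]].tail())
```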
Statistical analysis
Regression analysis was used to investigate the association between COVID-19 mortality and daily reported time-varying variables and fixed covariates.
The primary analysis of the daily number of COVID-19 deaths was performed with the use of non-linear Poisson mixed model with random country-level intercept and mobility effect. The considered period was from the date of the first reported death in Europe, 29 January 2020, up to 15 April 2021.
Owing to the presence of autocorrelations, and to consider the fact that the number of infections on a given day is dependent on the number of infectious cases in the population over previous days which translates into the respective number of deaths, the model was adjusted for the logarithm of the daily number of COVID-19 deaths reported 7 days earlier. To capture the fact that increasing or decreasing trends in COVID-19 mortality over time are generally stable over several weeks or months, the logarithm of quotient of COVID-19 deaths 7 days before divided by deaths 14 days before the actual date was added as a covariate. All other time-varying variables were considered with a 21-day lag, to account for the virus incubation period, assuming 7 days from contact to symptoms onset, and a delay between symptoms onset and death due to the disease, assuming another 14 days. In addition, heterogeneity between countries was considered with random intercepts and mobility effects varying between countries.
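A minimal sketch of how these lagged terms could be constructed for one country's series is given below, assuming a pandas DataFrame with hypothetical column names such as 'deaths_per_1m', 'temperature', or 'mobility_change'. The published analysis was run in SAS, so this is only an illustration of the feature construction, not the authors' code.

```python
import numpy as np
import pandas as pd

def add_model_terms(country_df: pd.DataFrame) -> pd.DataFrame:
    """Add the lagged terms described in the text to one country's daily series.

    Assumes a smoothed 'deaths_per_1m' column plus time-varying covariates;
    the column names used here are illustrative only.
    """
    out = country_df.copy()
    eps = 1e-6  # guard against log(0) on days with no reported deaths

    # Autoregressive terms: log of deaths 7 days earlier, and the log-ratio of
    # deaths 7 days earlier to deaths 14 days earlier (the recent trend).
    out["log_deaths_lag7"] = np.log(out["deaths_per_1m"].shift(7) + eps)
    out["log_trend_7_14"] = (
        np.log(out["deaths_per_1m"].shift(7) + eps)
        - np.log(out["deaths_per_1m"].shift(14) + eps)
    )

    # Other time-varying covariates enter with a 21-day lag: 7 days from
    # infection to symptom onset plus 14 days from onset to death, as assumed
    # in the main analysis.
    for col in ["temperature", "mobility_change", "arrivals", "vaccination_coverage"]:
        if col in out.columns:
            out[f"{col}_lag21"] = out[col].shift(21)

    return out.dropna()
```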
Assuming M indicates mortality with vaccination coverage c, M_o is the mortality without vaccination, and VE represents the vaccine efficacy, we have:

M = M_o \, (1 - VE \cdot c).

After applying the logarithmic transformation and considering a set of covariates x_1, …, x_k and random effects u_0, u_1, …, u_n on the intercept and the selected covariates x_1, …, x_n, this equation was extended as shown in the following to specify the non-linear model:

\log M = (\beta_0 + u_0) + \sum_{i=1}^{n} (\beta_i + u_i) x_i + \sum_{i=n+1}^{k} \beta_i x_i + \log(1 - VE \cdot c).

For the exploratory analysis, vaccine efficacy against B.1.1.7 and non-B.1.1.7 variants was analyzed using a similar approach. Assuming that there are two classes of virus variants with known proportions p_1 and p_2, the vaccine efficacy could be considered as the average efficacy weighted by the variants' proportions:

VE = VE_1 \, p_1 + VE_2 \, p_2.

The formula for the non-linear model is then as follows:

\log M = (\beta_0 + u_0) + \sum_{i=1}^{n} (\beta_i + u_i) x_i + \sum_{i=n+1}^{k} \beta_i x_i + \log\bigl(1 - (VE_1 \, p_1 + VE_2 \, p_2) \, c\bigr).

Additionally, three scenarios were tested as sensitivity analyses, varying either the time to symptoms onset or the time between symptoms onset and death. A detailed methodology is presented in Supplementary Materials.
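The sketch below illustrates, under simplifying assumptions, how the vaccine-efficacy term log(1 - VE·c) can be estimated by maximum likelihood in a plain Poisson model with a single covariate and simulated data. It omits the random country-level effects and the full covariate set of the published model, which was fitted in SAS; scipy is used here only to show how VE enters the likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated example: one covariate x, vaccination coverage c rising over time,
# true VE = 0.7. This only illustrates the log(1 - VE*c) term, not the
# published mixed model.
n = 500
x = rng.normal(size=n)
c = np.linspace(0.0, 0.6, n)
true_beta0, true_beta1, true_ve = 1.5, 0.3, 0.7
mu = np.exp(true_beta0 + true_beta1 * x + np.log(1 - true_ve * c))
y = rng.poisson(mu)

def neg_log_lik(params):
    b0, b1, ve = params
    log_mu = b0 + b1 * x + np.log(1 - ve * c)
    mu_hat = np.exp(log_mu)
    # Poisson log-likelihood up to a constant (the log y! term drops out).
    return -(y * log_mu - mu_hat).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0, 0.5],
               bounds=[(None, None), (None, None), (0.0, 0.99)])
b0_hat, b1_hat, ve_hat = res.x
print(f"estimated VE = {ve_hat:.3f}")
```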
A P-value lower than 0.05 was considered statistically significant. Akaike's information criterion (AIC) was reported as a measure of model fit. Analyses were performed using SAS, version 9.4, software.
Descriptive statistics
Descriptive statistics of outcomes and covariates across the 32 countries included in the analysis, for the period between January 2020 and April 2021, are presented in Table 1. Mean proportions of the SARS-CoV-2 variants EU2, EU1, and B.1.1.7 for each country are presented in Fig. 1. Until mid-April 2021, the EU2 variant was the most frequently spread in the vast majority of countries, except Israel and the UK, for which B.1.1.7 was more frequent, as well as Spain and Lithuania, where EU1 was more commonly observed.
Primary analysis
Analysis of the non-linear Poisson mixed model of the number of COVID-19 deaths revealed that vaccination effectiveness against mortality was significant and equal to 0.720 (P < 0.001; Table 2). Other covariates found significant in the model were temperature (−0.005, P < 0.001), arrivals at airports (0.709, P < 0.001), and mobility change from the prepandemic level (0.753, P < 0.001). Variables used to account for autocorrelation and minimize the effect of trend were assessed as significant (log of the number of daily COVID-19 deaths 7 days before: 0.926, P < 0.001; log of the number of COVID-19 deaths 7 days before/14 days before: 0.158, P < 0.010). The random intercept variance was statistically significant, which indicated significant unexplained variability between countries (0.014, P = 0.023).
Exploratory analysis
Results of the analysis of the exploratory model revealed numerically lower vaccine effectiveness against B.1.1.7 than against non-B.1.1.7 variants, although the difference was not statistically significant (0.697, P = 0.002 and 0.778, P = 0.049, respectively; Table 3). The same set of covariates was found significant in the exploratory model as in the primary analysis: temperature (−0.005, P < 0.001), arrivals at airports (0.703, P < 0.001), and mobility change from the prepandemic level (0.753, P < 0.001). Variables used to account for autocorrelation and minimize the effect of trend were assessed as significant (log of the number of daily COVID-19 deaths 7 days before: 0.926, P < 0.001; log of the number of COVID-19 deaths 7 days before/14 days before: 0.158, P < 0.010), as was the variance for random intercept (0.013, P = 0.025).
Sensitivity analysis
Sensitivity analyses yielded overall vaccination effectiveness estimates against mortality between 0.60 and 0.72 (Fig. 2). Regarding vaccination effectiveness associated with variants, two out of three scenarios provided consistent results with the main analysis, i.e., a trend towards lower effectiveness against the B.1.1.7 variant, and the opposite trend was observed in the remaining scenario (Fig. 2). Detailed results are provided in Supplementary Materials.
Discussion
In this study, we investigated the association between daily mortality due to COVID-19 and vaccination coverage, proportions of SARS-CoV-2 variants, and additional factors, such as demographics, health, mobility, and meteorological variables, analyzing country-level data across Europe and Israel. Results of the analysis suggest that vaccination effectiveness against deaths is equal to 72% and that it is slightly lower against the B.1.1.7 variant than against non-B.1.1.7 variants (difference not statistically significant). These findings suggest lower effectiveness against death than the reported efficacy against severe or critical disease course in clinical trials of vaccines (84–100%). 25–29 This lower-than-expected effectiveness might be explained by the difference in considered populations: clinical trials included restrictive populations, and our study covers general populations, irrespective of age, concomitant therapies, medical condition, and general condition. In particular, vaccinated people in real life are older on average than subjects enrolled in clinical trials (12.2% aged ≥55 years in the AstraZeneca trial; 24.7% aged ≥65 years in the Moderna trial; 33.5% aged ≥60 years in the Johnson & Johnson trial; 42.3% aged ≥55 years in the Pfizer trial). However, our results suggesting lower protection against the B.1.1.7 variant are consistent with data reported so far from in vivo experiments and patient-level studies, providing an external validation of these findings. Laboratory evidence revealed a slight reduction in neutralization against the B.1.1.7 variant compared with the original strain. Neutralization titers against this VOC were threefold lower when analyzing convalescent sera and 3.3-fold and 2.5-fold lower for Pfizer and AstraZeneca vaccinees, respectively. 30 Real-world studies on the B.1.1.7 VOC suggested that it caused increased mortality compared with non-B.1.1.7 variants, 15,61 which, therefore, might not have been contained with similar effectiveness by vaccination. Our results evoke the question of variants evading vaccine antibodies in the future and the need to adapt such vaccines for each new season, which was earlier suggested by experts. 42,43
While the utilization of individual-level data, collected in a real-world setting, could provide more precise estimates of vaccine effectiveness, the use of aggregate data at country level also has a major advantage: the vaccination impact estimated in this analysis should capture the indirect protection provided by vaccination. If the vaccine protects against infection, the number of infectious cases would decrease as more people are vaccinated. The lower number of infectious cases in the population would lead to a reduced probability for susceptible individuals to get in contact with infectious cases, thus leading to a reduction in incidence among all people, including non-vaccinated people. This indirect protection can be captured when comparing different populations with different rates of vaccination coverage, but could not be captured when comparing vaccinated and non-vaccinated individuals from the same population. Interestingly, the fact that our estimated vaccine effectiveness is relatively low compared with vaccine efficacy reported in clinical trials suggests that there is no or little indirect protection provided by vaccination. This could indicate that the vaccine protects against disease but not against infection, or that vaccinated groups of the population are not those that contribute to the propagation of the virus.
A positive relationship between the number of arrivals at airports and mortality has been observed in this analysis, as has a positive relationship between mobility change and mortality. This suggests that both increased long-distance travel and increased mobility are strong predictors of growth in the daily number of COVID-19 deaths. These findings highlight the role of mobility reduction, both within and between countries, as an effective way to reduce COVID-19 mortality, especially when new virus variants spread across the world. Our results are in line with a previous study by Jabłońska et al. 50 suggesting that countries with lower reduction in mobility at the beginning of the pandemic experienced a higher COVID-19 daily deaths peak. The role of social distancing was also underlined by Badr et al. 62 who observed a significant impact of mobility on COVID-19 transmission in the USA. The daily temperature was found to be a significant predictor of COVID-19 mortality in this study, with increasing temperature associated with a reduction in the number of deaths. Kerr et al. 63 found no consensus on the impact of meteorological factors on COVID-19 spread in their literature review; however, they suggested the existence of environmental sensitivity of COVID-19, although not as significant as non-pharmaceutical interventions and human behavior. Several authors underlined that disease seasonality may exist, 64–67 including Liu et al. 66 who found that COVID-19 infection and mortality rates were higher in colder climates and that the cold season caused an increase in total infections, while the warm season contributed to the opposite effect. Because our analysis covered a full annual cycle of COVID-19, our result suggests the possibility of seasonal variations in COVID-19 incidence. Such seasonality has been well established in temperate climates for other respiratory viruses. 68,69
Limitations
Our study has several limitations. First, our analysis was conducted on a country-level basis to estimate the vaccination efficacy, which should be seen as a less precise method than analysis of individual-level data, as previously noted. However, given the range of included countries, our results shed light on the problem of vaccination effectiveness from a broader perspective and investigate the effect of vaccination across societies, considering the variability of vaccination coverage through time and between countries. Second, the quality of data on variants distribution varied between countries and was low for some of them; therefore, results of the exploratory analysis should be treated with caution. To limit bias and avoid fluctuations, we used methods of interpolation and smoothing. Countries with limited data were excluded. Third, the set of covariates used in the multivariate analysis is not exhaustive. We decided to consider factors that were previously assessed as significantly impacting the risk of severe illness or mortality from COVID-19 in the literature. 44–53 Also, the significant random intercept observed in our models reflects unexplained between-countries variability resulting from the omission of influential variables. It was previously shown, for example, that COVID-19 mortality may be influenced by economic factors, which were not considered in this analysis. 46 Finally, we were not able to consider other new SARS-CoV-2 VOCs, except B.1.1.7, in the current analysis, owing to their limited spread in Europe as of April 2021. This raises the need for further research on this topic in the future.
Conclusions
This study confirms a strong effectiveness of COVID-19 vaccination based on real-life public data, with protection against deaths of around 72%, although it appears to be slightly lower than could be expected from clinical trial results. This suggests the absence of indirect protection for non-vaccinated individuals. Results also suggest that vaccination effectiveness against mortality associated with the B.1.1.7 variant is high but slightly lower than that against other variants (70% and 78%, respectively). Finally, this analysis confirms the role of mobility reduction, both within and between countries, as an effective way to reduce COVID-19 mortality and supports the possibility of seasonal variations in COVID-19 incidence.
Investigating the impact of the diseases of despair in Appalachia
Introduction: Appalachia is one of the regions most significantly impacted by the opioid crisis. This study investigated mortality due to diseases of despair within the Appalachian Region, with an additional focus on deaths attributable to opioid overdose.
Methods: Diseases of despair include: alcohol, prescription drug and illegal drug overdose, suicide, and alcoholic liver disease/cirrhosis of the liver. Mortality data from the National Center for Health Statistics (NCHS) National Vital Statistics System (NVSS) Multiple Cause of Death database were analyzed for this study, focusing on individuals aged 15–64.
Results: Over the past two decades, the mortality rate due to diseases of despair has been increasing across the United States, but the gap has widened between the Appalachian Region and the rest of the nation. In 2017, the combined diseases of despair mortality rate was 45% higher in the Appalachian Region than in the non-Appalachian United States. When looking at just overdose mortality, this disparity grows to 65% higher in the Appalachian Region. Within the Appalachian Region, disparities are most notable in the Central and North Central Appalachian subregions, among males, and among individuals aged 45 to 54.
Discussion: These findings document the scale and scope of the problem in Appalachia and highlight the need for additional research and discussion in terms of effective interventions, policies, and strategies to address these diseases of despair. Over the past two decades, mortality from overdose, suicide, and alcoholic liver diseases/cirrhosis has increased across the United States, but the disparity between Appalachia and the non-Appalachian U.S. continues to grow.
INTRODUCTION
The Appalachian Region is a 205,000-square-mile region that spans from southern New York to northern Mississippi, includes 420 counties and 8 independent cities in 13 states, and has a population of 25 million people (Figure 1). 1 Appalachia lags behind the rest of the nation in educational attainment, economic development, and health outcomes. 2,3 Appalachia's median household income is 83% of the U.S. average, and the region's poverty rate is 16%. 2,4 Certain Appalachian subregions experience greater disparities than others; for example, labor force participation, household income, and bachelor's degree attainment are lowest in Central Appalachia. 2,4-6 A number of studies have shown that the residents of the Appalachian Region experience significant health disparities compared to the rest of the nation. 3,7-10
Figure 1. Map of the Appalachian Region
Research conducted by Case and Deaton has demonstrated increasing morbidity and mortality from three main causes-alcohol, prescription drug and illegal drug overdose; suicide; and alcoholic liver disease/cirrhosis of the liver-which have been referred to collectively as "diseases of despair." 11 The rise in overdose mortality described by Case and Deaton was driven by the opioid crisis in the U.S. According to data from the Centers for Disease Control and Prevention (CDC), in 2017, the number of overdose deaths involving opioids was six times higher than in 1999. 12 Socioeconomic factors, including education, employment, and income, are potential factors influencing the growing opioid crisis in the U.S. This study investigated mortality due to diseases of despair within the Appalachian Region, with an additional focus on deaths attributable to opioid overdose.
METHODS
This study aimed to detect differences in the mortality rates from diseases of despair between the Appalachian Region and the non-Appalachian U.S. (the rest of the country, excluding Appalachia), in addition to differences by age groups and gender. Appalachian rates were further analyzed by subregion, county economic status, and levels of rurality. Appalachian subregions, as defined by the Appalachian Regional Commission (ARC), represent contiguous geographies of relatively homogeneous characteristics (topography, demographics, economics, and transportation) and include: Northern, North Central, Central, South Central, and Southern Appalachia. ARC also provides county-level economic classifications based on an index of three economic indicators (3-year unemployment rate, per capita market income, and poverty rate). Counties are designated based on the index as distressed, at-risk, transitional, competitive, or attainment. 13 Lastly, ARC designations for rurality were used for these analyses. These designations of large metro, small metro, nonmetro adjacent to large metros, nonmetro counties adjacent to small metros, and rural counties are based on a simplification of the USDA's Economic Research Service (ERS) 2013 Urban Influence Codes (UIC). 14 The majority of the findings presented are from 2015 mortality data from the National Center for Health Statistics (NCHS) National Vital Statistics System (NVSS) Multiple Cause of Death database, accessed through the CDC Wide-ranging Online Data for Epidemiologic Research (CDC WONDER). 15 Select findings include data through 2017. The Multiple Cause of Death database provides the underlying cause-of-death, as well as up to 20 additional multiple causes, as reported on an individual's death certificate by a physician, coroner, and/or medical examiner. 16 Deaths are coded to the International Classification of Diseases, Tenth Revision (ICD-10).
These analyses included the ICD-10 codes referenced by Case and Deaton, reflecting the underlying cause of death from each of the three diseases of despair: alcohol, prescription drug and illegal drug overdose (X40-X45, Y10-Y15, Y45-Y49); suicide (Y87.0, X60-X84); and alcoholic liver disease/cirrhosis of the liver (K70, K73-K74). 11 Multiple cause-of-death ICD-10 codes (T40.0, T40.1, T40.2, T40.3, T40.4, T40.6) that specify the type of drug causing the overdose were used to determine the percentage of alcohol, prescription drug and illegal drug overdose deaths attributed to opioids. 17 The findings present age-adjusted mortality rates for the population aged 15 to 64. Additional analyses provide a more granular focus on mortality rates by age group (10-year increments between ages 15 and 64). Statistical significance was assessed at the 0.05 level using two-sided significance tests (z-tests).
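To make the classification concrete, the sketch below groups death records into the three diseases-of-despair categories using the ICD-10 code lists above and flags opioid involvement from the multiple-cause T codes. It is an illustrative reading of the coding scheme, not the authors' analysis code, and the column names are hypothetical.

```python
# Illustrative sketch (not the authors' code): map underlying-cause ICD-10
# codes to the three diseases-of-despair categories and flag opioid-related
# overdoses from the multiple-cause T codes. Column names are hypothetical.
from typing import List, Optional

import pandas as pd

OVERDOSE = {f"X{i}" for i in range(40, 46)} | {f"Y{i}" for i in range(10, 16)} | {f"Y{i}" for i in range(45, 50)}
SUICIDE = {"Y87.0"} | {f"X{i}" for i in range(60, 85)}
LIVER = {"K70", "K73", "K74"}
OPIOID_T_CODES = {"T40.0", "T40.1", "T40.2", "T40.3", "T40.4", "T40.6"}

def despair_category(underlying: str) -> Optional[str]:
    root = underlying.split(".")[0]                  # e.g., "K70.3" -> "K70"
    if root in OVERDOSE:
        return "overdose"
    if underlying in SUICIDE or root in SUICIDE:
        return "suicide"
    if root in LIVER:
        return "alcoholic liver disease/cirrhosis"
    return None

def opioid_involved(multiple_causes: List[str]) -> bool:
    return any(code in OPIOID_T_CODES for code in multiple_causes)

deaths = pd.DataFrame({
    "underlying_cause": ["X42", "K70.3", "X70"],     # toy records
    "multiple_causes": [["T40.1", "T51.0"], [], []],
})
deaths["category"] = deaths["underlying_cause"].map(despair_category)
deaths["opioid"] = deaths["multiple_causes"].map(opioid_involved)
```

In practice, the same grouping would be applied to CDC WONDER extracts before age-adjusted rates are computed.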
RESULTS
Over the past 2 decades, the mortality rate due to diseases of despair has been increasing across the U.S. While the non-Appalachian U.S. has seen a rise in these deaths, the gap has widened between the Appalachian Region and the rest of the nation. As shown in Figure 2, the combined diseases of despair mortality rates in Appalachia and the non-Appalachian U.S. were nearly identical in 1999. By 2009, the mortality rate in the Appalachian Region was 24% higher than in the non-Appalachian U.S., and by 2017, this difference had grown to 45%. Between 2009 and 2017, the rate in Appalachia nearly tripled, showing the increasing burden associated with these diseases of despair. Among the three causes of death (overdose, suicide, and alcoholic liver disease/cirrhosis), the disparity between Appalachia and the non-Appalachian U.S. was greatest for overdose deaths. In 2017, the overdose mortality rate was 65% higher in Appalachia than in the non-Appalachian U.S. (48.3 deaths per 100,000 population in Appalachia compared to 29.2 deaths per 100,000 population in the rest of the U.S.). The suicide mortality rate was 30% higher in Appalachia, and the alcoholic liver disease/cirrhosis mortality rate was 10% higher. Overdose Deaths. Overdose deaths have become increasingly common over the past decade, largely due to the opioid crisis, which has greatly impacted Appalachia. In the Appalachian Region in 2015, there were 5594 overdose deaths among those aged 15 to 64 years, and of these, 69% were caused by opioids (opium, heroin, methadone, other opioids and synthetic narcotics). Overdose mortality was 34% higher in economically distressed counties than in economically nondistressed (attainment, competitive, transitional, and at-risk) counties. When comparing metro (large metro and small metro) and nonmetro (nonmetro, adjacent large metro; nonmetro, adjacent small metro; and rural) counties, the difference was less notable: the overdose mortality rate in metro counties was 6% higher than the rate in nonmetro counties.
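As a minimal illustration of how the disparities quoted above are expressed, the sketch below turns two rates into a percent disparity (e.g., 48.3 versus 29.2 deaths per 100,000 gives the 65% figure) and applies a simplified two-sided z-test. The death counts are placeholders, and the published comparisons of age-adjusted rates use age-specific variance weights rather than this crude approximation.

```python
# Minimal sketch, not the authors' exact method: percent disparity between two
# mortality rates and a simplified two-sided z-test with SE ~ rate / sqrt(deaths).
# Death counts below are placeholders; age-adjusted comparisons properly use
# age-specific variance weights.
import math

def percent_disparity(rate_a: float, rate_b: float) -> float:
    return 100.0 * (rate_a - rate_b) / rate_b

def z_test_rates(rate_a, deaths_a, rate_b, deaths_b):
    se = math.hypot(rate_a / math.sqrt(deaths_a), rate_b / math.sqrt(deaths_b))
    z = (rate_a - rate_b) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

print(round(percent_disparity(48.3, 29.2)))       # -> 65 (% higher in Appalachia, 2017)
print(z_test_rates(48.3, 5594, 29.2, 30000))      # illustrative death counts only
```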
Overdose mortality significantly impacts Appalachian adults between the ages of 25 and 54. Among males, the burden in Appalachia was the greatest among those aged 25 to 34 years (61.9 deaths per 100,000 population) and 35 to 44 years (61.0 deaths per 100,000 population). These age groups also experienced the highest mortality rates in the non-Appalachian U.S., though large disparities were observed when comparing Appalachia to the non-Appalachian U.S. Specifically, the overdose mortality rate was 78% higher among those ages 35 to 44 years and 72% higher among those aged 25 to 34 years in Appalachia than the non-Appalachian U.S. While the overall burden was lower among females than males, the disparity between Appalachia and the non-Appalachian U.S. was greater for females than for males. Among females, the burden in Appalachia was the greatest among those aged 45 to 54 years (35.8 deaths per 100,000 population) and 35 to 44 years (34.7 deaths per 100,000 population). The overdose mortality rate for Appalachian females ages 35 to 44 was more than double the rate for females in the non-Appalachian U.S., and among those aged 25 to 34 years, the Appalachian rate was 92% higher.
As shown in Table 2, states within Appalachia differ in terms of the burden of overdose mortality, and also in the percentage of overdose deaths that were opioid-related. West Virginia had both the highest overdose mortality rate (59.7 deaths per 100,000) and the largest percentage of deaths attributed to opioids, at 88%. In contrast, Appalachian Mississippi had an overdose mortality rate of 12.9 deaths per 100,000, of which only 34% were caused by opioids. The greatest burden from overdose is located within the states of Central and North Central Appalachia, specifically, West Virginia, Appalachian Kentucky, and Appalachian Ohio.
LIMITATIONS
There are several limitations to this study. First, while we highlight the significant increase in overdose deaths between 2015 and 2017, the analyses presented for subgroups, including age, gender, economic status, and rurality, are based on 2015 mortality data. Due to the rapidly evolving nature of the opioid crisis in the U.S., some of these differences have likely changed since 2015. Secondly, because we conducted analyses for the diseases of despair individually, suicides that are attributable to opioids are not included in the overdose category. Finally, the data on opioid-related overdoses are based on death certificate reporting, and there are known variations by state in terms of the quality of this data.
DISCUSSION
These findings document the scale and scope of the problem in Appalachia and highlight the need for additional research and discussion in terms of effective interventions, policies, and strategies to address these diseases of despair. Over the past two decades, mortality from overdose, suicide, and alcoholic liver diseases/cirrhosis has increased across the U.S., but the disparity between Appalachia and the non-Appalachian U.S. continues to grow. Within Appalachia, the burden is concentrated within the Central and North Central subregions, where the majority of economically distressed counties are located. Economic development strategies and interventions that address other underlying contributors to the diseases of despair, in addition to increased access to treatment services, prevention, and overdose medications, may be important considerations in addressing this problem. The rise in opioid overdoses over the past several years has contributed to some of the recent increases in diseases of despair. The states with the highest mortality from overdose in the Appalachian Region are the ones most significantly impacted by the opioid crisis, as shown by the large percentage of overdose deaths attributed to opioids. When comparing Appalachia to the non-Appalachian U.S., the most notable disparities in overdose deaths existed for the group aged 25-44 years. Young adults in Appalachia are considerably more likely to die from an overdose than similar-aged adults in the rest of the U.S., which has significant implications, particularly in terms of economic development, as individuals in their prime working years are most impacted.
The 2017 data presented shows the rapid increase in overdose deaths due to the more powerful synthetic opioids, such as fentanyl; however, these rates have likely continued to rise, and the burden within the Appalachian Region is likely greater than described in these findings. It is critical to continue to track trends in diseases of despair, and particularly opioid-related mortality, to assess the growing burden related to fentanyl and other synthetic opioids.
SUMMARY BOX
What is already known about this topic? Work conducted by Case and Deaton (2015) demonstrated disparities in mortality resulting from overdose, suicide, and alcoholic liver disease among Caucasian working age adults, collectively termed diseases or deaths of despair.
What is added by this report? The work described in this paper expands upon that of Case and Deaton by providing a focused exploration of diseases of despair within the Appalachian region. Findings confirm those of Case and Deaton and provide evidence of disparities focused most prominently within the Central and North Central Appalachian subregions.
What are the implications for public health practice, policy, and research? Beyond direct implications related to greater mortality burden within the population of working age adults in the Central and North Central Appalachian subregions, diseases of despair have cascading impacts on regional economic development, children's health and well-being, and social support systems, among others. Increasing access to treatment services, harm reduction programs and prevention initiatives will benefit individuals with substance use disorder and provide broader community benefit as rates of substance misuse and overdose decline. More broadly, understanding differences in mortality rates at the subregion and population group levels support resource allocation and response efforts.
|
2019-07-26T13:49:57.353Z
|
2019-07-06T00:00:00.000
|
{
"year": 2019,
"sha1": "037ee2ae86312708b95c20aac24825dfdc7097c2",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "71a5946d006828fd1f16038c6a2859c575b7d6db",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine",
"History"
]
}
|
247879161
|
pes2o/s2orc
|
v3-fos-license
|
“Earnings management and impression management: European evidence”
This study explores the relationship between Earnings Management and Impression Management in the context of some European listed companies. The analysis focuses on the readability of annual reports, measured by the file size. Earnings management is assessed using the modified Jones model. The sample consists of 2,953 listed companies from 17 industries across 24 European countries between 2012 and 2018, resulting in 13,020 firm-year observations. It has been found that a one standard deviation increase in financial report file size increases discretionary accruals by around 4%. These results are robust across different sample specifications in terms of firms' size, industry and country. The findings show that increased intensity in the use of discretionary accruals is obfuscated by the disclosure of less readable annual reports, implying that Earnings Management and Impression Management are used complementarily. The conclusions have impact both for investment management and for policy, preventing inefficient allocation of capital and providing additional information that improves regulation on financial reporting transparency.
INTRODUCTION
The need for information and transparent communication gives corporate media the status of potential vehicle of Impression Management (IM) that managers can use to manage the perceptions that the public builds about the company (Clatworthy & Jones, 2006). In fact, the literature has studied managers' communications from the perspective of IM as an attempt to obfuscate or reinforce information (Merkl-Davies & Brennan, 2007).
Empirical research on information obfuscation in financial reports has focused on the readability of the narratives disclosed by managers. Bloomfield (2008) suggests two alternative explanations for a positive relationship between the readability of annual reports and the level of reported earnings. The first is that the decreased readability of annual reports is an attempt by managers to obfuscate results, a practice included in the concept of IM (Merkl-Davies & Brennan, 2007). The second one is that bad news is just inherently more difficult to communicate and is contextualized as ontological theory. Ajina et al. (2016) and Lo et al. (2017) present evidence of management opportunism and they report a negative relationship between earnings management (EM) practice and narrative readability.
LITERATURE REVIEW
Every year (or more frequently), managers release financial reports presenting the economic and financial performance of the companies. The report consists of the Financial Statements and discretionary information that is intended to explain and provide additional information regarding the Financial Statements. The Financial Statements encompass quantitative information and are presented in accordance with mandatory guidelines and standards, but discretionary information may be presented in the form of narratives, photographs, and graphs and is susceptible to being used as a tool to obfuscate a company's economic reality (Courtis, 1995). However, both Financial Statements and discretionary information are subject to judgement by managers, which gives them a margin to manage information for their own benefit despite the various levels of regulation (Gonçalves, 2022; Godfrey et al., 2003; Healy & Wahlen, 1999). In fact, although annual reports are considered to be a means of conveying information that enhances the decision-making process of their users, a more skeptical perspective has emerged that considers them to be potential vehicles for the disclosure of biased information (Merkl-Davies & Brennan, 2007).
Research on discretionary information presents two schools of thought: The first is the incremental information school that fits into an informational perspective, i.e., it assumes that the disclosure of discretionary information aims to overcome the barrier of information asymmetries providing complementary and additional information and having as ultimate consequence the reduction of the cost of capital (Baginski et al., 2000). The second is the IM school that considers the disclosure of discretionary information to be a way of practicing opportunistic acts to satisfy the interests of the managers thus increasing information asymmetry between internal and external agents to the company (Aerts, 2005;Godfrey et al., 2003).
Research on EM also presents two perspectives similar to those of discretionary information. The first one is the information perspective equivalent to the incremental information approach whereby managers use accounting discretion to provide private and useful information that reveals their future expectations about the company (Holthausen & Leftwich, 1983). The second is an opportunistic perspective that assumes the use of accounting discretion as a means for managers to pursue their own interests (Healy, 1985).
Opportunistic EM is well-documented in the literature. EM occurs when managers use judgment in Financial Statements and in structuring operations to alter Financial Statements to "fool" some stakeholders about the economic performance of the company, influence the contractual results (Healy & Wahlen, 1999), or to obtain some private gain (Schipper, 1989).
Impression management
The term "Impression Management" has emerged in the psychology literature (Schlenker, 1980). Later, it was defined as the process through which an individual seeks to obtain control over the impression that others have about himself (Leary & Kowalski, 1990). In the context of accounting disclosure, IM is effective through the selection of the content and the form of the disclosed information to influence the interpretation of the results by the users of the information (Neu, 1991). The study of IM has been approached via four perspectives: psychological, economic, sociological, and critical. In the literature, the psychological (based on attribution theory) and the economic perspectives (explored in the context of the agency theory (Merkl-Davies & Brennan, 2007) predominate.
Under attribution theory, IM is considered to be an opportunistic practice resulting from a cognitive process in which an individual tries to collect credit for success and denies responsibility for failure (self-serving bias) (Knee & Zuckerman, 1996). In the context of financial reporting, attribution is approached from an egocentric perspective that has been consistently observed (Bettman & Weitz, 1983;Clapham & Schwenk, 1991;Salancik & Meindl, 1984;Wagner & Gooding, 1997). This means that managers tend to attribute responsibility for good results to themselves or to internal factors (e.g., strategy, management decisions, human resources, know-how, product/service quality) and responsibility for bad results to external factors (e.g., economic environment, inflation, political action, exchange rate fluctuation, natural disasters) (Aerts, 2001;Aerts & Cheng, 2011;Clatworthy & Jones, 2003). Attribution theory focuses on the analysis of the actions and events presented as justification for financial performance (Brennan & Merkl-Davies, 2013) assuming that managers adopt attribution behavior consciously, although research in this area is not conclusive (Clatworthy & Jones, 2006;Leary & Kowalski, 1990;Schlenker, 1980). Under agency theory, IM aims to intentionally bias information reporting (reporting bias) (Bowen et al., 2005) and may have several purposes, including maximization of the managers' remuneration package with special relevance in scenarios that include stock options (Rutherford, 2003;Courtis, 2004a). The agency cost associated with IM consists of the inefficient allocation of capital as observed in most situations that fall under this theory (Davidson et al., 2004;Jensen & Meckling, 1976;Merkl-Davies & Brennan, 2007).
From the IM perspective, analysis in the context of agency theory focuses on the obfuscation of results either by covering up the results that did not meet expectations or by emphasizing the results that did meet or exceeded expectations (Gioia et al., 2000).
Obfuscation hypothesis
Obfuscation is a form of writing or presenting information that masks the content of a message. Information can be obfuscated by deliberately disseminating an opaque message or concealing undesirable facts and events that seek to mitigate negative reactions (Courtis, 2004a).
Various techniques can be used to obfuscate information. Li (2008) reported that companies with lower earnings results tend to issue annual reports with longer and more complex narratives. Aerts and Zhang (2014) found a causal relationship between accruals earnings management and intensity of performance explanation. Hyland (1998) argued that the section of the annual reports that contain a Chief Executive Officer (CEO) message can be the subject of rhetorical discourse using specific linguistic terms that convey an idea of competence, reliability, authority, and honesty about the CEO. Clatworthy and Jones (2001) found that the introduction to the CEO's communication (which includes a reference to the year's results) tends to be easier to read than the rest of the communication (which presents passages about the problems facing the company). Bowen et al. (2005) published evidence for the intention to present good news before bad news. The connotation attributed to the narrative as offering additional information to assist in forecasting future cash flows has also been shown to be an element of obfuscation (Feldman et al., 2010; Schleicher & Walker, 2010). Other ways of obfuscation include managing the visual impression, e.g., by highlighting parts of the text (Brennan et al., 2009), through the choice of color in reports and releases (Courtis, 2004b) or even by using linguistic morphology techniques such as the use of repetition to reinforce certain contents (Davison, 2008). As far as the readability of financial documents is concerned, the obfuscation hypothesis suggests that when there is bad news to disclose, the preparers of financial information tend to reduce the clarity of reports making them less transparent (Rutherford, 2003). At the level of annual reports, Li (2008) found a positive and significant association between the persistence of results and the readability of narratives presenting statistical evidence that managers resort to a greater number of words and more complex words when they have less persistent results to disclose.
Since the literature that studies the association between EM and IM is recent and, therefore, still relatively unexplored, this study evaluates the association between EM practices, through discretionary accruals, and the readability of annual reports in a context less studied in the literature: European listed companies.
Thus, based on previous literature and on the Obfuscation Hypothesis, this study aims to analyze the complementary relationship between EM and IM and test if the readability of the annual report is associated with the level of discretionary accruals presented by a company.
Data and sample
Data were extracted from Bureau Van Dijk's Amadeus database. All listed companies in the European Union (EU28) were selected, excluding companies belonging to the financial and public administration sectors due to accounting and regulatory specificities.
Measuring the readability of annual reports
The Fog Index is a widely used indicator to quantify the readability of annual report narratives. However, it has been subject to several criticisms. The Fog Index is an indicator composed of a linear combination of average sentence length and proportion of complex words built to assess any type of prose. Loughran and McDonald (2014), among others, argue that the Fog Index is not appropriate for measuring the readability of financial documents. In fact, the identification of sentences is not very effective, given that financial documents present lists, epigraphs, peculiar narrative structures, abbreviations, and a set of other particularities that make it difficult to identify (by computer) the punctuation that identifies the beginning and the end of each sentence. Complex words are frequently used in accounting narratives, and the Fog Index considers complex words to be all English words composed of three or more syllables. Loughran and McDonald (2014) note that words such as company, corporation, operations, and management are common in financial reports and do not test the ability of the readership. Therefore, Loughran and McDonald (2014) suggest using the size of the electronic file as an alternative to the Fog Index to quantify the readability of financial documents.
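For reference, the sketch below implements the standard Gunning Fog formula being criticized here (0.4 times the sum of average sentence length and the percentage of words with three or more syllables); the syllable counter is a rough heuristic rather than the exact procedure used in the readability literature.

```python
# Illustrative sketch of the Gunning Fog Index criticized above (standard
# formula: 0.4 * [words/sentence + 100 * complex_words/words]); the syllable
# counter is a crude heuristic, not the exact procedure of the cited studies.
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, ignoring a trailing silent 'e'."""
    word = word.lower()
    if word.endswith("e"):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def fog_index(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    avg_sentence_len = len(words) / max(1, len(sentences))
    pct_complex = 100.0 * len(complex_words) / max(1, len(words))
    return 0.4 * (avg_sentence_len + pct_complex)

print(round(fog_index("The corporation recognised impairment losses on intangible assets."), 1))
```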
Dale and Chall's (1948) definition of readability includes all the elements in a printed document that affect its understanding. This definition is considered by Tekfi (1987) as the classic definition, as well as by DuBay (2007) as the most comprehensive. This definition allows the use of electronic file size as a metric of financial report readability to be extended to annual reports, since it encompasses elements such as charts and images.
Discretionary information is voluntary and will be disclosed under two scenarios. The first is if it is demanded a priori by investors, a scenario in which companies will be incentivized to disclose the same amount of information. It is expected that annual reports will not have significantly different electronic file sizes. The second is because managers intend to hide or obscure any reality, a scenario in which significant differences in electronic file sizes will be expected because the content and form of annual reports will have to be selected with a different purpose than serving investors with the information that they want.
Thus, the additional content voluntarily disclosed in annual reports will also have a role to play in obfuscating bad news as argued by Loughran and McDonald (2014). This helps determine the readability of annual reports.
This study focuses on the readability of annual reports considering not only the accounting narratives but all disclosed elements as potential obfuscation factors. The amount of information disclosed is analyzed following the line of Guay et al. (2016), who suggest that the costs associated with processing long and complex documents are assumed to be high, i.e., they might be more difficult to read and understand. Thus, following Loughran and McDonald (2014) and Guay et al. (2016), this study uses electronic file size as a measure of annual report readability.
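A minimal sketch of this readability proxy is given below: the natural logarithm of the annual report's electronic file size in kilobytes, where a higher value is read as lower readability. The file paths are hypothetical placeholders.

```python
# Minimal sketch of the readability proxy: ln of electronic file size in KB.
# Paths are hypothetical; larger lnFileSize is interpreted as lower readability.
import math
import os

def ln_file_size_kb(path: str) -> float:
    return math.log(os.path.getsize(path) / 1024.0)

reports = ["reports/firm_001_2018.pdf", "reports/firm_002_2018.pdf"]  # placeholders
readability = {p: ln_file_size_kb(p) for p in reports if os.path.exists(p)}
```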
Measuring earnings management
To capture the practice of EM, the model of Jones (1991) modified by Dechow et al. (1995) and by Kothari et al. (2005) is used as follows:

$$\frac{TAcc_{i,t}}{TA_{i,t-1}} = \beta_0 \frac{1}{TA_{i,t-1}} + \beta_1 \frac{\Delta REV_{i,t} - \Delta AR_{i,t}}{TA_{i,t-1}} + \beta_2 \frac{PPE_{i,t}}{TA_{i,t-1}} + \beta_3\, ROA_{i,t} + \varepsilon_{i,t} \quad (1)$$

where TAcc_{i,t} is total accruals of firm i in year t; ΔREV_{i,t} is change in sales of firm i from year t-1 to year t; ΔAR_{i,t} is change in accounts receivable of firm i from year t-1 to year t; PPE_{i,t} is property, plant and equipment of firm i in year t; ROA_{i,t} is return on assets of firm i in year t as the ratio of net income to assets; and TA_{i,t-1} is total assets of firm i in year t-1. All variables are divided by total assets at the beginning of the year to reduce the presence of heteroscedasticity in the residuals. These metrics are estimated for each year-industry (Gonçalves et al., 2021).
Total accruals are computed using the balance sheet approach as follows:

$$TAcc_{i,t} = \Delta CA_{i,t} - \Delta CL_{i,t} - \Delta Cash_{i,t} + \Delta Debtst_{i,t} - Dep_{i,t} \quad (2)$$

where ΔCA_{i,t} is change in current assets of company i from year t-1 to year t; ΔCL_{i,t} is change in current liabilities of company i from year t-1 to year t; ΔCash_{i,t} is change in cash and cash equivalents of firm i from year t-1 to year t; ΔDebtst_{i,t} is change in short-term debt of firm i from year t-1 to year t; and Dep_{i,t} is depreciation and amortization of firm i in year t.
The direction of EM (upward or downward) is given by the value of the errors (ε i,t ) from equation (1), and the intensity of EM is revealed by the absolute value of these errors (|ε i,t |).
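A minimal sketch of this estimation is shown below: total accruals are built from the balance-sheet items in equation (2), scaled by lagged assets, and the modified Jones regression in equation (1) is run by year-industry, with the residuals kept as discretionary accruals (DACC) and their absolute values as the intensity measure (ABS_DACC). The dataframe and column names are hypothetical, not the authors' code.

```python
# Minimal sketch (hypothetical column names, not the authors' code): estimate
# the modified Jones model by year-industry OLS and keep the residuals as
# discretionary accruals. Regressors follow equations (1)-(2): total accruals
# scaled by lagged assets regressed on 1/TA, (dREV - dAR)/TA, PPE/TA and ROA.
import pandas as pd
import statsmodels.api as sm

def discretionary_accruals(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Balance-sheet total accruals (equation (2)), scaled by lagged total assets
    df["tacc"] = (df["d_ca"] - df["d_cl"] - df["d_cash"] + df["d_debtst"] - df["dep"]) / df["ta_lag"]
    df["inv_ta"] = 1.0 / df["ta_lag"]
    df["rev_ar"] = (df["d_rev"] - df["d_ar"]) / df["ta_lag"]
    df["ppe_s"] = df["ppe"] / df["ta_lag"]

    out = []
    for (year, industry), grp in df.groupby(["year", "industry"]):
        if len(grp) < 10:          # skip thin year-industry cells
            continue
        X = sm.add_constant(grp[["inv_ta", "rev_ar", "ppe_s", "roa"]])
        resid = sm.OLS(grp["tacc"], X).fit().resid
        out.append(grp.assign(DACC=resid, ABS_DACC=resid.abs()))
    return pd.concat(out)
```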
Empirical model
To study the association between IM and EM, the following model was developed:

$$lnFileSize_{i,t} = \beta_0 + \beta_1\, EM_{i,t} + \sum_{k} \gamma_k\, Control_{k,i,t} + \varepsilon_{i,t} \quad (3)$$

where lnFileSize_{i,t} represents the natural logarithm of the size of the electronic file of the annual report corresponding to each firm-year observation in kilobytes (KB), and Control_{k,i,t} denotes the control variables (LnSize, MTB, FirmAge, SpecItems, EarnVol, RetVol and lnNitems) defined at the end of the paper. Whenever a company has submitted more than one annual report per reporting period, the electronic file size for that reporting period was assumed to be the value corresponding to the largest amongst the annual reports submitted during that same period. This choice does not ignore any element that has been disclosed and is consistent with Dale and Chall's (1948) theorization that all the elements included in the annual report affect its understanding. A higher value of lnFileSize implies lower readability.
The independent variable of interest EM_{i,t} represents EM by discretionary accruals and takes the designation ABS_DACC_{i,t} when the focus of the analysis is on the intensity of EM, and the designation DACC_{i,t} when the focus is on the direction of EM (upward or downward). The average of discretionary accruals is positive, suggesting that, on average, the companies in the sample manage earnings upwards. The average company in the sample has a market value of equity of 145,509.987 thousand euros (e^{11.888}), a market-to-book ratio of 1.6377, and an age of approximately 35 years. Extraordinary events occurred in 28.49% of the observations of the sample. The average electronic file sizes by country and by industry are presented in Figure 1 and Figure 2, respectively. Countries from Central Eastern Europe and Southern Europe are predominant among the countries with the largest average electronic files. The countries of Northern Europe and Western Europe are the ones with the lowest average electronic files. The exceptions are Luxembourg and Latvia, which are among the countries with the lowest representation in the sample along with Greece and Slovenia. These exceptions are possibly due to factors inherent to the countries themselves.
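The sketch below illustrates one way a regression like equation (3) could be run, with ABS_DACC as the variable of interest, the controls defined at the end of the paper, and year, country and industry fixed effects; it uses a synthetic placeholder panel and is not the authors' exact specification.

```python
# Minimal sketch (synthetic placeholder data, not the authors' specification):
# regress annual-report readability (lnFileSize) on earnings-management
# intensity (ABS_DACC) plus the appendix controls and fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
panel = pd.DataFrame({
    "lnFileSize": rng.normal(7.5, 0.8, n),
    "ABS_DACC": rng.exponential(0.05, n),
    "LnSize": rng.normal(12, 2, n),
    "MTB": rng.lognormal(0.3, 0.5, n),
    "FirmAge": rng.integers(1, 80, n),
    "SpecItems": rng.integers(0, 2, n),
    "EarnVol": rng.exponential(0.03, n),
    "RetVol": rng.exponential(0.08, n),
    "lnNitems": rng.normal(4, 0.3, n),
    "year": rng.integers(2012, 2019, n),
    "country": rng.choice(["DE", "FR", "PT"], n),
    "industry": rng.choice(["C", "J", "M"], n),
})

formula = ("lnFileSize ~ ABS_DACC + LnSize + MTB + FirmAge + SpecItems + EarnVol"
           " + RetVol + lnNitems + C(year) + C(country) + C(industry)")
fit = smf.ols(formula, data=panel).fit(cov_type="HC1")
print(fit.params["ABS_DACC"], fit.pvalues["ABS_DACC"])
```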
Descriptive statistics
In terms of industries, the categories D. Electricity, gas, steam, and air conditioning supply and F. Construction have the largest average electronic file sizes, while categories P. Education and A. have the smallest. Correlation results show a negative and significant correlation between the IM measure and the absolute value of discretionary accruals, as well as a positive and significant correlation with discretionary accruals. The highest correlation coefficient is 0.5104 between lnNitems and SpecItems, suggesting that there are no multicollinearity issues, which is confirmed by the Variance Inflation Factors (VIF) below 10 for all variables (not tabulated). Table 2, Panel A, presents the regression results with the EM variable specified as ABS_DACC (Column (1)) or DACC (Column (2)), in order to study the association between IM and both the intensity and the direction of EM.
Regression results
Results show a positive and statistically significant coefficient (p-value < 0.05) of the ABS_DACC variable, indicating that lower levels of EM are associated with greater readability of annual reports, supporting the study hypothesis.
In terms of EM direction (upward and downward), the results do not provide any evidence of an association between lnFileSize and DACC. Indeed, the coefficient, although negative, does not reveal statistical significance. To extend the analysis and to circumvent the suspicion of a non-linear relationship, two additional models were estimated: the association between upward and downward EM and the readability of annual reports separately. Table 2, Panel B, presents the results for the sample of companies with DACC > 0 in column (3), and for the sample of companies with DACC < 0 in column (4).
The coefficient on DACC is positive in both regressions although not statistically significant. Since the non-linearity of the relationship between the variables may be at the origin of this result, a test was carried out for the equality of means of the size of the electronic file of annual reports between the two subsamples. The result of the test (not tabulated) shows a significant difference (p-value < 0.01) between the averages of the two groups, suggesting that upwardly oriented companies present a higher average and disclose less readable annual reports than downwardly oriented companies.
Robustness analysis
To test the robustness of the main results, several analyses were performed: alternative sample composition; the influence of company size; and the influence of reporting an operating profit or loss.
Indeed, more than half of the sample is composed of firms from only three countries (United Kingdom, France and Germany) concentrated in three industries (M. Professional, scientific and technical activities; C. Manufacturing; and J. Information and communication). Table 3, Panels A and B, present the results obtained without firms from these countries (columns (1) and (2)) and these industries (columns (3) and (4)).
Results for both EM intensity and EM direction are similar to those obtained in the main analysis. The exclusion of the three most representative countries and the three most represented industries does not alter the statistical significance of the complementary relationship between EM and IM in terms of intensity, suggesting that higher levels of earnings management are associated with less readable annual reports.
Prior results suggest an important role of a company's size in explaining the readability of annual reports. Since the sample comprises companies with significantly different sizes, the sample was split into two subsamples: small and medium entities (SMEs) and large entities (LEs). Companies with total assets below and above 43,000,000 euros (European Commission Recommendation, 2003) are considered SMEs and LEs, respectively. Table 3, Panel C, reports the results for SMEs in columns (5) and (6) and for LEs in columns (7) and (8).
Positive coefficients of ABS_DACC suggest a decrease in the readability of annual reports as the intensity in the use of discretionary accruals increases. However, only the SMEs group has a significant coefficient. The absence of statistical significance in the LEs sample may be due to the greater scrutiny to which this group of firms is subjected compared to SMEs. Again, the coefficients of DACC are positive but without statistical significance.
Finally, to analyze the effect of operating performance on the association between EM and IM, a dummy variable, Loss, was included in the model. Loss takes the value 1 if a firm reported operating loss and 0 otherwise. Table 3, Panel D, presents the results in columns (9) and (10).
There is evidence that companies disclose less readable annual reports when they report operating losses rather than operating profits. The results of the main analysis remain unchanged with the ABS_DACC showing a positive and statistically significant coefficient and DACC having a positive but not significant coefficient.
DISCUSSION
This study documents a positive association between EM intensity and IM practices in the context of annual reports. The results suggest that managers seek to obfuscate the intensity with which they manage earnings by disclosing more complex, and therefore less readable, annual reports, reinforcing the conclusions of Ajina et al. (2016), Li (2008), and Lo et al. (2017).
Thus, there is evidence of a complementary relationship between the practice of EM through accruals and IM through managing the readability of annual reports, suggesting that firms present annual reports with more content as an attempt to obfuscate discretionary accounting choices. This evidence is consistent with the results from Aerts and Cheng (2011) and Godfrey et al. (2003). In terms of narrative readability, Ajina et al. (2016) and Lo et al. (2017) also found that companies that practice EM tend to make their annual report less readable.
No evidence was found in terms of the association between the direction of EM and the practice of IM, but further analysis suggests that companies that practice income-increasing EM have on average higher file size of annual re-ports than companies that practice income-decreasing EM.
Finally, the robustness of the results was confirmed by using a different sample composition, without the influence of the three countries and three industries more representative and by analyzing the role of firm size and financial performance on the relationship between EM and IM.
Additional results suggest that, although large firms tend to present more complex annual reports, it is in the context of small and medium enterprises that the practice of obfuscating EM is more significant. There is also evidence that companies that report operating losses are more likely to disclose more complex annual reports than those that report an operating profit, consistent with prior research (Li, 2008; Lo et al., 2017).
This study contributes to the literature in several ways. First, it extends a rare stream of research on the association between EM practices and annual report readability by providing evidence of the complementarity of EM and IM under managerial discretion. Second, it provides a better understanding of this relationship by analyzing a broad sample of European companies. Third, it uses an alternative and novel measure of readability (the size of the electronic file) that mitigates the criticism associated with the measures used in previous literature. Fourth, it provides evidence that the association is stronger in the context of small and medium-sized firms, revealing the scrutiny effect to which large companies are subjected.
The results have economic and practical implications. Understanding the relationship between EM and IM is relevant to avoid inefficient allocation of capital, which can harm investment profitability and therefore negatively affect value creation. It is also relevant to regulators who, by understanding the strategies for managing information and communication, obtain guidelines for establishing a standardization that is more effective in eliminating information asymmetries.
CONCLUSION
This study analyzes the association between EM and IM practices in the context of annual reports. EM is measured using discretionary accruals using the modified Jones model. The measure of IM is the size of firms' annual reports. The sample consists of 2,953 listed firms in 24 European countries, with data between 2012 and 2018, corresponding to 13,020 firm-year observations.
A positive and significant association is found between EM (discretionary accruals) and IM (report file size). The results are robust across different robustness tests. The same positive and significant association is obtained after controlling for the most representative countries or industries and controlling for year, country and sector fixed effects.
The results support that the increased intensity in the use of discretionary accruals leads managers to obfuscate these accounting choices with the disclosure of less readable annual reports, suggesting a complementary relationship between EM and IM. The direction of EM (upward or downward), in contrast, shows no statistically significant association with annual report readability.
LnSize: the natural logarithm of the market value of equity. Larger firms are expected to have more complex operations and higher political costs, leading managers to disclose more information and, consequently, annual reports with a larger electronic file size.
MTB: the market value of equity plus the book value of liabilities, divided by total assets. Controls for the impact of the firm's growth opportunities, assuming that firms with more growth opportunities will disclose annual reports with more information in order to bridge the uncertainty associated with their business models.
FirmAge: the difference between the year of observation and the year of incorporation of the firm. Controls for the effect of firm seniority on the readability of the annual report. On one hand, companies with greater seniority may present greater diversity or investment in their activities, leading to the disclosure of less readable annual reports. On the other hand, if information users are familiar with and have more accurate information about the business model of older firms, then one would expect these firms to release more readable annual reports.
SpecItems: a dummy variable that takes the value 1 if the company reported Extraordinary and other P/L items and 0 otherwise. Controls for the effect of the occurrence of extraordinary events that lack explanation in the annual report. It is expected that, when they occur, they will contribute to an increase in the size of the electronic file.
EarnVol: the standard deviation of operating income over the last 3 reporting years, divided by assets. Controls for the effect of the volatility of the business and operations, which may make reporting more complex and extensive, because a decrease in the predictability of results is associated with increased volatility and users demand additional explanations in order to reduce uncertainty.
RetVol: the standard deviation of monthly stock returns over the last 12 months.
lnNitems: the natural logarithm of the number of items disclosed according to the Global Standard Format, as available in Bureau Van Dijk's Amadeus database. Controls for the complexity of the firm; companies that disclose more items in the Financial Statements should present more complex and extensive annual reports.
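As an illustration only, the following sketch builds these control variables from a firm-year dataframe; all input column names (firm_id, mve, book_liab, assets, inc_year, extraordinary_pl, op_income, monthly return columns, n_items) are hypothetical.

```python
# Illustrative construction of the control variables defined above from a
# firm-year DataFrame; all input column names are hypothetical placeholders.
import numpy as np
import pandas as pd

def build_controls(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["LnSize"] = np.log(df["mve"])                              # ln market value of equity
    df["MTB"] = (df["mve"] + df["book_liab"]) / df["assets"]
    df["FirmAge"] = df["year"] - df["inc_year"]
    df["SpecItems"] = (df["extraordinary_pl"] != 0).astype(int)
    # 3-year rolling std of operating income per firm, scaled by assets
    df["EarnVol"] = (
        df.sort_values("year")
          .groupby("firm_id")["op_income"]
          .transform(lambda s: s.rolling(3).std())
        / df["assets"]
    )
    ret_cols = [f"ret_m{i:02d}" for i in range(1, 13)]            # 12 monthly returns
    df["RetVol"] = df[ret_cols].std(axis=1)
    df["lnNitems"] = np.log(df["n_items"])
    return df
```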
|
2022-04-03T15:21:19.645Z
|
2022-04-01T00:00:00.000
|
{
"year": 2022,
"sha1": "690a3c7c8d4b183a418740d04216c87e723a26b4",
"oa_license": "CCBY",
"oa_url": "https://www.businessperspectives.org/images/pdf/applications/publishing/templates/article/assets/16323/PPM_2022_01_Goncalves.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9d4333f84e882b2a8af3c956774473912e60517c",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": []
}
|
247279328
|
pes2o/s2orc
|
v3-fos-license
|
Sound Reflections in Indian Stepwells: Modelling Acoustically Retroreflective Architecture
: Retroreflection is rarely used as a surface treatment in architectural acoustics but is found incidentally with building surfaces that have many simultaneously visible concave right-angle trihedral corners. Such surfaces concentrate reflected sound onto the sound source, mostly at high frequencies. This study investigated the potential for some Indian stepwells (stepped ponds, known as a kund or baori/baoli in Hindi) to provide exceptionally acoustically retroreflective semi-enclosed environments because of the unusually large number of corners formed by the steps. Two cases—Panna Meena ka Kund and Lahan Vav—were investigated using finite-difference time-domain (FDTD) acoustic simulation. The results are consistent with retroreflection, showing reflected energy concentrating on the source position mostly in the high-frequency bands (4 kHz and 2 kHz octave bands). However, the larger stepped pond has substantially less retroreflection, even though it has many more corners, because of the greater diffraction loss over the longer distances. Retroreflection is still evident (but reduced) with non-right-angle trihedral corners (80°–100°). The overall results are sufficiently strong to indicate that acoustic retroreflection should be audible to an attuned visitor in benign environmental conditions, at least at moderately sized stepped ponds that are in good geometric condition.
Introduction
In architecture, there are some notable acoustic phenomena that can provide fascinating auditory experiences for a listening visitor. Such phenomena include long echoes, extreme reverberation (e.g., in large, hard-surfaced rooms and cisterns), whispering walls and domes (e.g., Gol Gumbaz's dome, Bijapur, India) [1]. Steps can be a source of acoustic intrigue, as exemplified by the chirp-like diffraction effects at the Mayan Chichén Itzá pyramid [1][2][3], or the efficient sound transmission over arrays of stepped seats in the Epidaurus theatre [4,5]. These and other acoustic phenomena can take the listener's spatio-temporal experience beyond the limits of vision, opening up another mode of experiencing space and time as an expression of the architectural form. As well as a source of intrigue for tourists, such experiences provide concrete exemplars of sound propagation phenomena which may have broader applications in architectural acoustics design.
The present paper investigated another step-related acoustic phenomenon in distinctive and sometimes monumental and ancient architecture: acoustic retroreflection in some Indian stepwells. This phenomenon arises in cases where there are many simultaneously visible concave trihedral right-angle corners and has been previously studied in building façades [6][7][8]. It is characterised by a multitude of reflections returning to the source.
Introduction to Acoustic Retroreflection in Architecture
A retroreflector returns the incident sound to the direction from which it came. This is distinct from specular reflections, which only reflect to the incident direction at normal incidence, and it is different to scattered reflections, which reflect to all directions (including the incident direction). Physically, acoustic retroreflectors can be formed by right-angle concave corners (both dihedral and trihedral). In optics, the trihedral retroreflector is often referred to as a 'cube corner reflector', 'corner-cube reflector' or just a 'corner cube'. Arrays of corner cubes are used for some optical treatments (e.g., photo-electric sensor reflectors, bicycle reflectors and lunar ranging reflectors [10]), so that the reflected light around the source is intensified as each trihedron in the array reflects back [11,12]-this provides a model for analogous acoustic treatment. An important consideration in audible-range acoustics is that the wavelengths involved can be large, and thus the trihedron size needs to be correspondingly large to perform as a retroreflector. As a rule of thumb, an indicative frequency (f_R) above which a reflector at normal incidence has little or no diffraction loss can be calculated from the speed of sound (c), the distance from the collocated source-receiver to the reflector (r) and the reflector width (d) as per Equation (1) [13,14]. For a square reflector at normal incidence, the denominator is its surface area, S. Below f_R/2, the diffraction loss slope is 6 dB/octave, and Rindel [14] suggests f_R/2 as a practical frequency above which diffraction loss at normal incidence is of minor importance.
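To give a feel for this rule of thumb, the sketch below evaluates an indicative frequency assuming a Rindel-style form f_R = c·r/(2S), consistent with the description above of a square reflector whose surface area sits in the denominator; the exact constant of Equation (1) and the step width and distances used here are assumptions, not measurements of the sites.

```python
# Illustrative check of the rule of thumb described above, assuming the form
# f_R = c * r / (2 * S) for a square reflector of width d (S = d**2) at normal
# incidence with a collocated source-receiver at range r. The step width and
# distances below are hypothetical, not measured at the stepwells.

def f_r(c: float, r: float, d: float) -> float:
    """Indicative frequency (Hz) above which diffraction loss is small."""
    return c * r / (2.0 * d * d)

c = 343.0            # speed of sound in m/s (approx., 20 degrees C)
step_width = 0.3     # hypothetical tread/riser dimension in metres
for distance in (5.0, 10.0, 20.0):
    fr = f_r(c, distance, step_width)
    print(f"r = {distance:4.0f} m: f_R ~ {fr/1000:.1f} kHz, f_R/2 ~ {fr/2000:.1f} kHz")
```

Under these assumptions, larger source-reflector distances push f_R upward, which is consistent with the reduced retroreflection reported for the larger stepped pond and with the concentration of the effect in the high-frequency octave bands.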
Arguably, the simple rectangular room-one of the most-studied forms in theoretical room acoustics-is retroreflective, because every reflection (planar, dihedral and trihedral) is returned to the source location. However, it is only trivially so, since every reflection fills the entire room volume, meaning the source location is not a focus point from retroreflection. For a room to be non-trivially retroreflective, the retroreflectors must be smaller than the room's basic surfaces (but sufficiently large for a useful f R ), so that the reflected wavefronts are returned to the source without filling the rest of the room. With arrays comprising many retroreflectors, each returning sound to the source, there is an energy focus at any sound source, with comparatively little acoustic energy reflected elsewhere in the room. This phenomenon has been demonstrated physically and computationally using a small specially designed retroreflective room [9]. The audible result is that high-frequency phonemes from one's own voice are particularly strong, a phenomenon which is envisaged for practical application in room acoustics to influence relaxed voice projection [15]. Considering the scarcity of room acoustics research literature on retroreflection, the question arises as to whether larger-scale pre-existing cases of acoustically retroreflective rooms exist, and how the phenomenon of retroreflection is manifest within them.
While pre-existing cases of acoustic retroreflection in architecture have been identified in building façades [6][7][8], finding pre-existing extensive cases in room acoustics is more challenging. Indoors, candidates for incidental retroreflection could include rooms with simple coffered ceilings (where surfaces are planar and perpendicular) [16]. Retroreflection has been used intentionally in some auditoria to support performers on stage [17,18]. However, such retroreflective treatment does not dominate the room's acoustic design-which is mostly motivated to provide high-quality sound to the audience.
Steps can provide dihedral and trihedral concave corners, and thus an environment with many steps could be acoustically retroreflective. Indian stepped ponds are some of the most extensive cases of step-based architecture. Furthermore, unlike stepped pyramids, the overall form of a stepped pond is concave-the stepped pond is a type of room. They provide vantage points from which hundreds of trihedral corners are simultaneously visible. In this way, stepped ponds provide potentially rich cases for retroreflective room acoustics, notwithstanding issues such as diffraction loss, atmospheric dissipation, variable wind and geometric deviations that could reduce retroreflectivity. Most importantly, Equation (1) raises the question of whether the steps are just too small for significant acoustic retroreflection over typical distances from vantage points within the frequency range important for humans.
Introduction to Indian Stepped Ponds
The term stepwell is used for a wide range of architectural forms that are built into the ground with steps leading to water. In northern and western parts of India, where the structures studied in this paper are located, they can be referred to using several terms, which may depend on their history, purpose, colloquial usage, use in the literature, etc. Some common terms include baori/baoli (बावड़ी/बावली in Hindi), vav (વાવ in Gujarati), vapi (वाप़ी in Sanskrit and Hindi), barav (बारव in Marathi) and kund (क ुं ड in Hindi). It must be noted, however, that these terms are oftentimes interchangeable. Still, a baori/vav/vapi generally refers to an underground building, which, in some instances, is extensive and elaborate, with one or more staircases leading to water [19]. A kund, which is the type of stepwell that this paper is concerned with, is typically a 'stepped pond' or 'stepped tank', often with many staircases leading to a pool of water, sometimes resembling an inverted pyramid [20]. Kunds have been built for religious reasons (Yagna kund), for medicinal bathing (Brahma kund) and for general bathing (Snan kund), and some may be used for drinking water [20][21][22][23][24]. Some are associated with temples, some are in public places and some are in private residences. Many larger stepped ponds have a rectangular plan with arrays of intersecting steps on three sides, a more vertical structure on the fourth side and a pool at the base. In more elaborate cases, this fourth side may have arched loggias and rooms, and a platform from which the architectural spectacle of the three other sides can be fully viewed. The water level can vary greatly with the season, and thus part of the ingenuity of the design is that the steps lead to the water regardless of its level.
More than 30 stepped ponds that approximately or exactly follow the above form are documented by the main sources referred to in this paper [19][20][21][22][23][24][25], mostly in Rajasthan and Gujarat. Many more are documented in the crowd-sourced Atlas of Stepwells [26]. Their scale varies from intimate to monumental-with Chand Baori (Abhaneri, Rajasthan) being one of India's largest and best known, dating from around 800 CE. Chand Baori has many hundreds of steps which could contribute to acoustic retroreflection over its three intricately stepped faces, but smaller stepped ponds also have good prospects for acoustic retroreflection-and may be better for retroreflection because of their smaller size. Table 1 provides some examples of stepped ponds for which basic architectural data are documented (the data in the table were extracted from the cited architectural drawings). Table 1 indicates that there is typically about one step per square metre of top area (ranging from 0.87 at Jaipura Kund to 1.42 at Idar Stepped Pond). This ratio is mostly governed by the horizontal area of individual steps and platforms and the area of the pool.
More than 30 stepped ponds that approximately or exactly follow the above form are documented by the main sources referred to in this paper [19][20][21][22][23][24][25], mostly in Rajasthan and Gujarat. Many more are documented in the crowd-sourced Atlas of Stepwells [26]. Their scale varies from intimate to monumental-with Chand Baori (Abhaneri, Rajasthan) being one of India's largest and best known, dating from around 800 CE. Chand Baori has many hundreds of steps which could contribute to acoustic retroreflection over its three intricately stepped faces, but smaller stepped ponds also have good prospects for acoustic retroreflection-and may be better for retroreflection because of their smaller size. Table 1 provides some examples of stepped ponds for which basic architectural data are documented (the data in the table were extracted from the cited architectural drawings). Table 1 indicates that there is typically about one step per square metre of top area (ranging from 0.87 at Jaipura Kund to 1.42 at Idar Stepped Pond). This ratio is mostly governed by the horizontal area of individual steps and platforms and the area of the pool in Sanskrit and Hindi), barav ( Acoustics 2022, 4 FOR PEER REVIEW 3 While pre-existing cases of acoustic retroreflection in architecture have been identified in building façades [6][7][8], finding pre-existing extensive cases in room acoustics is more challenging. Indoors, candidates for incidental retroreflection could include rooms with simple coffered ceilings (where surfaces are planar and perpendicular) [16]. Retroreflection has been used intentionally in some auditoria to support performers on stage [17,18]. However, such retroreflective treatment does not dominate the room's acoustic design-which is mostly motivated to provide high-quality sound to the audience.
Steps can provide dihedral and trihedral concave corners, and thus an environment with many steps could be acoustically retroreflective. Indian stepped ponds are some of the most extensive cases of step-based architecture. Furthermore, unlike stepped pyramids, the overall form of a stepped pond is concave-the stepped pond is a type of room. They provide vantage points from which hundreds of trihedral corners are simultaneously visible. In this way, stepped ponds provide potentially rich cases for retroreflective room acoustics, notwithstanding issues such as diffraction loss, atmospheric dissipation, variable wind and geometric deviations that could reduce retroreflectivity. Most importantly, Equation (1) raises the question of whether the steps are just too small for significant acoustic retroreflection over typical distances from vantage points within the frequency range important for humans.
Introduction to Indian Stepped Ponds
The term stepwell is used for a wide range of architectural forms that are built into the ground with steps leading to water. In northern and western parts of India, where the structures studied in this paper are located, they can be referred to using several terms, which may depend on their history, purpose, colloquial usage, use in the literature, etc. Some common terms include baori/baoli (बावड़ी/बावली in Hindi), vav (વાવ in Gujarati), vapi (वाप़ी in Sanskrit and Hindi), barav (बारव in Marathi) and kund (क ुं ड in Hindi). It must be noted, however, that these terms are oftentimes interchangeable. Still, a baori/vav/vapi generally refers to an underground building, which, in some instances, is extensive and elaborate, with one or more staircases leading to water [19]. A kund, which is the type of stepwell that this paper is concerned with, is typically a 'stepped pond' or 'stepped tank', often with many staircases leading to a pool of water, sometimes resembling an inverted pyramid [20]. Kunds have been built for religious reasons (Yagna kund), for medicinal bathing (Brahma kund) and for general bathing (Snan kund), and some may be used for drinking water [20][21][22][23][24]. Some are associated with temples, some are in public places and some are in private residences. Many larger stepped ponds have a rectangular plan with arrays of intersecting steps on three sides, a more vertical structure on the fourth side and a pool at the base. In more elaborate cases, this fourth side may have arched loggias and rooms, and a platform from which the architectural spectacle of the three other sides can be fully viewed. The water level can vary greatly with the season, and thus part of the ingenuity of the design is that the steps lead to the water regardless of its level.
More than 30 stepped ponds that approximately or exactly follow the above form are documented by the main sources referred to in this paper [19][20][21][22][23][24][25], mostly in Rajasthan and Gujarat. Many more are documented in the crowd-sourced Atlas of Stepwells [26]. Their scale varies from intimate to monumental-with Chand Baori (Abhaneri, Rajasthan) being one of India's largest and best known, dating from around 800 CE. Chand Baori has many hundreds of steps which could contribute to acoustic retroreflection over its three intricately stepped faces, but smaller stepped ponds also have good prospects for acoustic retroreflection-and may be better for retroreflection because of their smaller size. Table 1 provides some examples of stepped ponds for which basic architectural data are documented (the data in the table were extracted from the cited architectural drawings). Table 1 indicates that there is typically about one step per square metre of top area (ranging from 0.87 at Jaipura Kund to 1.42 at Idar Stepped Pond). This ratio is mostly governed by the horizontal area of individual steps and platforms and the area of the pool in Marathi) and kund ( Acoustics 2022, 4 FOR PEER REVIEW 3 While pre-existing cases of acoustic retroreflection in architecture have been identified in building façades [6][7][8], finding pre-existing extensive cases in room acoustics is more challenging. Indoors, candidates for incidental retroreflection could include rooms with simple coffered ceilings (where surfaces are planar and perpendicular) [16]. Retroreflection has been used intentionally in some auditoria to support performers on stage [17,18]. However, such retroreflective treatment does not dominate the room's acoustic design-which is mostly motivated to provide high-quality sound to the audience.
Steps can provide dihedral and trihedral concave corners, and thus an environment with many steps could be acoustically retroreflective. Indian stepped ponds are some of the most extensive cases of step-based architecture. Furthermore, unlike stepped pyramids, the overall form of a stepped pond is concave-the stepped pond is a type of room. They provide vantage points from which hundreds of trihedral corners are simultaneously visible. In this way, stepped ponds provide potentially rich cases for retroreflective room acoustics, notwithstanding issues such as diffraction loss, atmospheric dissipation, variable wind and geometric deviations that could reduce retroreflectivity. Most importantly, Equation (1) raises the question of whether the steps are just too small for significant acoustic retroreflection over typical distances from vantage points within the frequency range important for humans.
Introduction to Indian Stepped Ponds
The term stepwell is used for a wide range of architectural forms that are built into the ground with steps leading to water. In northern and western parts of India, where the structures studied in this paper are located, they can be referred to using several terms, which may depend on their history, purpose, colloquial usage, use in the literature, etc. Some common terms include baori/baoli (बावड़ी/बावली in Hindi), vav (વાવ in Gujarati), vapi (वाप़ी in Sanskrit and Hindi), barav (बारव in Marathi) and kund (क ुं ड in Hindi). It must be noted, however, that these terms are oftentimes interchangeable. Still, a baori/vav/vapi generally refers to an underground building, which, in some instances, is extensive and elaborate, with one or more staircases leading to water [19]. A kund, which is the type of stepwell that this paper is concerned with, is typically a 'stepped pond' or 'stepped tank', often with many staircases leading to a pool of water, sometimes resembling an inverted pyramid [20]. Kunds have been built for religious reasons (Yagna kund), for medicinal bathing (Brahma kund) and for general bathing (Snan kund), and some may be used for drinking water [20][21][22][23][24]. Some are associated with temples, some are in public places and some are in private residences. Many larger stepped ponds have a rectangular plan with arrays of intersecting steps on three sides, a more vertical structure on the fourth side and a pool at the base. In more elaborate cases, this fourth side may have arched loggias and rooms, and a platform from which the architectural spectacle of the three other sides can be fully viewed. The water level can vary greatly with the season, and thus part of the ingenuity of the design is that the steps lead to the water regardless of its level.
More than 30 stepped ponds that approximately or exactly follow the above form are documented by the main sources referred to in this paper [19][20][21][22][23][24][25], mostly in Rajasthan and Gujarat. Many more are documented in the crowd-sourced Atlas of Stepwells [26]. Their scale varies from intimate to monumental-with Chand Baori (Abhaneri, Rajasthan) being one of India's largest and best known, dating from around 800 CE. Chand Baori has many hundreds of steps which could contribute to acoustic retroreflection over its three intricately stepped faces, but smaller stepped ponds also have good prospects for acoustic retroreflection-and may be better for retroreflection because of their smaller size. Table 1 provides some examples of stepped ponds for which basic architectural data are documented (the data in the table were extracted from the cited architectural drawings). Table 1 indicates that there is typically about one step per square metre of top area (ranging from 0.87 at Jaipura Kund to 1.42 at Idar Stepped Pond). This ratio is mostly governed by the horizontal area of individual steps and platforms and the area of the pool in Hindi). It must be noted, however, that these terms are oftentimes interchangeable. Still, a baori/vav/vapi generally refers to an underground building, which, in some instances, is extensive and elaborate, with one or more staircases leading to water [19]. A kund, which is the type of stepwell that this paper is concerned with, is typically a 'stepped pond' or 'stepped tank', often with many staircases leading to a pool of water, sometimes resembling an inverted pyramid [20]. Kunds have been built for religious reasons (Yagna kund), for medicinal bathing (Brahma kund) and for general bathing (Snan kund), and some may be used for drinking water [20][21][22][23][24]. Some are associated with temples, some are in public places and some are in private residences. Many larger stepped ponds have a rectangular plan with arrays of intersecting steps on three sides, a more vertical structure on the fourth side and a pool at the base. In more elaborate cases, this fourth side may have arched loggias and rooms, and a platform from which the architectural spectacle of the three other sides can be fully viewed. The water level can vary greatly with the season, and thus part of the ingenuity of the design is that the steps lead to the water regardless of its level.
More than 30 stepped ponds that approximately or exactly follow the above form are documented by the main sources referred to in this paper [19][20][21][22][23][24][25], mostly in Rajasthan and Gujarat. Many more are documented in the crowd-sourced Atlas of Stepwells [26]. Their scale varies from intimate to monumental-with Chand Baori (Abhaneri, Rajasthan) being one of India's largest and best known, dating from around 800 CE. Chand Baori has many hundreds of steps which could contribute to acoustic retroreflection over its three intricately stepped faces, but smaller stepped ponds also have good prospects for acoustic retroreflection-and may be better for retroreflection because of their smaller size. Table 1 provides some examples of stepped ponds for which basic architectural data are documented (the data in the table were extracted from the cited architectural drawings).

Table 1. Ten examples of stepped ponds that follow the form described, with architectural drawing sources cited. Some have alternative names, and in such cases, the name in the table is taken from a recent source, such as [23]. The top plan area is of the basic well opening, excluding surrounds. The number of steps only counts those in the main surfaces, quantified as the number of discrete horizontal walking surfaces.
[Table 1 body not recoverable from the extraction; its columns included Name (Location), top plan area and number of steps.]

Table 1 indicates that there is typically about one step per square metre of top area (ranging from 0.87 at Jaipura Kund to 1.42 at Idar Stepped Pond). This ratio is mostly governed by the horizontal area of individual steps and platforms and the area of the pool (which is the minimum for the Table 1 cases).

This paper investigated Panna Meena ka Kund (Amer, Rajasthan), which is a well-known stepped pond, dating from the 16th century. Not as deep as Chand Baori, it nevertheless is a large and deep stepped pond with hundreds of crisscrossing steps that lead to the pool-including some on the fourth side (Figure 1a). Mostly, the walls are steps, with only one platform near the base of the stepped pond (submerged in the Figure 1 photograph). It has 1.31 steps per square metre of top area. The vertical surfaces are rendered with a form of polished plaster [22]. The stepwell is in good condition and potentially provides an excellent case for acoustic retroreflection.
As an instance of a smaller stepped pond, this paper also focused on Lahan Vav (Basantgarh, Rajasthan, Figure 1b)-which is not well known. Dating from 976-999 CE, this stepped pond has bare stone block surfaces, some of which are damaged or displaced. It was chosen because of its smaller size and the availability of architectural drawings, and in its original state, it would be expected to have similar acoustic characteristics to other similarly sized stepped ponds, some of which are in better condition. It has a step-to-area ratio of 1.27.
Modelling and Simulation
The main predictive method used in this study was finite-difference time-domain (FDTD) simulation. This approach, which is widely used in architectural acoustics research, inherently accounts for the diffraction and higher-order propagation effects from architectural forms. Its biggest limitation is the size of the calculation, which requires large computational resources when large volumes and/or high frequencies are simulated.
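For readers unfamiliar with the method, the sketch below is a minimal two-dimensional acoustic FDTD update on a staggered grid, not the solver used in this study; the grid spacing, time step, source pulse and normalised density are illustrative assumptions, and the study's own simulations were three-dimensional with perfectly matched layer boundaries.

```python
import numpy as np

# Minimal 2-D acoustic FDTD sketch (pressure p; staggered velocities u, v).
# Density is normalised to 1, and the edges are rigid (acoustically hard),
# like the stepwell surfaces; the actual study used 3-D models with a
# 10-voxel perfectly matched layer for the open sky.
c = 343.0                          # speed of sound (m/s)
dx = 0.02                          # grid spacing (m)
dt = 0.9 * dx / (c * np.sqrt(2))   # 90% of the 2-D Courant stability limit
nx, ny, nt = 300, 300, 1500

p = np.zeros((nx, ny))
u = np.zeros((nx + 1, ny))         # x-velocity between pressure nodes
v = np.zeros((nx, ny + 1))         # y-velocity between pressure nodes
src = (nx // 2, ny // 2)           # collocated source/receiver
ir = np.zeros(nt)                  # response recorded at the source

for n in range(nt):
    # velocity update from the pressure gradient; edge values stay zero,
    # which enforces the rigid-boundary condition
    u[1:-1, :] -= dt / dx * (p[1:, :] - p[:-1, :])
    v[:, 1:-1] -= dt / dx * (p[:, 1:] - p[:, :-1])
    # pressure update from the velocity divergence
    p -= c**2 * dt / dx * (u[1:, :] - u[:-1, :] + v[:, 1:] - v[:, :-1])
    # soft Gaussian-pulse excitation at the source node
    p[src] += np.exp(-(((n * dt) - 0.002) / 0.0005) ** 2)
    ir[n] = p[src]
# Subtracting an identically driven anechoic (free-field) run, as done in
# the study, isolates the reflected sound from the direct sound.
```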
Computer models of the exemplar stepwells were created based on published architectural drawings and photographs [22,25]. Due to limitations in computer resources, simulations of Panna Meena ka Kund were conducted using a partial model of the stepwell (Figure 2a). Simulations of Lahan Vav were conducted using the entire stepwell (Figure 2b). Simulations were also conducted with an anechoic environment the same size as each model for each source position, allowing the direct sound wave to be removed by subtraction. Values for reflected sound energy levels are expressed relative to the free field energy level 1 m from the source.
Omnidirectional point source positions were chosen for a commanding view of the stepwells, at a height of 1.5 m above the tower platforms (Figure 2). The open parts of the models were surrounded with a 10-voxel perfectly matched layer, as described by Chern [28], providing anechoic boundaries. The simulated stepwell surfaces were hard, with no sound absorption introduced.
Prediction of Retroreflected Energy from Trihedral Corners
A simple method to predict the retroreflected sound energy level returned to the source was presented by Cabrera et al. [8]. This method treats each visible concave trihedral corner as an equivalent mirror facing the source and uses first-order image-source calculation accounting for diffraction loss in the frequency domain. The complex transfer function for each reflector depends on geometric dispersion and delay (distance travelled is twice the distance, r, from the source to the reflector; k is the wave number), and the diffraction coefficient, K. The diffraction loss depends on the size and distance of the reflector (as indicated indirectly by Equation (1)) and can be calculated using the Kirchhoff-Fresnel approximation (used here), or alternatively Rindel's more efficient approximation [14]. The total retroreflected energy level, L_retro (re 1 m), is obtained by summing the transfer functions of all n reflectors (Equation (2)). Octave band transfer functions can then be derived from the constituent spectrum components.
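The display for Equation (2) did not survive extraction. A plausible reconstruction from the dependencies stated above (spherical spreading over the round-trip distance 2r_i, the corresponding phase delay and the diffraction coefficient K_i) is

$$
L_{\mathrm{retro\,(re\,1\,m)}} = 10\log_{10}\left|\sum_{i=1}^{n}\frac{K_i}{2r_i}\,e^{-\mathrm{j}\,2kr_i}\right|^{2},
$$

which should be read as a reconstruction consistent with the text, rather than the authors' exact notation.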
This is a simplification of the theory in [8], neglecting the absorption coefficient of the surface and atmospheric loss, to match the FDTD simulation.
For a given source position, the first stage is to identify the visible concave trihedral corners-since occluded corners will not act as retroreflectors. For a given corner with its particular geometry, the size of the equivalent mirror is hard to precisely determine. The simplified approach taken in this study is to start with the boresight shadow of a square trihedron of the same edge length. For diffraction coefficient calculation, an equivalent area square reflector is used, multiplied by the cosine of the incidence angle. For the stepwells, there are many small trihedral corners of individual steps, as well as a smaller number of large corners. For the modelling in this paper, a conservative approach was taken, i.e., the equivalent square trihedron is the largest square trihedron that fits the corner, meaning that the edge length is the minimum edge of the three surface edges. This means that for steps, the step height is taken as the trihedron edge length (step heights of l = 0.254 m for Panna Meena ka Kund and 0.27 m for Lahan Vav). For the large corners, again, the minimum edge length is taken, which is generally the width of the walking surface (l = 0.917 m for Panna Meena ka Kund and 0.546 m for Lahan Vav). The boresight shadow area then is S = √3 × l², which can then be adjusted by the cosine of the incidence angle. It should be borne in mind that trihedral reflections are considerably more complicated than this simplification [29,30], but the point of this model is to provide a simple estimate without the large computational demands of wave-based simulation or higher-order image-source modelling. The model provides a rough theory incorporating diffraction loss that can help interpret simulation results.
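A minimal sketch of this equivalent-reflector bookkeeping is given below, assuming the reconstructed form of Equation (2); the diffraction coefficient is deliberately left as a caller-supplied function (e.g., a Kirchhoff-Fresnel routine, not detailed here), and the example values at the end are hypothetical.

```python
import numpy as np

C = 343.0  # speed of sound (m/s)

def effective_area(edge_len, theta):
    """Boresight shadow of a square trihedron, S = sqrt(3)*l**2, projected
    by the cosine of the incidence angle theta (radians)."""
    return np.sqrt(3.0) * edge_len**2 * np.cos(theta)

def retro_level(freq, reflectors, diffraction_coeff):
    """First-order retroreflected energy level re free field at 1 m.
    `reflectors`: iterable of (r, edge_len, theta) for visible corners.
    `diffraction_coeff`: K(freq, area, r) in [0, 1], supplied by the
    caller (e.g., a Kirchhoff-Fresnel approximation)."""
    k = 2.0 * np.pi * freq / C                        # wave number
    H = 0.0 + 0.0j
    for r, l, theta in reflectors:
        K = diffraction_coeff(freq, effective_area(l, theta), r)
        H += K / (2.0 * r) * np.exp(-1j * 2.0 * k * r)  # round trip = 2r
    return 10.0 * np.log10(np.abs(H) ** 2)

# Hypothetical example: two step corners (l = 0.254 m, the Panna Meena ka
# Kund step height) with a constant K = 0.1 as a stand-in diffraction loss.
print(retro_level(4000.0, [(12.0, 0.254, 0.3), (15.0, 0.254, 0.5)],
                  lambda f, S, r: 0.1))
```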
This modelling is expected to underestimate the reflected energy for several reasons. It only considers the retroreflected energy from individual reflectors, with no other reflected energy included (e.g., scattered reflections, specular reflections). It does not include dihedral corner reflections. Furthermore, it only includes first-order reflections and has no diffuse reverberation contribution. The effective size of the reflectors remains constant with the frequency, even though small geometric deviations become insignificant at long wavelengths, potentially increasing the reflectors' effective size. The potential for larger faces to contribute to trihedral retroreflection at oblique angles of incidence [31] is neglected.
Despite these limitations, the modelling allows some analysis of the contribution of various retroreflectors to the sound returned to the source. It is expected to be most useful at high frequencies, where retroreflection is expected to dominate the sound returned to the source.

FDTD Simulation Results

Figure 3 shows the spatial distribution of reflected energy for the eight source positions at Panna Meena ka Kund from the FDTD simulation. Retroreflection is clearly evident in the high-frequency octave bands from the concentration of reflected energy on the respective source position. Retroreflection is clear in the upper three bands evaluated but also more weakly suggested in the 500 Hz and 250 Hz bands. The top row of Figure 3 shows the spatial distribution of reflected energy for a flat-surfaced simplification of the Panna Meena ka Kund partial model, which has energy distributed much more evenly across the receiver plane, and no concentration at the source position. Figure 4 shows the spatial distribution of reflected energy for the six source positions at Lahan Vav. An important difference between this and the Panna Meena ka Kund visualisation (Figure 3) is that the ground reflection from the tower platform is included-which is clearly evident in many of the subplots. The top row of Figure 4 shows the spatial distribution of reflected energy for a flat-surfaced simplification of Lahan Vav, which helps to disambiguate the ground-reflected energy (seen in all subplots) from the retroreflected energy (seen in the non-flat simulation subplots, especially in the higher-frequency bands). Retroreflection is evident in the 1-4 kHz octave bands, most strongly in the 4 kHz band.
Reflected energy levels at the source-receiver positions are shown in Figure 5. Values are greater in the higher-frequency bands for both stepwell models, especially the 2 kHz and 4 kHz bands. Values in these high-frequency bands are about 11 dB greater at Lahan Vav than at Panna Meena ka Kund. At both stepwells, there is a tendency for greater reflected energy levels at lateral positions (higher numbered positions). At both stepwells, this may be influenced by the increased proximity to the side face steps. Another feature of Figure 5 is that it shows the reflected energy levels at Lahan Vav with and without the ground reflection (which was absent at Panna Meena ka Kund). The contribution of the ground reflection is small in the high-frequency bands, for which retroreflection occurs, and larger at lower frequencies. A 3 dB difference would indicate that the ground reflection had equal energy to the subsequent reflections, and some of the positions have differences greater than or equal to 3 dB in the 125 Hz and 250 Hz bands. The analytically calculated reflection energy level from an extensive ground plane for a source-receiver height of 1.5 m is 20 log(1/3) = −9.5 dB. This does not take the edges of the plane into account, which would be relevant for the simulated stepwell vantage points. Nevertheless, many of the Lahan Vav reflected energy levels (including the ground reflection) are similar to this value in the 125 Hz and 250 Hz bands. Values are considerably greater in the upper octave bands. The reflected energy levels in every octave band at Panna Meena ka Kund are all lower than the analytically calculated ground reflection.
Equivalent Reflector Model Prediction
The results from Equation (2), evaluated as described in Section 2.2 (at a 10 Hz resolution, combined into octave bands), are shown in Figure 6. For Panna Meena ka Kund, the predicted reflected energy levels in the 4 kHz band are greater than the simulated results (Figure 5), which is not surprising considering that a partial model was used for the simulation, while a full model was used for evaluating Equation (2). At Position 1, the reflected energy levels increase due to in-phase summation from symmetry. Reflected energy levels in the low-frequency bands are much lower than the simulation results because the calculation only models first-order retroreflection from the trihedra and neglects all other reflections and reverberation. Bands tend to be spaced at about 6 dB intervals, indicating that diffraction loss is important, and that band frequencies are mostly below f_R/2 of most reflectors. The calculated values for Lahan Vav are 3-6 dB less than the simulation results in the 4 kHz band but diverge more at lower frequencies. Again, the excessive loss at lower frequencies is expected because only first-order retroreflection is included in the calculation. The calculation does not include the ground reflection because it would obscure the much lower values in the lower-frequency bands. The band results are separated by about 6 dB, reflecting the fact that most bands are below f_R/2. This is illustrated directly in Figure 7.

The 4 kHz band was chosen for further analysis because it is the band with the greatest retroreflection (both modelled and simulated). The energy contribution of individual reflectors is shown in Figure 8 as a function of distance for the two extreme positions in each stepwell, for the 4 kHz octave band only. This highlights the importance of proximity to retroreflectors. The combined effects of diffraction loss and geometric dispersion should yield an individual reflector spatial decay rate of −12 dB per distance doubling for f < f_R/2, which is seen for the small reflectors. However, more distant reflectors of a given size can be more numerous, and thus their combined spatial decay rate should be less severe. When f > f_R/2, the spatial decay rate is mainly affected by geometric dispersion (−6 dB per distance doubling)-hence the shallower slopes for the large reflectors. While there is a much smaller number of big reflectors than small reflectors, the big reflectors can make an out-sized contribution, as seen at Panna Meena ka Kund (evident in the cumulative distribution curves). On the other hand, at Lahan Vav, the small reflectors make a greater contribution than the big ones.
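The two decay regimes can be tabulated directly (relative levels only; constants are omitted since only the slopes matter here):

```python
import numpy as np

# Per-reflector spatial decay in the two regimes described above: below
# f_R/2, diffraction loss adds -6 dB/doubling on top of the -6 dB/doubling
# geometric dispersion (total -12 dB/doubling of distance).
for r in (5.0, 10.0, 20.0, 40.0):
    below = -40.0 * np.log10(r)   # f < f_R/2: dispersion + diffraction loss
    above = -20.0 * np.log10(r)   # f > f_R/2: dispersion only
    print(f"r = {r:5.1f} m: {below:7.1f} dB (small), {above:6.1f} dB (large)")
```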
Temporal Characteristics of Impulse Responses at Lahan Vav
Being based on a partial model, the Panna Meena ka Kund simulation lacks many reflections and reverberant decay that would be seen in the full stepwell, and thus this section just focuses on Lahan Vav. Previous studies of retroreflective façades showed a prominent cluster of reflections due to retroreflection in measured and simulated impulse responses for collocated source-receivers. This is also seen at Lahan Vav (Figure 9). With the source at Position 1, the retroreflection cluster starts at 40 ms and is increasingly prominent with the frequency. With the source at Position 6, the retroreflection cluster starts at 22 ms. Based on geometry, the first-order reflections should be finished by 90 ms, and thus subsequent energy is from higher-order reflections (or reverberation). At Position 6, a persistent 19 ms flutter echo (unrelated to retroreflection) is evident, most obviously in the 1 kHz octave band. For comparison, the figure also shows impulse responses for receiver positions distant from the source-these distant positions lack the retroreflection cluster.
Sensitivity to Geometric Error
The models used for the main simulations were created from perfectly flat planes with exact right-angle corners. Real cases are not that simple, and hence the question arises as to whether retroreflection focusing remains when deviations from the ideal geometry are introduced. The trihedral step corners of the Panna Meena ka Kund and Lahan Vav models were manipulated to angles other than 90°. Simulations were run for the source at Position 1. The results (Figure 10) show the largest effect of angle deviation is at high frequencies, for which deviations from 90° reduce the reflected energy level at the source. In the 4 kHz octave band, the reduction at Panna Meena ka Kund is 2.7 dB for a ±10° deviation; at Lahan Vav, it is 1.6 dB. Reflected energy level reductions from angle deviations are not evident in the bands 1 kHz and below. Even with a ±10° deviation, the values of the reflected energy returned to the source are still much greater than for a flat-surfaced model (shown in the top rows of Figures 3 and 4)-which for 4 kHz is −33.3 dB at Panna Meena ka Kund and −13.2 dB at Lahan Vav.
Effect of Temperature Gradient
It is frequently observed that the temperature at the base of a stepped pond may be less than the general temperature on a hot day. Reports of a 5 °C reduction are common, even if anecdotal. The authors are not aware of detailed data on this, but similar phenomena have been studied in basins and sinkholes, in which cool air pools at the base [32]. This introduces the possibility of acoustic refraction, as the speed of sound reduces deeper into the well. This should not necessarily degrade retroreflection, because reciprocity still applies (the path from the source to the reflector is the same as the path from the reflector to the source). Conceivably, this may increase the well's acoustic effect by reducing the amount of sound lost to the sky.
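For orientation, the linear approximation c(T) ≈ 331.3 + 0.606T m/s indicates the scale of the effect; the surface temperature, depth and gradient below are assumed values, not measurements:

```python
# Sound speed profile for a 0.5 °C/m temperature gradient (cooler air
# pooling at the base of the well); all input values are illustrative.
T_top = 35.0   # assumed surface temperature (°C)
grad = 0.5     # assumed gradient (°C per metre of depth)
for z in (0.0, 5.0, 10.0):
    T = T_top - grad * z
    c = 331.3 + 0.606 * T
    print(f"depth {z:4.1f} m: T = {T:4.1f} °C, c = {c:6.2f} m/s")
```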
A simulation was conducted using the Panna Meena ka Kund model, with a temperature gradient of 0.5 °C/m for a source at Position 1. The resulting sound returned to the source is essentially the same as the simulation without a temperature gradient. Deviations of the reflected energy level across the receiver plane were <0.1 dB in all octave bands.
The Retroreflective Potential of Stepped Ponds
Stepwells are designed for water access, not for acoustics; they are certainly not designed for optimum acoustic retroreflection. Yet, this study shows that stepped ponds can be good candidate cases of incidentally acoustically retroreflective rooms-at least when the water level is low. In this respect, stepped ponds are very distinctive acoustic environments, considering that predominantly retroreflective rooms are rarely seen in architecture. Steps are designed for people to ascend and descend, but fortuitously, the steps used in these and similar stepped ponds are larger than modern standard steps. Nevertheless, from an acoustics perspective, the steps are undersized for retroreflection, especially in larger stepped ponds such as Panna Meena ka Kund, which, at any vantage point, will have large distances to numerous step trihedral corners. This is indicated by the values of f_R, which generally predict large diffraction loss over the important frequency range for humans. Conceivably, a stepped pond smaller than Lahan Vav (with shorter distances) should perform even better as a retroreflective environment for autophonic sound such as speech and clapping. As a guide, Figure 12 shows the calculated values of f_R/2 for steps of various sizes and distances from the source: for example, a somewhat impractical stepped pond with 0.5 m steps and most distances within 10 m would have diffraction loss mostly limited to frequencies below 4 kHz. Given the above, it is remarkable that retroreflection is strongly evident in the spatial distribution of reflected energy in both stepwell models studied in the 4 kHz and 2 kHz bands and is even seen to a smaller extent at relatively low frequencies (e.g., 500 Hz). Based on Figure 12, the steps at Panna Meena ka Kund are too small for the distances involved. However, considering that the FDTD simulation of Panna Meena ka Kund only included two big trihedra, the simulation results show that the steps nonetheless function as an ensemble of weak retroreflectors.
In Appendix A, simplified models of ten stepped ponds are evaluated (based on the features of those listed in Table 1), suggesting that a smaller top area may be a good simple indicator of a stepped pond's retroreflective potential. The size disadvantage of a large stepped pond may be overcome at a source position lower into the well, such as at a lower platform or loggia. At lower positions, the number of visible concave trihedra reduces, but this should be more than compensated for by the reduced distance to the remaining reflectors. This concept is developed in Appendix B, which shows an increase in retroreflected energy at lower source positions in models of large stepped ponds.
Considerations with Real Stepwells
Real stepwells are more complicated than the models used in this paper. Geometric deviations from the ideal modelled surfaces are inevitable and, in some cases, are likely too great for retroreflection. This might be an issue affecting the real Lahan Vav (chosen because of the availability of architectural drawings), which now has somewhat rough and irregular stones. Hence, the modelling and simulation of this case are a representation of the stepped pond's original acoustics, but not necessarily of its present-day acoustics. The simulation study of sensitivity to geometric error indicates that perfect trihedra are not required for a stepped pond to be significantly retroreflective-with an angular error of 10° causing a focus loss less than 3 dB in the 4 kHz band, and less than 2 dB in the 2 kHz band (Panna Meena ka Kund), and smaller losses for the Lahan Vav simulations.
The simulations and modelling used in this study assume no sound absorption by the surfaces-all sound absorption is by the open space (which is anechoic). The smooth plaster and stone surfaces used in some stepped ponds would absorb little sound and thus would not be expected to be substantially different to the modelling. However, some stepped ponds have surfaces that are expected to absorb and scatter more (e.g., exposed rubble masonry).
Acoustic conditions at real stepwells differ from the models in other ways too. Wind and temperature may affect sound propagation. Based on the simulated case, a simple temperature gradient does not damage the retroreflective quality of a stepped pond. However, turbulent windy conditions would be expected to variably refract, and hence scatter, high frequencies, severely degrading retroreflection. Background noise affects the audibility of a room's impulse response, including retroreflection. It is obvious that quiet and still conditions would be needed to experience retroreflection on site.
The findings in this study apply to stepped ponds that have about one step per square metre of top area. Generally, this requires that the pool area not be very extensive. Cases with a larger pool (e.g., Roda [22] p. 33) would be expected to have less retroreflection evident.
Audibility of Retroreflection
The most prominent features observed in this paper are in the 4 kHz octave band. While retroreflection focusing may be theoretically stronger at higher frequencies (e.g., 8-16 kHz bands), higher frequencies are of less interest because atmospheric propagation losses become significant, they are more sensitive to surface roughness and they are towards the upper limit of human sound production and hearing. Even the 4 kHz band is an unusually high frequency range to concentrate on. For people with noise-induced hearing loss, acuity in the 4 kHz band is typically reduced [33], which would limit their potential experience of retroreflective focusing.
The audibility of acoustically retroreflective architecture depends on the acoustic excitation, the retroreflected energy quantity and distributions over time and frequency, the background noise conditions and the listener's interest in and attentiveness to sound. Experience with other retroreflective environments is varied. In some, handclaps are effective, even though a typical clap has an approximately pink power spectrum [34,35] (whereas a white or high-frequency-dominated spectrum would be preferable). Clapping is most likely to be effective when there is an audibly significant time delay for the retroreflection cluster arrival. Another autophonic excitation method that can make retroreflection obvious is the use of high-frequency phonemes or tongue clicks. The directionality of such excitation may help in suppressing the ground reflection and other unwanted reflections from surfaces near the person. When retroreflection is strong, it may be audible directly from one's own speech, with the reflections giving an unusual 'crisp' quality to the sound [9]. Considering that the partial model of Panna Meena ka Kund yielded quite low reflected energy levels, the noticeability of retroreflection from the vantage points at that site needs further investigation. A positive indicator is that the first-order image-source calculation from equivalent reflectors for Panna Meena ka Kund yielded higher reflection levels because of the full stepped pond being included, with the high frequency values at Position 1 quite close to those for Lahan Vav's Position 1. Furthermore, the relatively long time delays from the large distances at Panna Meena ka Kund favour audibility. The retroreflection at a stepped pond such as Lahan Vav, but with surfaces in good geometric condition, is expected to be audible considering that the reflected energy levels and distances are similar to those at the previously studied Ainsworth Building, where retroreflection is directly audible [8]. The presence of room reverberation is a difference between the stepped ponds and the previously studied retroreflective building façades.
Future Study
On-site acoustic measurements for this project have been delayed due to COVID-19 disruption. Future study incorporating in situ measurements (similar to [8]) will be conducted when possible.
Conclusions
From an acoustics perspective, the broad research problem addressed by this paper was to find and characterise pre-existing instances of predominantly retroreflective rooms-and it has shown that some Indian stepwells are good candidates for this. While many types of buildings have concave trihedral corners, the stepped pond type of stepwell sometimes takes this to an extreme. As demonstrated in this paper, these sometimes ancient buildings potentially provide intense instances of acoustically retroreflective room acoustics, albeit with an open-sky ceiling in the exemplar cases. With the ravages of time, some sites will have lost their original acoustic characteristics, but wave-based computational acoustics allows their acoustic restoration for analysis. Other sites appear to be sufficiently well preserved to retain their retroreflective characteristics, which should be observable in good environmental conditions to an attuned visitor.
The main findings are as follows:
• Acoustic retroreflection in stepped ponds can be substantial in the high-frequency range, resulting in reflected sound focusing onto the source position-which is seen as a dense cluster of high-frequency reflections in the early part of the impulse response;
• Both small trihedral corners (from steps) and large trihedral corners (from wall intersections at each level) contribute to retroreflection, with the balance of them depending on the scale of the stepped pond;
• Retroreflection is not reduced greatly with angular distortion of trihedra of up to 10°;
• Smaller stepped ponds tend to be more retroreflective, because the effect of shorter distances is stronger than the effect of the smaller number of reflectors; however, lower positions in a large stepped pond see increased retroreflection.
Overall, the acoustic result of the stepped pond form is distinctively bright and strong reflections to the source position, while positions away from the source receive much less sound from early reflections, especially at high frequencies.

Acknowledgments: Photographs and permission to use them were provided by the American Institute of Indian Studies.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A.1 Simplified Model of Stepped Pond Retroreflection
A simplified generic model of stepped ponds was constructed to examine the relationship between acoustic retroreflection and architectural parameters. The model assumes that a stepped pond takes the form of an inverted frustum, with one of its faces steeper or vertical. The other three faces are the stepped surfaces. The ten stepped ponds listed in Table 1 were represented using this simplified modelling approach. Some advantages of the approach are as follows: (i) stepped ponds with incomplete architectural data could be quickly modelled, considering that only plans are available for most cases; (ii) the modelling focuses on the general form without quirks of particular cases; and (iii) the reflector size can be held constant, allowing the effects of larger-scale architectural features to be seen more clearly.
While the exact locations of the steps might be hard to determine, they are numerous and fairly uniformly distributed over the surfaces, and thus, here, the steps are statistically modelled by evenly distributing small retroreflectors across the three 'stepped' frustum surfaces, with alternating boresight directions. The number of distributed retroreflectors equals the total number of steps (also derived from the plan, as per Table 1). Then, the visibility of the retroreflectors with respect to a particular source position is determined, and the distance and the incidence angle of each visible retroreflector are calculated. Using these data, the retroreflected energy contributed by each small retroreflector is calculated as per Section 2.2. Similarly, large retroreflectors are also included in the modelling, determined as four per stepwell level, at the four intersections of the stepped surfaces. Their visibility, distance and incidence angle are determined using the same method as the small retroreflectors to calculate their retroreflected energy.
The concept is illustrated in Figure A1. In the modelling reported here, the source-receiver position is placed at the centre of the front edge of the platform, 1.5 m above the ground, as with Position 1 in the Panna Meena ka Kund case (shown in Figure 2a). A trihedron edge length of 0.25 m was used for all small retroreflectors, and 0.7 m was used for big ones. The key parameters used in the estimations, along with the estimated retroreflected energy, are provided in Table A1. Because the model is a general approximation rather than an exact replica of a given stepwell, reflections are summed energetically rather than using complex values. The energetic average of the 1-4 kHz octave bands is used as an indicator of the strength of the retroreflected sound returned to the source. On average, the 1-4 kHz value is 3.3 dB less than the 4 kHz octave band value in these cases (ranging from −2.9 dB for Bala Kund to −3.6 dB for Hadi Rani).

Table A1. Parameters used for simplified retroreflected energy level estimates based on the ten example stepped ponds from Table 1. Symbols: r is the distance of visible retroreflectors, θ is the incidence angle from the trihedron boresight and n is the number of visible reflectors. Subscripts: s denotes small reflectors, and b denotes big reflectors. 'Levels' is the number of levels in the well. The table also shows the estimated retroreflected energy level results for the cases (L) averaged over the 1-4 kHz octave bands. Note that the average values of the distance and incidence angle are provided here, but the underlying individual values are used in the estimations of retroreflected energy.
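A sketch of this energetic summation is given below; it reuses the effective-area convention from the earlier sketch and, again, leaves the diffraction coefficient as a caller-supplied function:

```python
import numpy as np

def energetic_retro_level(reflectors, diffraction_coeff, freq):
    """Incoherent (energetic) sum over visible reflectors for the generic
    stepped-pond model, where exact step positions are statistical.
    `reflectors`: iterable of (r, edge_len, theta) for visible corners;
    `diffraction_coeff`: caller-supplied K(freq, area, r) in [0, 1]."""
    energy = 0.0
    for r, l, theta in reflectors:
        area = np.sqrt(3.0) * l**2 * np.cos(theta)
        K = diffraction_coeff(freq, area, r)
        energy += (K / (2.0 * r)) ** 2   # energies add; phases ignored
    return 10.0 * np.log10(energy)
```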
Appendix A.2 Simplified Model Predictions
The modelling indicates that the simplified model of Bala Kund has the greatest prospects for retroreflection, followed by Lahan Vav and Idar Stepped Pond (Table A1). The large stepwell models have the weakest prospects, even though they have the greatest number of steps. The results span a wide range of retroreflective conditions and suggest that the Lahan Vav case chosen for detailed analysis represents relatively strong retroreflective conditions.
The physical parameters are mutually correlated-larger-area wells tend to have more steps, a greater depth, more levels, a larger pool, etc. In the following, the architectural parameters are used logarithmically, i.e., the natural logarithm of each parameter is used. This provides more evenly distributed values that have a better linear fit with the reflected sound energy level. Table A2 shows the correlation coefficients between parameters. The best single architectural predictor of the estimated retroreflective energy level is the stepwell top area. A linear fit of the logarithm of the top area is shown in Figure A2 (R² = 0.754, p = 0.001, RMSE 2.4 dB). However, if we were to only consider the six smaller cases (top area ≤ 323 m²), there would be no apparent relationship between the top area and estimated retroreflection. Alternatively, the smallest case (Champa Bagh ka Kund) might be considered an outlier, hinting that there might be a small area limit on the negatively sloped relationship between stepped pond size and retroreflection strength. A further simplified way of understanding the results is that the expected reflected energy level should increase by 3 dB for doubling of the number of reflectors but decrease by up to 12 dB for doubling of the distance (accounting for geometric spreading and maximum diffraction loss). If the average distance is used as the single-number distance, then this is represented by 10 log(n/r⁴), shown in Figure A2b. Although this very simple formulation is only designed to approximate the slope, it returns values not far from the 1-4 kHz reflected energy level values.
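The behaviour of this indicator is easy to tabulate; the (n, r) pairs below are illustrative rather than taken from Table A1:

```python
import numpy as np

# 10*log10(n / r**4): +3 dB per doubling of reflector count, -12 dB per
# doubling of mean distance -- so doubling both nets about -9 dB.
for n, r in [(200, 8.0), (400, 16.0), (800, 32.0)]:
    print(f"n = {n:4d}, mean r = {r:5.1f} m -> {10 * np.log10(n / r**4):7.1f} dB")
```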
Appendix B.1 Retroreflection for Lower Positions in Large Stepped Ponds
While the results in the main paper, together with those in Appendix A, suggest that large stepped ponds have weak retroreflection, this finding applies to source positions at the top of the well. Another way of thinking about a large stepped pond is that it is a small stepped pond extended upwards (except that a larger well may have a larger pool). A source position lower into the well loses visibility of trihedral corners at upper levels, but the distances to the smaller number of visible trihedra should decrease. This trade-off is examined briefly in this appendix by varying the source-receiver height in two large stepped pond examples.
The first example is Panna Meena ka Kund, which was modelled as described in Section 2.2. The eight original source positions were lowered to be 1.5 m above the upper and lower loggias (height reduced by 4 m and 8.2 m, respectively). Note that the upper loggia's arches are currently filled in, but here, we imagine that they are open. The mean numbers of visible concave trihedra on the three levels for the eight positions are: 385 (top level), 331 (upper loggia) and 180 (lower loggia). The mean distances of the visible trihedra are: 15.9 m (top level), 13.9 m (upper loggia) and 11.5 m (lower loggia).
The second example is a generic stepwell model (using the Appendix A method) based on the main features of Chand Baori. The real Chand Baori has a complicated building on its non-stepped face, but for this example, source-receiver positions are simply in a vertical line, with one per stepwell platform level (14 positions at 1.88 m intervals).
Appendix B.2 Results
The results for Panna Meena ka Kund (Figure A3a) are consistent with the hypothesis that lower positions in a large stepped pond have greater retroreflective potential. The mean reflected energy levels are −14.8 dB, −13.0 dB and −12.0 dB for the top level, upper loggia and lower loggia, respectively. Values at the two loggias are similar to those at Lahan Vav in Table A1, and only Bala Kund has a retroreflected energy level greater than that of the lower loggia. The results for the simplified Chand Baori calculation (Figure A3b) are also consistent with the hypothesis. Excluding the bottom position, there is a highly linear relationship between the height of the source position and the reflected energy level (r = −0.998), and between the mean distance to visible reflectors and the reflected energy level (r = −0.996). Furthermore, the number of visible reflectors is similarly negatively correlated with the reflected energy level (r = −0.980)-which, of course, is the opposite to what would be expected if all other factors were held constant. Overall, this demonstrates the importance of the distance to reflectors in determining the retroreflected energy level, consistent with the relationship 10 log(n/r⁴), introduced in Appendix A (R² of 0.88 for all datapoints, or R² of 0.93 for datapoints excluding the bottom position).
The drop in reflected energy at the bottom position of the Chand Baori approximate model appears to be from the loss of access to big reflectors (six on the level above, two on the bottom level). The mean incidence angle also increases from 26.9° to 39.3° between the level above and the bottom level. The value at the second lowest position is again similar to that at the much smaller Lahan Vav in Table A1.
What Do the General Public Know about Infertility and Its Treatment?
Rates of infertility are rising, and informed decision making is an essential part of reproductive life planning with the knowledge that ART success decreases dramatically as a woman's age increases and that high costs can often be incurred during fertility treatment. We aimed to determine the current knowledge of infertility and its treatments in the general public through an online survey. We received 360 complete responses. The average age of respondents was 35 years with most respondents being female (90%), heterosexual (88%), white (85%) and university educated (79%). Of the total, 49% had children and 23% had a condition that affects their fertility; 41% had concerns about future fertility and 78% knew someone who had had fertility treatment. Participants' understanding of basic reproductive biology and causes of infertility varied, with correct responses to questions ranging from 44% to 93%. Understanding of IVF outcomes was poorer, with only 32% to 55% of responses being correct, and 76% of respondents felt that their education in fertility was inadequate. This survey highlights the inconsistencies in the general public's understanding of infertility in this relatively educated population. With increasing demands on fertility services and limited public funds, better education is essential to ensure patients are fully informed with regard to their reproductive life planning.
Introduction
Infertility is defined as a disease of the reproductive system with a failure to achieve a clinical pregnancy after 12 months or more of regular unprotected intercourse [1]. Infertility is common, with one in six heterosexual couples struggling to conceive [2], and a global prevalence of over 48 million couples affected [3]. In developed countries fertility rates continue to decrease, partly due to better access to contraception, but also due to increasing maternal age, increasing levels of obesity and continued negative stigma towards young parenthood [4][5][6][7]. The average maternal age in England and Wales has increased from 26.4 years in 1974 to 30.9 in 2021 [8], with this trend being replicated in other developed countries [9][10][11]. Infertility can lead to distress, depression, discrimination and ostracism, with associated costs to individuals and society being huge [3].
Despite its prevalence, the perception of and knowledge about infertility amongst the general public continue to be poor. One of the first surveys on infertility perceptions, conducted in 2000 and including 8194 adults from eight different countries, found that 62% of respondents did not perceive infertility to be a disease and that their awareness of the definition and incidence of infertility was low [12]. Other subsequent surveys have failed to show improvements in knowledge despite the increase in demand for assisted reproductive treatments (ARTs) [9,10,13-15].
The primary aim of our study was to assess the general knowledge about infertility in the UK. The secondary aim was to evaluate whether a difference in age, gender, education or sexual orientation accounted for any significant differences in an individual's knowledge of infertility.
Ethics
This online anonymous questionnaire was approved by the University of Liverpool's Institute of Life Course and Medical Sciences Research Ethics Committee (ref-11997).
The Survey
An initial literature review of previous surveys on infertility knowledge was performed; this informed our final survey, which included 40 questions (Supplementary File S1). The survey was divided into five main subsections: demographics, personal fertility history, knowledge of basic fertility, causes and risk factors, and knowledge of in vitro fertilisation (IVF) as a treatment option. The demographic data collected from the questionnaire included age, gender, sexual orientation, ethnicity, country of residence and highest level of education. The personal fertility history section included questions about previous fertility treatment and whether respondents would consider fertility preservation methods in the future. The section on knowledge of basic fertility biology was included to highlight areas of knowledge that are incomplete or incorrect. Questions related to causes and risk factors for infertility determined the participants' knowledge with regard to lifestyle factors or conditions that impact fertility. Finally, the knowledge related to IVF allowed us to determine the respondents' understanding of IVF treatments and their success rates.
The survey was advertised on social media through the online survey tool SurveyHero (www.surveyhero.com). A participant information sheet (PIS) was included for respondents to read prior to completing the survey, and the first question in the survey confirmed that participants consented to complete the questionnaire. Only the participants who provided consent were eligible to progress and complete the questionnaire. Questionnaires were included in the analysis if all questions were answered and the respondents fulfilled the inclusion criterion of being over 18 years of age. Survey responses were collected over a 3-month period.
Statistical Analysis
This survey was not designed as a comparative study to test a hypothesis, and thus a power calculation was not appropriate. Therefore, in line with our research aims, we report summary statistics of the data obtained from the survey. Where possible, the Statistical Package for the Social Sciences (SPSS) for Windows (Version 26; IBM Corporation, New York, NY, USA) was used to analyse the data, using the chi-squared test for categorical data or Student's paired t-test for continuous data.
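As a rough illustration of this analysis strategy, a minimal sketch of the chi-squared comparison on a hypothetical contingency table is shown below; the counts and the Python tooling are our own assumptions, not the study data, and the published analysis was performed in SPSS.

```python
# Minimal sketch of the chi-squared test described above, applied to a
# hypothetical 2x2 table of correct/incorrect answers by gender.
from scipy.stats import chi2_contingency

observed = [[260, 64],   # female respondents: correct, incorrect (hypothetical)
            [20, 16]]    # male respondents:   correct, incorrect (hypothetical)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```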
Results
There were 428 responses to the survey; 68 were excluded due to incomplete responses and 1 respondent was excluded due to being 16 years old.The final number of complete responses for analysis was 360.
Personal Fertility History
Of those surveyed, 190 (52%) did not have a child, 52 (14%) were trying to conceive and 77 (21%) had a known condition that could affect their ability to conceive in the future. Of the participants, 151 (42%) were concerned about having a child in the future, and the majority (278, 77%) knew someone who had gone through fertility treatment previously, with 51 (14%) having had fertility treatment themselves (Table 2). * Some people had more than one treatment.
Basic Fertility Knowledge
A total of 211 participants (58.6%) correctly identified the duration of time (12 months) that must pass before a diagnosis of infertility can be made. The majority of participants were able to correctly identify the number of days in the average menstrual cycle (335, 93.1%), the ovulation window (265, 73.6%) and the most likely day of ovulation (233, 64.7%). The optimal frequency of intercourse was answered correctly by 63.5% of participants (229). Respondents' knowledge about the lifespan of sperm in the female reproductive tract and the oocyte lifespan following ovulation varied. Although most participants were aware that a female's age has an impact on her fertility potential (340, 94.4%), knowledge of when fertility starts to decrease varied significantly between respondents (Table 3).
Participants were less aware that male fertility is affected by age, with only 164 (46%) choosing the correct response. Of those who correctly identified that male fertility decreases with age, only 79 (48%) correctly answered that its deterioration starts between 40 and 45 years of age.
Out of the nine questions in this section, the average number of correct responses was 3.9/9 (43%).
Causes of and Risk Factors for Infertility
Participants were given a list of potential causes of infertility in both women and men and were able to pick multiple options. The results can be seen in Tables 4 and 5. The main cause of tubal blockage (chlamydia) was identified by 210 (58.3%) participants. The majority (266, 73.9%) also answered correctly that the causes of infertility are equally spread between male and female partners. In this section, the average number of correct responses was 26.1/31 (84%).
Knowledge of IVF
Understanding of IVF was poor across all participants, with the highest correct response rate being 55% (n = 198), for the cost of IVF (GBP 1500-GBP 5000). Only 114 (32%) of participants were aware that 48 million couples are affected by infertility worldwide, 139 (38.6%) correctly identified the current IVF success rate of 32%, and 166 (46.1%) correctly answered that 8 million children have been born through IVF.
Similarly, 140 (38.9%) participants were aware that the average number of IVF cycles funded by the NHS is two. There was strong agreement that IVF should be funded by the NHS (326, 90.6%) and that two or three cycles of IVF should be funded (227, 63%). In this section, the average number of correct responses was 2.1/5 (42%).
When asked whether they had received substantial teaching on fertility at school/college, most participants felt their teaching was insufficient (272, 75.5%).
In total, the mean number of correct responses per participant was 34.7/47 (74%). When the questions on the causes of infertility were removed, the mean number of correct responses dropped to 8.6 out of 16 questions (54%).
Subgroup Analyses
There was no difference in responses between different age groups. When grouped by gender, males were less likely to identify the correct average menstrual cycle length (p < 0.001); otherwise, there was no difference between male and female responses. As there were only two non-binary participants in the survey, they were not included in the statistical analysis. When comparing education levels, the only question that showed a statistical difference in responses was that on which partner in a couple was more likely to be the cause of infertility (p = 0.011). As there was only one participant who had no formal education, they were not included in the statistical analysis.
All groups thought that the male and female partner were equally likely to be the cause of infertility; however, when "equal" was excluded as an answer, those who had a secondary-level education felt that females (n = 4) were more likely than males (n = 0) to be the cause of infertility. To a lesser extent, university-educated participants felt that females (n = 48) were more likely than males (n = 25) to be the cause of infertility. Those with a vocational education felt that males (n = 6) and females (n = 6) were equally likely to be the cause of infertility.
When comparing answers between groups of different sexual orientation, there were a number of statistically significant differences in the groups' responses. Heterosexual participants were less likely to think male depression impacted fertility (p = 0.018, Table 6), and homosexual participants were more likely to think that males having multiple sexual partners would affect fertility (p < 0.001, Table 6).
Discussion
This contemporary survey updates the 20-year-old worldwide survey [12] on the public perception of fertility. Whilst there are still some areas for improvement regarding particular responses, there seems to be an encouraging improvement in participants' understanding of infertility in comparison to previous surveys [9][10][11][12][13][14][15], with the average participant answering 74% of the questions correctly.
When reviewing this cohort's basic knowledge of fertility, the rates of correct responses ranged from 39% (how long is an oocyte capable of being fertilised by a spermatozoon?) to 93% (advancing female age affects fertility). Superficial knowledge related to the menstrual cycle, including cycle length (93%), ovulation window (74%) and ovulation day (65%), attracted a high number of correct responses; however, participants' responses regarding the lifespan of the oocyte (39%) or sperm (44%) revealed poor knowledge. As in other surveys, a high proportion of our participants were aware that female age affects fertility (94%); however, the effect of male age on fertility was not as well understood, with only 46% answering correctly. Seventy-four percent of the participants responded that, in couples struggling to conceive, both the male and female partner were equally likely to be the cause of infertility. However, this awareness of female factors and apparent lack of awareness of the impact of male age on fertility is likely due to the focus of infertility treatment still being on females, even in cases of male infertility [16,17]. As a consequence, male infertility is discussed less in the public domain, often leading to a lack of awareness regarding the male role in infertility and conception [16,18].
Our participants showed a poor understanding of the definition of infertility, with 59% answering correctly; however, in comparison to previous studies, this suggests a slight improvement [9,10,14]. This lack of understanding can impact future patient care. In some cases, couples will delay treatment, potentially reducing their chances of conception with future treatment [19,20], whereas others may seek investigations and treatments too early, incurring additional costs to themselves and to the health service [21].
It was reassuring to see that participants were aware of the potential risk factors that impact fertility, including smoking, obesity and alcohol. However, despite the high mean score in this subsection, there were a number of incorrect answers. Worryingly, a third of participants thought that hormonal contraceptives, previous termination of pregnancy, recurrent urinary tract infections and candida infections impact fertility prospects, despite evidence to the contrary [22][23][24][25][26][27].
Despite 77% of participants knowing people who had gone through fertility treatment, the average score for knowledge of IVF was 42%. This poor score was surprising given the high prevalence of fertility treatment. Knowledge was equally poor among participants who claimed to know others who had had fertility treatment or who had been through treatment themselves (n = 279, 77.5%), thus highlighting the need for further education. Despite the lack of knowledge regarding the IVF process, there was strong support for IVF treatment, with over 90% of participants advocating for NHS-funded treatments. This positive outcome has been mirrored in many other previous studies [10,12,28], with most agreeing with the current National Institute for Health and Care Excellence (NICE) recommendation [29] for three funded cycles of IVF treatment.
Interestingly, in the subgroup analysis, there were very few differences noted between groups. Resoundingly, the majority of participants in this cohort felt that they had had insufficient education on basic fertility and treatment. Our findings highlight the need for a further review of the current secondary education exposure to fertility teaching. Relationship and sex education became mandatory in schools in England and Wales in 2020; it would be interesting to understand whether responses to our questionnaire would improve as a result of this new mandatory requirement of the curriculum.
In relation to fertility education specifically, one key strategy may be the increased integration of reproductive life planning (RLP) into both secondary and tertiary education settings in the UK. RLP aims to encourage individuals to reflect on their reproductive plans and on what actions to take to realise them [30]. The combination of overestimation of IVF success rates, reduced awareness of the epidemiology of infertility, an increasing age of childbearing, and several respondents who would not consider fertility preservation methods indicates a need for greater fertility awareness. Integration of these aspects in a quality RLP tool to be used in secondary and tertiary education settings may facilitate this education process.
Limitations
This survey was answered predominantly by white, heterosexual, university-educated females living in the UK. It is therefore unfortunately not representative of the wider general population, and further surveys including respondents from a wider demographic background are required to obtain a more representative sample of participants. However, since the respondents in our survey would traditionally have been expected to be more aware of their own fertility, their responses demonstrating poor overall knowledge further highlight the deficiencies and inconsistencies in current education related to fertility. Future studies should explore the information sources currently used by the general public regarding fertility to ensure that we prevent the spread of misinformation and inform and empower people appropriately through reliable sources of information that are universally accessible and suitable regardless of their level of technological literacy.
Conclusions
This online survey highlights the significant inconsistencies in the understanding of infertility among respondents from the UK. With increasing demands on fertility services and limited public funds allocated for infertility treatment, patients will benefit from being well informed about how and when to start a family if desired, the costs associated with fertility treatments, and the options for fertility preservation. The complexities that exist with advanced fertility treatments and the limited success rates of IVF make it essential for couples to be well informed when making decisions regarding their fertility treatments.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ejihpe14080141/s1, File S1: Knowledge of fertility questionnaire. Informed Consent Statement: A participant information sheet (PIS) was included for respondents to read prior to completing the survey, and the first question in the survey confirmed that participants consented to complete the questionnaire. Only the participants who provided consent were eligible to progress and complete the questionnaire.
Author Contributions:
Conceptualization, N.T., A.F. and C.M.; methodology, N.T., A.F. and C.M.; formal analysis, L.N. and L.H.; writing-original draft preparation, L.N.; writing-review and editing, N.T., A.F., C.M., L.H. and D.K.H.; supervision, N.T. and D.K.H. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Institutional Review Board Statement: This online anonymous questionnaire was approved by the University of Liverpool's Institute of Life Course and Medical Sciences Research Ethics Committee (ref-11997).
Table 3. Basic fertility knowledge. Bold shows the correct answer.
Table 4. Which of the following factors can negatively impact female fertility?
Table 5. Which of the following factors can negatively impact male fertility? Bold shows factors that affect male fertility.
|
2024-07-27T15:10:11.694Z
|
2024-07-24T00:00:00.000
|
{
"year": 2024,
"sha1": "45f86718059f5dbfea2c940fbbe0591ca7ea7e45",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2254-9625/14/8/141/pdf?version=1721806780",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "84d56bfc64f4c8e21a14b053d7434bb1f7ee9a35",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
224909989
|
pes2o/s2orc
|
v3-fos-license
|
Influence of roasted and unroasted terebinth (Pistacia terebinthus) on the functional, chemical and textural properties of wire-cut cookies
Introduction
A cookie is a bakery product obtained by processing, shaping and baking flour to which raising agents, sugar, salt, oil, and one or more of the other food products allowed by regulations have been added. The cookie is widely consumed among bakery products because of its practicality, good nutritional quality, and cheapness. Cookies have an important place among snacks because they can last for a long time without spoiling, they appeal to the taste of the consumer, they can be presented in different flavors, and they can be consumed as a snack when the three regular meals provide insufficient nutrients (Demirel & Demir, 2018; Kolawole et al., 2018).
Changing nutritional habits toward the consumption of more fruits, vegetables, and cereals is an effective and practical approach to the prevention of chronic diseases. In recent years, it has been scientifically demonstrated that the intake of certain foods through "natural" means prevents or cures some diseases (Coşkun, 2005). Today, consumers' preferences for a healthier lifestyle incline towards low-calorie, high-fiber foods with low sugar and salt content and fewer additives (Demir, 2015). The enrichment of cookie formulations, which are consumed particularly by children but are also appealing to adults as a snack food, is very important because it ensures that such functional components are taken into the body during consumption (Doğan & Meral, 2016).
Terebinth is one of the important plant species in terms of its chemical properties. Terebinth has recently been reported to be a plant rich in antioxidant properties, phenolic content, fat content, fatty acid components, and tocopherol content (Couladis et al., 2003; Topçu et al., 2007; Özcan et al., 2009; Dalgıç et al., 2011). In addition, some studies focusing on the chemical properties of terebinth from a medicinal point of view have highlighted and supported the use of this plant in folk medicine (Özcan et al., 2009; Bakirel et al., 2003; Giner-Larza et al., 2001). Pistacia terebinthus L., commonly known as terebinth, is an oleiferous fruit with a specific flavor and high aroma value; it is collected from trees or shrubs found in forested areas of the Mediterranean region in August-October and is traded in markets, herbalist shops, and spice stores.
In Turkey, the terebinth tree is not cultivated as a crop, but its fruit is traditionally processed and consumed in unroasted or roasted form in various ways, from beverages to pastes (Karakaş & Certel, 2004). It has been shown that dry fruit extracts of P. terebinthus L. have some hypolipidemic effects without causing toxic effects in rabbits (Bakirel et al., 2003). It has been found that the fruits of P. terebinthus L. improved the lipid profile and caused reductions in atherosclerosis (Edwards et al., 1999). Epilupeol and epilupeol acetate found in the resin of Pistacia species have been observed to have antiviral activity (Özçelik et al., 2005). Grassmann et al. (2002) emphasized that the compounds identified in the essential oil of terebinth play an active role in the prevention of many diseases such as cancer and Alzheimer's. They also stated that these substances bind reactive oxygen species, entering the tissues easily owing to their lipophilic character, and thus show antioxidant properties.
Although terebinth has many functional properties, there is no study regarding its use as a functional component in food products. Therefore, this study investigated the effects of using terebinth fruit, which has many positive properties and is generally consumed as a coffee substitute and a snack, as a functional component in a cookie formulation, with particular attention to the nutritional quality of the cookies.
Materials
Wheat flour, powdered sugar, nonfat dry milk, sodium bicarbonate, ammonium bicarbonate, shortening and salt used in the cookie production were obtained from a local market in Kilis, Turkey. High fructose corn syrup was obtained from Beşan Starch Food Industry and Trade Inc. (Gaziantep, Turkey). The terebinth fruit used as an additive in the study was purchased from Şekeroğlu Spice Food Industry and Trade Company.
Cookie production was carried out by adding paste obtained from roasted and unroasted terebinth grains to the flour used in cookie production. A cookie without terebinth was also produced as a control sample. The terebinth grains were cleaned of foreign materials and spoilt grains and were separated into four groups: a) Group 1, roasted at 100 °C; b) Group 2, roasted at 125 °C; c) Group 3, roasted at 150 °C; d) Group 4, unroasted.
All groups except Group 4 were roasted in a pre-heated oven (Arçelik brand SUF 4000 MEB model set-top electric oven with temperature and time adjustment) for 20 minutes. Terebinth pastes were obtained by grinding the terebinth in a laboratory-type crushing mill.
Cookie preparation
Wire-cut cookies were prepared according to the principles specified in AACC Standard Method No: 10-54.01 (American Association of Cereal Chemists, 2010) by using the basic formula given in Table 1.
Terebinth pastes were added to the cookie formulation by replacing wheat flour at 0% (control), 10%, 20%, 30%, 40% and 50% by flour weight; a control sample without terebinth paste was also produced (Table 1). Nonfat dry milk, powdered sugar, salt and shortening were creamed using a KitchenAid mixer (KSM45 model). HFCS, ammonium bicarbonate and sodium bicarbonate dissolved in water were added to the mixture, which was mixed to obtain a homogeneous creamy texture. Finally, the flour or flour-terebinth paste mixture was added and mixed in to form the cookie dough. The dough was kneaded and sheeted to a uniform thickness of 5 mm and cut into circular shapes 50 mm in diameter. Baking was carried out at 205 °C for 11 min in an oven. The cookies were cooled at room temperature and stored in polyethylene bags until further analysis.
The sample names in the text are abbreviated as codes. Accordingly, unroasted terebinth is denoted A, terebinth roasted at 100 °C is B, terebinth roasted at 125 °C is C, and terebinth roasted at 150 °C is D. Terebinth addition rates are expressed as 1 for 10%, 2 for 20%, 3 for 30%, 4 for 40% and 5 for 50%.
Physical analysis
Diameter (D) and thickness (T) of the cookie samples after baking were measured by using a digital caliper according to AACC Method No: 10-54.01 (American Association of Cereal Chemists, 2010). The spread ratio (S) of cookies was estimated by calculating D/T values.
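For concreteness, a minimal sketch of the spread-ratio calculation with hypothetical caliper readings (the actual measurements are reported in Table 3):

```python
# Spread ratio per AACC 10-54.01: S = D / T, computed here for a
# hypothetical batch of six baked cookies.
diameters_mm = [71.1, 72.0, 72.4, 73.0, 73.3, 73.6]
thicknesses_mm = [9.8, 9.9, 10.0, 10.1, 10.0, 10.2]

spread_ratios = [d / t for d, t in zip(diameters_mm, thicknesses_mm)]
mean_spread = sum(spread_ratios) / len(spread_ratios)
print(f"mean spread ratio S = {mean_spread:.2f}")
```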
Total Dietary Fiber (TDF)
This analysis was conducted on flour, terebinth pastes, and cookie samples by using a total dietary fiber test kit (Megazyme International Ireland Ltd., Bray Business Park, Bray, Co. Wicklow, Ireland). This kit follows the same procedure as the method developed by Lee et al. (1992). Accordingly, after the samples were suspended in 10 mL of Mes-Tris buffer (pH 8.2), they were treated sequentially with heat-stable α-amylase, protease and amyloglucosidase enzymes in order to remove starch and proteins: starch was hydrolyzed at 100 °C with the heat-stable α-amylase, proteins were hydrolyzed at 60 °C with the protease, and the starch fragments were broken down into glucose units at 60 °C with the amyloglucosidase. In order to precipitate the non-starch polysaccharides (dietary fibers) and remove soluble protein and glucose units from the medium, 95% ethyl alcohol was added to the samples, which were left for 60 minutes. The samples were then filtered through glass crucibles with a Por 3 sintered filter. The residue in the flasks was washed with 78% ethyl alcohol, 95% ethyl alcohol, and acetone, in that order, and filtered again. To determine the total dietary fiber, the glass crucibles were dried overnight at 105 °C and weighed. The contents of the glass crucibles were then incinerated at 525 °C, and the ash weight was subtracted from the previously determined residue weight. Finally, the total dietary fiber was calculated as a percentage of dry matter.
Phytic acid
Phytic acid in flour, terebinth pastes and cookies was determined by using the colorimetric method of Haug & Lantzsch (1983). A finely ground sample was extracted with 0.2 M HCl for 180 min in a shaker at 175 rpm. After extraction, the solutions were centrifuged at 3000 rpm for 20 min and the clear supernatants were used for the phytic acid determination. Two milliliters of ferric solution (0.2 g of ammonium iron(III) sulfate dodecahydrate (Merck 3776) dissolved in 100 mL of 2 M HCl and made up to 1000 mL with distilled water) was added to 1 mL of supernatant (containing 3-30 µg/mL phytate phosphorus), and the solutions were mixed and heated in a boiling water bath for 30 min. The samples taken from the boiling water bath were placed in a cold-water bath and cooled to room temperature. After cooling, the samples were centrifuged again for 10 minutes at 3000 rpm. One milliliter of each centrifuged sample was transferred into a glass test tube, 3 mL of 2,2′-bipyridine solution (10 g of 2,2′-bipyridine (Merck 3098) and 10 mL of thioglycolic acid (Sigma 528056) dissolved in distilled water and made up to 1000 mL) was added (after adding the bipyridine, the samples appeared pink), the contents were mixed, and the absorbance was measured at 519 nm against distilled water. For each set of analyses, the method was calibrated with reference solutions in place of the sample solution. The calibration curve was made using the sodium salt of phytic acid (Sigma P-8810). The measurements were performed in duplicate.
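A minimal sketch of the calibration step is shown below; the standard readings are hypothetical placeholders, and the decreasing absorbance reflects the fact that more phytate leaves less free iron to form the pink bipyridine complex.

```python
# Fit a linear standard curve (absorbance at 519 nm vs. phytate-P) and
# invert it to convert a sample absorbance into a concentration.
import numpy as np

standards_ug_ml = np.array([3, 6, 12, 18, 24, 30])               # phytate P, ug/mL
standards_abs = np.array([0.92, 0.81, 0.62, 0.45, 0.30, 0.17])   # hypothetical A519

slope, intercept = np.polyfit(standards_ug_ml, standards_abs, 1)

def phytate_from_absorbance(a519: float) -> float:
    """Invert the standard curve to estimate phytate P (ug/mL)."""
    return (a519 - intercept) / slope

print(f"{phytate_from_absorbance(0.55):.1f} ug/mL phytate P")
```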
Total Phenolic Content (TPC)
The total phenolic content (TPC) of wheat flour, terebinth pastes and cookies was analyzed according to Yu et al. (2002) with some modification. A one-gram sample was extracted with 80% methanol for two hours at 200 rpm and 37 °C. The TPC of the sample extracts was determined using the Folin-Ciocalteu reagent. The reaction mixture contained 100 µL of sample extract, 1000 µL of the Folin-Ciocalteu reagent, and 2 mL of 10% sodium carbonate. After one hour of reaction, the absorbance at 765 nm was measured. Reactions were conducted in triplicate. TPC was calculated with the equation obtained from an absorbance/concentration standard graph previously generated with gallic acid, and the results were expressed as mg of gallic acid equivalents (GAE) per 100 g of sample.
Antioxidant activity
A modified version of the method of Yu et al. (2002) was used to analyze wheat flour, terebinth pastes, and cookies. This method is based on the spectrophotometric measurement of the decrease in color resulting from the destruction of the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical, a pink-colored stable compound. A 0.1 g sample was weighed, 3 mL of methyl alcohol was added, and a one-hour extraction was performed. The supernatant of the tubes, which had been centrifuged at 9000 rpm for 20 minutes, was taken into a separate tube and the extract stock was prepared. 3.9 mL of DPPH solution (0.025 g/L in methanol) was mixed with 0.1 mL of extract and kept at room temperature for 120 minutes. At the end of this period, the sample absorbances were measured at 515 nm. Results were calculated as the percentage inhibition of the DPPH radical by using Equation 1:

Inhibition (%) = [(A_Blank − A_Sample)/A_Blank] × 100 (1)

where A_Blank is the absorbance of the blank and A_Sample is the absorbance of the sample.
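As a worked example of Equation 1 (with hypothetical absorbance readings):

```python
# Percent inhibition of the DPPH radical from blank and sample
# absorbances read at 515 nm (Equation 1).
def dpph_inhibition(a_blank: float, a_sample: float) -> float:
    """Inhibition (%) = (A_Blank - A_Sample) / A_Blank * 100."""
    return (a_blank - a_sample) / a_blank * 100.0

print(f"{dpph_inhibition(a_blank=0.800, a_sample=0.520):.1f} % inhibition")  # 35.0
```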
Color analysis
The color tests for wheat flour, terebinth pastes and cookie samples were conducted by measuring L* (lightness, 100, white; 0, black), a* (+, red, -, green) and b* (+, yellow; -, blue) parameters by means of a tristimulus reflectance colorimeter (Hunterlab MiniScan EZ, Reston, Virginia, USA). Color measurements were made in three replicates, and the result was given as the average of these three values.
Textural properties
Fracture resistance (hardness) analysis of the cookies was performed with a TA-XT2i Texture Analyzer (Stable Micro Systems Ltd., Godalming, Surrey, UK) based on AACC Method No: 74-09.01 (American Association of Cereal Chemists, 2010). For this purpose, the three-point bending test technique (pre-test speed: 1 mm/s, test speed: 3 mm/s, post-test speed: 10 mm/s, test distance: 5 mm, trigger value: 5 g) was used. The cookies were placed on two vertical aluminum supports spaced 4 cm apart, and a force was applied at a speed of 3 mm/s towards their midpoint. The maximum force at the breaking point was recorded in Newtons [N].
Statistical analysis
The results were compared using analysis of variance (ANOVA) with respect to the terebinth roasting temperature and the terebinth addition rate. Means that were statistically different from each other were compared by using Student's t comparison test at the 5% significance level. JMP 11.0 (SAS Institute Inc., Cary, NC, USA) software was used to perform the statistical analysis.
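A minimal sketch of this comparison with hypothetical triplicate hardness values is shown below; the published analysis used JMP 11.0, so this only illustrates the test logic.

```python
# One-way ANOVA across samples, followed by a pairwise Student's t-test,
# on hypothetical hardness triplicates (N).
from scipy import stats

control = [13.5, 13.9, 13.2]
a3 = [11.8, 12.1, 11.5]
d5 = [9.9, 10.4, 10.1]

f_stat, p_anova = stats.f_oneway(control, a3, d5)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

t_stat, p_pair = stats.ttest_ind(control, d5)   # pairwise comparison
print(f"control vs D5: t = {t_stat:.2f}, p = {p_pair:.4f}")
```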
Raw material properties
Some analysis results for the wheat flour and terebinth pastes used in the cookie formulation are given in Table 2. The terebinth pastes had higher ash, protein, fat, TDF, TPC and antioxidant activity values than the wheat flour, and these differences were statistically significant except for protein (p < 0.05). Therefore, with its rich chemical composition, terebinth would be a suitable functional component for increasing the nutritional quality of cookies.
The TDF, TPC, and antioxidant activity values of the terebinth pastes roasted at different temperatures increased with the increase in roasting temperature. The structure and composition of a food, the type of heat treatment applied and the degree of temperature can cause an increase in the amount of phenolic components (Sakac et al., 2011). It has been stated that some gallate derivatives convert to gallic acid under the effect of heat, and that the components in the structure of a food, their interactions with each other and with phenolic components, and a number of reactions resulting from heat treatment can alter the antioxidant properties and phenolic substance profile of the food. In addition, antioxidant activity may increase due to the presence of intermediates formed by the Maillard reaction (Meral, 2011). As shown in Table 2, roasting caused a decrease in the amount of phytic acid in the terebinth pastes. The reduction of phytic acid by heat treatment is due to the effects of temperature on phytic acid and the formation of water-insoluble complexes between phytate and other components (Udensi et al., 2007). The ash, protein, fat, phytic acid and total phenolic contents of the wheat flour were 0.61%, 10.05%, 1.15%, 2.48 mg/g and 0.85 mg/g, respectively. These results are similar to the findings reported by Demir (2015). The ash content of the unroasted terebinth fruit (A) used in the study was determined as 2.40%, the protein content as 10.10% and the fat content as 38.11% (Table 2). In a study conducted by Özcan (2004), the ash, protein and fat contents of terebinth fruits were found to be 3.1%, 9.67% and 38.74%, respectively. As may be seen in the table, different roasting temperatures caused a significant increase in all chemical, functional and color characteristics of the terebinth except the ash, protein, and fat (p < 0.05); the amounts of ash, protein and fat also increased with the roasting process, although not significantly. The increase in fat content is thought to be due to the release of intracellular fat molecules as the roasting process denatures the proteins of the cell wall. The increases in ash and protein values are thought to be caused by a partial concentration due to moisture loss. When Table 2 is examined, it may be seen that the L*, a* and b* values of the wheat flour were 96.56, 0.50, and 8.02, respectively. The color values of the unroasted terebinth fruit (A) were L* = 14.79, a* = 4.04 and b* = 14.27; that is, the unroasted terebinth was darker, redder, and yellower than the wheat flour. It was found that the color values decreased and, in particular, the color darkness increased as the roasting temperature of the terebinth samples increased (p < 0.05).
Chemical and physical properties of cookies
The values of some chemical and physical properties of cookies are given in Table 3. The ash values of the cookies increased significantly with the increase in the terebinth addition ratio at each roasting temperature (p < 0.05).
The ash contents of the cookies varied between 1.37% and 2.33%. The D5 sample had the highest ash value (2.33%), while the control sample had the lowest (1.37%). It was also observed that the use of terebinth at increasing ratios in the cookie formulation significantly increased the amount of protein (p < 0.05).
While the amount of protein was determined as 9.50% in the control cookies, it ranged between 9.67% and 10.53% in the cookies with terebinth addition. As seen in Table 3, the protein values of the cookie samples with terebinth addition were higher than that of the control sample, and they increased with both the roasting temperature and the terebinth addition ratio. This increase in protein values was due to the high protein content of the terebinth fruit.
The fat content of terebinth fruit is higher than that of raw materials such as sunflower seed, olives, cottonseed, soy, safflower and rapeseed, which are widely used in oil production, but lower than that of oily seeds such as peanut, palm, sesame and coconut (Kaya, 2012). Since the fat content of terebinth is higher than that of wheat flour, the fat content of the cookies increased significantly as the terebinth addition ratio increased (p < 0.05) (Table 3). The fat values of the cookies also increased with the increase in roasting temperature. While the amount of fat in the control cookie was 20.56%, the highest fat content among the cookies with terebinth addition was observed in the D5 sample, with a value of 28.50%.
Diameter, thickness and spread ratio values are important parameters for determining the technological quality of a cookie; in general, a wide diameter, a high spread ratio and a low thickness are desired (Demir, 2015). The diameter values of the cookies with terebinth addition varied between 71.1 mm and 73.6 mm. While the average diameter values of the cookies increased with the increase in the terebinth addition ratio, they decreased as the roasting temperature increased (Table 3). The highest diameter value was obtained in the A5 sample (74.6 mm) and the lowest diameter value (68.4 mm) in the cookie sample (D1) with terebinth roasted at 150 °C. It was found that the addition of terebinth had a significant effect on the average thickness and spread ratio values of the cookies (p < 0.05). With the addition of terebinth, the spread ratios of the cookies increased but their thickness decreased. The spread ratio of cookies is largely dependent on the dough viscosity (Baumgartner et al., 2018); the increased spread ratio in the cookies with terebinth addition can therefore be explained by a decrease in dough viscosity. The increase in diameter and spread ratio and the decrease in thickness were found to be consistent with other studies in the literature (Kaur et al., 2019; Yağcı, 2019; Baumgartner et al., 2018; Bilgiçli et al., 2006). It is also reported in the literature that there is a negative correlation between the spread ratio of cookies and the protein content of wheat flour (Barak et al., 2013). However, in this study, the spread ratios of the cookies increased together with their protein contents. This is thought to be due to changes in the gluten structure during baking. In relation to this, Miller et al. (1996) stated that the gluten structure does not form during dough mixing but develops as a continuous network, with a visible glass transition, during baking; the spread of the cookie dough stops when this network is sufficient to halt its expansion. Thus, terebinth might have weakened the gluten structure, leading to an increase in the spread ratio of the cookies.
Functional properties of cookies
The total dietary fiber (TDF), total phenolic content (TPC), antioxidant activity (percent inhibition of DPPH radical), and phytic acid values of the cookie samples are shown in Table 4.
The addition of terebinth in both unroasted and roasted forms caused a significant increase in the TDF, TPC and antioxidant values of the cookies (p < 0.05). The reason for this increase is that the TDF, TPC and antioxidant levels of both unroasted and roasted terebinth are higher than those of wheat flour (Table 2). The TDF, TPC, and antioxidant values of all cookies with terebinth addition were higher than those of the control cookie. The highest TDF value (17.58%), TPC value (5.11 mg/g) and antioxidant value (33.83%) among all cookies produced in the study were found in sample D5. These results are consistent with those reported in previous studies (Kaur et al., 2019; Nakov et al., 2018; Molinari et al., 2017; Konak et al., 2015).
Although phytic acid has important functions for plants, it has some negative effects on the human body. One of the most important of these is that it forms complexes with some essential minerals such as Ca, Fe, Zn, and Mn, preventing their absorption. It can also act by binding a large part of the body's phosphorus as phytate phosphorus or by interacting with some amino acids (Zhou & Erdman, 1995; Dendougui & Schwedt, 2004; Egli et al., 2004; Hurrell, 2003). From this point of view, it is desirable that the level of phytic acid in food be low. The addition of terebinth in roasted form caused a decrease in the phytic acid value of the cookies. The highest amount of phytic acid (1.15 mg/g) was found in sample A5, while it decreased to 0.79 mg/g in sample D5. In a study investigating the phytate levels, iron content and in vitro availability of cookie samples with wheat fiber, oat fiber, apple fiber, inulin, soybean flour, amaranth flour and carob flour additions, the cookie samples with carob flour addition were found to have the lowest phytic acid content (5.31 mg/g) (Vitali et al., 2007). Bilgiçli et al. (2005) studied the effect of the addition of dietary fiber on the nutritional properties of cookies and found that the phytic acid values of cookies with apple fiber, lemon fiber and wheat fiber additions decreased. They reported that, with the increase in the fiber addition ratio, phytic acid values decreased from 2.26 mg/g to 1.83 mg/g in the cookies with apple fiber, from 2.19 mg/g to 1.62 mg/g in the cookies with lemon fiber, and from 2.22 mg/g to 1.78 mg/g in the cookies with wheat fiber.
In this study, the decrease in phytic acid values was thought to be caused by the roasting process applied to the terebinth fruit.
Color and textural properties of cookies
Color is one of the main features affecting the acceptability of cookies by consumers. The color of cookies is mainly attributed to the Maillard reaction, which occurs when reducing sugars react with the amino groups of proteins in the medium during baking under the effect of high temperatures (Hadiyanto et al., 2007). When the color values given in Table 5 are examined, it may be observed that the L* and b* values of the cookies produced with terebinth addition decreased, while the a* values increased (p < 0.05). The highest L* and b* values were observed in the control sample. In general, the addition of terebinth and the increase in the addition rates changed the color of the end product, resulting in a duller, redder product. Similarly, the roasting process applied to the terebinth also decreased the L* and b* values of the cookies and increased the a* values. The dull and dark color of the cookies can be attributed to terebinth's natural dark color (Table 2). Giuberti et al. (2018) reported that the use of alfalfa seed flour in cookie production affected the end product's color and that darker and less yellow products were obtained. In a similar study, it was found that the a* values of cookies increased, and the L* and b* values decreased, after the addition of bulgur bran (Özkeser, 2015). Textural properties such as hardness and durability, defined as the resistance of the cookie to deformation, are very important parameters in bakery products (Ahlborn et al., 2005), and they are among the main parameters affecting consumer acceptance. The hardness values of the cookies produced in this study decreased with the increase in the terebinth addition ratio; these decreases were observed especially at terebinth addition ratios of 30% or more (Table 5). Pareyt et al. (2008) found that, as the gluten content in cookie dough increased, hardness values decreased due to raising and density changes. The decrease in the hardness values of the cookies with terebinth addition can therefore be attributed to the decrease in the amount of gluten and the increase in the amount of fat due to terebinth addition. Kaur et al. (2019) reported that the hardness values of cookies with the addition of 10% and 20% roasted flaxseed flour were 13.41 N and 12.01 N, respectively. In another study, it was reported that increasing the burdock root flour addition to the cookie formulation decreased the hardness values of the cookies (Moro et al., 2018).
Conclusions
In this study, the effect of the addition of terebinth, unroasted and roasted at different temperatures, on cookie quality was investigated. Terebinth addition to the cookie formulation improved the functional properties of the samples in terms of ash, protein, fat, dietary fiber, total phenolic content and antioxidant content. All baked formulations showed a higher browning index as a result of the Maillard reaction, which also increases antioxidant capacity through the formation of melanoidins, compounds widely known to have antioxidant activity. In conclusion, the use of terebinth in the cookie formulation increased the nutritional properties, specifically the total dietary fiber, total phenolic content, and antioxidant activity, and terebinth can therefore be used as a functional component in cookie enrichment. In subsequent studies, it would be beneficial to further investigate how storage duration and conditions affect the quality characteristics and antioxidant potential of terebinth-fortified cookies or related bakery products.
|
2020-10-19T18:09:25.634Z
|
2020-09-28T00:00:00.000
|
{
"year": 2020,
"sha1": "a751b6ed5595d49a26ae987accb3d29f8688d906",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/cta/a/ZGhSWYWN94bvBpW7dvBDGkc/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b4db82a51cbf5dfe6706352d2e4607873f1c2c50",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
242778019
|
pes2o/s2orc
|
v3-fos-license
|
Co-circulation of two viral populations under vaccination
The interaction, and possibly interference, between viruses infecting a host population is addressed in this work. We model two viral diseases with a similar transmission mechanism and for which a vaccine exists. The vaccine is characterized by its coverage, induced temporary immunity, and efficacy. The population dynamics of both diseases consider infected individuals of each illness and hosts susceptible to one but recovered from the other. We do not incorporate co-infection. Two main transmission factors affecting the effective contact rates are postulated: i) the virus with a higher reproduction number can superinfect the one with a lower reproduction number, and ii) vaccination against the weaker virus induces some indirect protection that reduces the probability of infection by the stronger virus. Our results indicate that coexistence of the viruses is possible in the long term, even in the absence of superinfection. Influenza and SARS-CoV-2 are employed to exemplify this last point, observing that the time-dependent effective contact rate may induce either alternating outbreaks of each disease or synchronous outbreaks. Finally, for a particular parameter range, a backward bifurcation has been observed for dynamics without vaccination.
Introduction
The interaction between viral species follows known patterns of coexistence. The authors of [1] were the first to show that the inclusion of superinfection makes coexistence between competing species possible. Superinfection is a process where a competitive hierarchy exists among a group of species that compete for the same resources. In general (in the absence of vaccines or treatments), this hierarchy facilitates coexistence and prevents competitive exclusion of the weaker species, in a process mainly driven by the relative magnitudes of the reproduction numbers of both species [2], which reflect their abilities to use a contested common resource.
The process of superinfection has been identified as a promoter of the coexistence of pathogen strains in a given host population. The structure of the models that have been used to theoretically demonstrate this property [1,3,4] has also been applied to explain the organization of community structure for species that inhabit a common landscape but are not necessarily closely related [5]. For infectious diseases, the concept of superinfection has pervaded theoretical explanations for the coexistence, in the same host population, of variants of a given pathogen [6,7]. The pioneering model in [3] is about infectious diseases and has been used to explore a plausible hypothesis for the length of the latent period of HIV before the onset of AIDS. Simultaneously, these same authors [8] addressed the problem of community structure, postulating a trade-off between colonization and extinction [9] in the presence of a hierarchy of competitive abilities for the exploitation of the resources in a common landscape. For acute respiratory infections, this competitive hierarchy, which henceforth we will call superinfection, has been postulated to explain the alternating dynamics between influenza and RSV (respiratory syncytial virus). It is known that the reproduction number of influenza is higher than that of RSV; this fact, coupled with weather variability (seasonality), induces alternating patterns in which influenza and RSV infections have a limited temporal overlap [6]. Seasonal influenza has a median basic reproduction number of 1.28 [10], while for SARS-CoV-2 the median R_0 is 2.79 [11,10]. This difference in transmission potential supports the assumption that, as the two viruses compete for hosts (their common resource), the likelihood of long-term coexistence and co-circulation is high, and that this balance may be associated with superinfection.
Vaccines have a protective effect via direct immune responses and indirect effects, reducing, among other things, the burden of viral and bacterial respiratory diseases on individual patients [12]. However, in general, the impact of vaccination on the transmission of a viral disease starts slowly and builds up over several months to reach target coverage levels. Vaccines are evaluated in terms of efficacy, existing population immunity, coverage, temporary immunity (both natural and vaccine-induced), and reduction of mortality or infection risks [13]. Vaccines are applied in order to eradicate a disease; however, this aim depends on many factors and may not be fully achieved.
The present work addresses the general theoretical problem of the co-circulation and long-term persistence or eventual extinction of two viral respiratory infections subject to vaccination. In addition, we explore: i) the order relation between the basic reproduction number and the vaccine reproduction number, and ii) the existence of a backward bifurcation when vaccination is not considered. Finally, we exemplify some of our results with the case of the developing ecological interaction between SARS-CoV-2 and influenza.
The paper is organized as follows. In Section 2 we formulate our mathematical model. In Section 3 we develop the local analysis, including cases of backward bifurcation. In Section 4 we address a particular case, the co-circulation dynamics of influenza and SARS-CoV-2. Finally, in Section 5 we draw some conclusions about this work.
Mathematical model set-up
We formulate a mathematical model considering the simultaneous presence of two viruses and vaccination for each one. We do not distinguish between viral subtypes and, therefore, we approximate viral dynamics as if each of them were a single viral population. Both diseases are assumed to present temporary immunity and, thus, the possibility of reinfections. The model is a coupled system of two SIRS (susceptible-infected-recovered-susceptible) equations (Figure 1). Figure 1: A compartmental mathematical model for two coupled SIRS. s, i, y, r_i, r_y, y_ri, i_ry and r represent the populations of susceptible, infected by virus i, infected by virus y, immune from virus i, immune from virus y, immune from virus i but infected with virus y, immune from virus y but infected with virus i, and immune from both viruses, respectively. Here, r_k includes the population recovered after infection and successfully vaccinated against virus k, where k represents virus i or y. Dashed blue lines represent vaccination dynamics for both viruses, and the dashed red line denotes the superinfection process.
We assume a constant total population, normalized by the total population N. In the presence of vaccination against both viruses, the model takes the form of the system in eq. (1), where s, i, y, r_i, r_y, y_ri, i_ry and r are defined as described in Figure 1. Note that the immune population for each virus can be infected (vaccinated) by (against) the other virus. With the aim of approximating the seasonal component that drives ILIs [14,15,16], for numerical analysis we assume that the effective contact rates for both viruses, β_i(t) and β_y(t), are time-dependent. It is also assumed that virus y may potentially competitively exclude virus i from its host. The effective contact rate of this transmission route is modulated by α, with 0 ≤ α ≤ 1, which we define as the superinfection coefficient.
On the other hand, we define p_i, 0 ≤ p_i ≤ 1, to measure the indirect protective effect of an i-vaccinated individual against infection by virus y (e.g., as described for influenza and SARS-CoV-2 in [12]; additionally, [17] found a significant reduction in the odds of testing positive for COVID-19 in patients who received an influenza vaccine compared to those who did not, and in pediatric populations seasonal influenza and pneumococcal vaccination may have protective value in symptomatic COVID-19 disease [18]). This protective effect is modelled by a reduction of the effective contact rate of virus y when in contact with an individual immune from virus i (r_i). We assume that there are reinfections by both viruses due to the non-lasting immunity conferred by previous infections. Thus, θ_k is the loss-of-immunity rate related to virus k. To simplify the model, we assume that θ, the loss-of-immunity rate once an individual becomes immune from both viruses, is a constant and does not depend on the order of the infections. Other parameter descriptions are shown in Table 1. We do not distinguish between vaccinated and recovered individuals. Instead, we collect immunized individuals into a single compartment; r_i and r_y contain, therefore, those individuals that have either been vaccinated against virus i or virus y, respectively, or that, alternatively, have recovered from a natural infection by either of these two viruses. This modeling choice reduces the system's dimensionality.
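Since the displayed system in eq. (1) is not reproduced here, the following is a minimal numerical sketch of one consistent reading of Figure 1 and the definitions above. The exact placement of the superinfection (α) and protection (p_i) terms is our assumption, and the contact rates are taken constant rather than seasonal.

```python
# Sketch (not the authors' code) of the coupled two-virus SIRS system,
# reconstructed from Figure 1 and the parameter definitions in the text.
from scipy.integrate import solve_ivp

par = dict(mu=0.000039139, theta=1/365, theta_i=1/365, theta_y=1/180,
           eta_i=1/5, eta_y=1/14, omega_i=1/7, omega_y=1/16,
           alpha=0.5, p_i=0.05, phi_i=0.001603, phi_y=0.002341)

def beta_i(t):
    return 0.3  # seasonal forcing would make this periodic in t

def beta_y(t):
    return 0.2

def rhs(t, x, p):
    s, i, y, ri, ry, yri, iry, r = x
    lam_i = beta_i(t) * (i + iry)   # force of infection of virus i
    lam_y = beta_y(t) * (y + yri)   # force of infection of virus y
    mu = p["mu"]
    ds = (mu - (lam_i + lam_y + p["phi_i"] + p["phi_y"] + mu) * s
          + p["theta_i"] * ri + p["theta_y"] * ry + p["theta"] * r)
    di = lam_i * s - (p["eta_i"] + mu) * i - p["alpha"] * lam_y * i
    dy = lam_y * s + p["alpha"] * lam_y * i - (p["eta_y"] + mu) * y
    dri = (p["eta_i"] * i + p["phi_i"] * s
           - ((1 - p["p_i"]) * lam_y + p["phi_y"] + p["theta_i"] + mu) * ri)
    dry = (p["eta_y"] * y + p["phi_y"] * s
           - (lam_i + p["phi_i"] + p["theta_y"] + mu) * ry)
    dyri = (1 - p["p_i"]) * lam_y * ri - (p["omega_y"] + mu) * yri
    diry = lam_i * ry - (p["omega_i"] + mu) * iry
    dr = (p["omega_y"] * yri + p["omega_i"] * iry
          + p["phi_y"] * ri + p["phi_i"] * ry - (p["theta"] + mu) * r)
    return [ds, di, dy, dri, dry, dyri, diry, dr]

x0 = [0.8498, 0.0001, 0.0001, 0.1, 0.05, 0.0, 0.0, 0.0]  # sums to 1
sol = solve_ivp(rhs, (0.0, 3650.0), x0, args=(par,), rtol=1e-8, atol=1e-10)
print(sol.y[1, -1], sol.y[2, -1])  # long-run prevalence of i and y
```

Under these assumptions, births (µ) balance deaths, so the total population remains normalized to one, which is easy to verify by summing the right-hand sides.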
Effective target coverage definition. We define the vaccination rate φ_k, where k denotes either virus i or virus y, as an effective vaccination rate; in other words, φ_k incorporates vaccine efficacy. Under vaccination, susceptible individuals constantly leave this compartment at a rate −φ_k s, so the probability of having been vaccinated by time t is 1 − exp(−φ_k t). Therefore, if we wish a proportion q_k^E of the susceptible population to be vaccinated by time T_k, we set the vaccination rate to φ_k = −ln(1 − q_k^E)/T_k, (2) where q_k^E is the effective target coverage (ETC) with time horizon T_k: if the vaccine has efficacy σ_k and we apply the vaccine to a fraction q_k of the population, then only a fraction q_k^E = σ_k q_k is effectively protected, where k = i denotes virus i and k = y denotes virus y.
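A minimal sketch of this calculation (the numerical example is hypothetical):

```python
# Effective vaccination rate phi_k that reaches the effective target
# coverage q_k^E = sigma_k * q_k by the time horizon T_k (eq. (2)).
import math

def vaccination_rate(coverage_q: float, efficacy_sigma: float,
                     horizon_days: float) -> float:
    """Per-day rate solving 1 - exp(-phi * T) = sigma * q."""
    q_eff = efficacy_sigma * coverage_q
    return -math.log(1.0 - q_eff) / horizon_days

# e.g. 70% coverage at 90% efficacy over a four-month (120-day) horizon
print(f"{vaccination_rate(0.70, 0.90, 120):.4f} per day")  # ~0.0083
```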
Local analysis
First, we briefly characterize some basic properties of the solutions of model eq. (1).
The proof is immediate and follows from Proposition A.17 in Appendix A of [19].
Reproduction number
The disease-free equilibrium of eq. (1) always exists and is given by E_0 = (s*, i*, y*, r*_i, r*_y, y*_ri, i*_ry, r*), where

E_0 = (s*, 0, 0, φ_i s*/(µ + θ_i + φ_y), φ_y s*/(µ + θ_y + φ_i), 0, 0, φ_i φ_y s*/[(θ + µ)(µ + θ_i + φ_y)]), (3)

and H = (µ + θ)(µ + θ_i + φ_y)(µ + θ_y + φ_i). Note that the susceptible population at the disease-free equilibrium depends on the parameters for coverage and immunity for both viruses. Given the interaction of both viruses, their reproduction numbers give information on conditions for coexistence, competitive exclusion, or extinction. In what follows, we give a first characterization of these properties. When the diseases have not yet invaded the host population, but the host is vaccinated against both viruses, we compute the vaccine reproduction number. We proceed as in [20]; here, s*, r*_i and r*_y are defined in eq. (3). The vaccine reproduction number is then given by the spectral radius of the matrix F V^(-1),

R_v = max{R_vi, R_vy}, (4)

where the vaccine reproduction numbers for virus i (R_vi) and virus y (R_vy) are

R_vi = β_i [s*/(η_i + µ) + r*_y/(ω_i + µ)], R_vy = β_y [s*/(η_y + µ) + (1 − p_i) r*_i/(ω_y + µ)]. (5)

Remark: In the absence of vaccination for either one of the viruses, and for constant effective contact rates, the basic reproduction numbers are the classical expressions for an SIR epidemic, that is,

R_0i = β_i/(η_i + µ), R_0y = β_y/(η_y + µ). (6)

From eqs. (5) and (6), the order relation between R_vk and R_0k, with k = i, y, is not obvious. For example, Figure 2 shows that, taking µ = 0.000039139, β_i = 0.3, β_y = 0.2, η_i = 1/5, η_y = 1/14, θ = 1/365, θ_i = 1/365, θ_y = 1/180, ω_i = 1/7, ω_y = 1/16, p_i = 0.05, and varying both coverages q_k^E from 1% to 99% (eq. (2)), it is possible that R_vk > R_0k with k = i or y; i.e., vaccine application may enhance rather than reduce the occurrence of an outbreak. It is therefore important to find the conditions that guarantee that only a reduction occurs, i.e., that R_vk < R_0k. In Figure 2, the time horizons to reach the vaccination coverages against viruses i and y are fixed at four and three months, respectively. We further characterize the relationship between these reproduction numbers.
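As a quick check, under the classical reading of eq. (6), the Figure 2 parameter set gives basic reproduction numbers of roughly 1.5 and 2.8; the snippet below is our own sketch, not the authors' code.

```python
# Classical SIRS basic reproduction numbers R_0k = beta_k / (eta_k + mu)
# for the parameter values quoted for Figure 2.
def R0(beta: float, eta: float, mu: float = 0.000039139) -> float:
    return beta / (eta + mu)

print(f"R0_i = {R0(0.3, 1/5):.3f}, R0_y = {R0(0.2, 1/14):.3f}")  # ~1.500, ~2.798
```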
Lemma 3.2 Let R_vk and R_0k be defined as in eqs. (5) and (6), respectively, and let M_i = s* + [(η_i + µ)/(ω_i + µ)] r*_y and M_y = s* + (1 − p_i)[(η_y + µ)/(ω_y + µ)] r*_i. Then R_vk = R_0k M_k, so that R_vk < R_0k if and only if M_k < 1, for k = i, y. M_i and M_y can be interpreted as the effective proportions of susceptible individuals available for infection by virus i and virus y, respectively. This lemma underlines the dependence of the order relation between the reproduction numbers R_vk and R_0k, with k = i, y, on the pool of susceptible individuals for each virus.
The proof is given in Appendix A.
When R_0k < R_vk
In Lemma 3.2, we have shown that there is a range of parameter combinations such that R_0k < R_vk, with k = i, y, that is, where vaccination is not effective in reducing transmission. In this section we explore this case. We fix µ = 0.000039139, α = 0.5, φ_i = 0.001603, φ_y = 0.002341 and θ_y = 1/180. Thus, M_i = 1.131837 and M_y = 1.076793. Figure 3 shows the asymptotic equilibria, at t = 36500, of i and y as functions of R_vi and R_vy. Both effective contact rates are constant and defined such that 0.4 ≤ R_vi ≤ 1.6 and 0.5 ≤ R_vy ≤ 2, with the initial condition (s(0), i(0), y(0), r_i(0), r_y(0), i_ry(0), y_ri(0), r(0)) = (0.8498, 0.0001, 0.0001, 0.1, 0.05, 0, 0, 0). Figure 3 illustrates that when R_vk > 1, with k = i, y, both diseases coexist. Moreover, there exist combinations of parameter values such that R_vi < 1 < R_vy which also imply coexistence (Figure 3A). This phenomenon can be interpreted as a kind of rescue effect of one virus by the other. Figure 3B also shows this phenomenon. Finally, when R_vk < 1, with k = i, y, both diseases go extinct. Figure 4 shows viral coexistence. Here, β_i and β_y are 0.19 and 0.07, respectively. This case shows that even when both basic reproduction numbers R_0k are less than one, both viruses persist in the absence of vaccination. This pattern strongly indicates the existence of bi-stability and of a backward bifurcation [21]. Figure 5 confirms the existence of bi-stability. Here, we fix initial conditions (s(0), i(0), y(0), r_i(0), r_y(0), i_ry(0), y_ri(0), r(0)) = (0.8499 − y_0, 0.0001, y_0, 0.1, 0.05, 0, 0, 0). A backward bifurcation means that the reproduction number being less than unity becomes only a necessary, but not sufficient, condition for disease elimination. To further explore the existence of the backward bifurcation shown in Figures 4 and 5, we performed the numerical continuation of the equilibrium points in appropriate parameter regions of β_k, so that R_0k ranges below and above unity.
Figure 5: Bi-stability when R_0k < 1, with k = i, y. The red line represents coexistence of both viruses; the blue line shows extinction. The inset is a zoom of the extinction dynamics.
All numerical results were computed with MatCont (a MATLAB continuation toolbox) [22]. Figure 6 illustrates the numerical continuation of equilibrium points for viral coexistence and extinction of both viruses (in the absence of vaccination), projected onto the plane of β_k and virus k for k = i, y. Parameter values are β_i = 0.19, α = 0.5, θ_i = 1/365, θ_y = 1/180, θ = 1/365, η_i = 1/5, η_y = 1/14, ω_i = 1/21, ω_y = 1/56, p_i = 0. Figure 6A shows that the continuation curve for extinction of both viruses, denoted s, is locally stable (solid line) up to the value β_y = 0.0714676, corresponding to a reproduction number of R_0y = 1.0054. Along curve s, as the virus-y infection rate increases past this value, the disease-free equilibrium curve becomes unstable (dashed line). The coexistence curve of both viruses, denoted iy, is locally stable (solid line) for β_y ∈ [0.05814, 0.2697] or, equivalently, for 0.81 < R_0y < 3.77.
In Figure 6B, we show the numerical continuation of equilibrium points, for viral coexistence and extinction of both viruses, projected onto the plane i − β_y. The disease-free equilibrium branch s is locally stable (solid line) until R_0y = 1.0054. The stability interval of the branch with the two viruses present, iy (solid line), is 0.81 < R_0y < 3.79. Figures 6C,D illustrate the numerical continuation of equilibrium points (coexistence and extinction of both viruses) projected onto the planes y − β_i and i − β_i, respectively. In both, the disease-free equilibrium branch s is locally stable (solid line) until R_0i = 1.09, while the stability of the coexistence curve starts at R_0i = 0.502.
In all cases, we show that a stable coexistence equilibrium exists together with a disease-free equilibrium when R_0k < 1. The usual causes of backward bifurcation in standard deterministic models are imperfect vaccination [23,24], the existence of exogenous re-infections [25], vaccine-derived immunity waning at a slower rate than natural immunity [26], and the role of re-infection [27], among others [28]. Since the recovery rate from the second infection for both viruses is smaller than the recovery rate from the first infection, the influx into the pool of susceptibles occurs over a time window larger than expected if only one infection were present. This continuous and extended influx generates a backward bifurcation. Therefore, the vaccination strategy must be efficient (large coverage in as short a time as possible) to prevent the increase in the pool of susceptibles that may lead to undesirable outcomes.

In general, as superinfection increases in strength (α increases), we observe a corresponding slight decrease in the number of superinfected hosts (with virus i) and an increase in the superinfector (virus y). This is consistent with standard results [1,3,4]. However, there exist parameter values for which an increase in α produces an unexpected change in disease dynamics. Figure 7 shows the effect of increasing superinfection (α) for particular initial conditions and parameter values. As an example we take (s(0), i(0), y(0), r_i(0), r_y(0), i_ry(0), y_ri(0), r(0)) = (0.8496, 0.0001, 0.0003, 0.1, 0.05, 0, 0, 0). Other parameters are as in Figure 5 to ensure that R_0k < 1, with k = i, y. Apparently, there is a threshold value of α that switches the dynamics from coexistence to extinction of both viruses.
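The apparent threshold in α can be located by scanning the superinfection strength and classifying the long-run state. A sketch reusing asymptotic_state() from the earlier snippet, where make_params(alpha) is a hypothetical helper returning the Figure 5 parameter set with the given α:

```python
import numpy as np

def coexists(rhs, x0, params, tol=1e-6):
    """True if both infected classes persist asymptotically."""
    x_inf = asymptotic_state(rhs, x0, params)
    return x_inf[1] > tol and x_inf[2] > tol

x0 = [0.8496, 1e-4, 3e-4, 0.1, 0.05, 0.0, 0.0, 0.0]
# for alpha in np.linspace(0.0, 1.0, 41):
#     print(alpha, coexists(rhs, x0, make_params(alpha)))
```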
A particular case: Influenza and SARS-CoV-2
The year 2020 and early 2021 were atypical in at least two ways. One is the SARS-CoV-2 pandemic becoming the dominant and most prevalent respiratory viral infection from the beginning of 2020. The other is the characteristic absence of a significant number of influenza cases; influenza activity has been almost null in the Southern hemisphere and now in the Northern, while the SARS-CoV-2 pandemic is active [29]. Figure 8 exemplifies this in the case of Mexico. The winter months (Northern hemisphere) did not produce influenza outbreaks concurrent with the COVID-19 resurgence and, thus, hospital capacity in Mexico and elsewhere was not compromised [30,31]. A possible explanation for the absence of influenza is that the measures taken to prevent COVID-19 (social distancing, mask use, etc.) also prevent influenza transmission. These measures have limited ability to stop COVID-19, which also spreads through aerosols, but may be quite effective at preventing influenza transmission. Nevertheless, the occurrence of a syndemic episode with co-circulating influenza and SARS-CoV-2 viruses is still a potential reality.
In this section, eq. (1) is used to explore co-circulation dynamics between SARS-CoV-2 and influenza. Given that, to date, there exists no evidence suggesting that a COVID-19 infection can displace an influenza infection, we consider α = 0. Baseline parameters are given in Appendix B.1.
Reproduction numbers, vaccine efficacy, effective coverage and temporary immunity
We now explore the dependence of the vaccine reproduction number (eq. (4)) on vaccination coverage and temporary immunity. For the rest of this section, we fix µ = 0.000039139, η_i = 1/5, η_y = 1/14, θ = 1/365, θ_i = 1/365, ω_i = 1/5 and ω_y = 1/14. The ETC for vaccine k (q_Ek) determines the corresponding φ_k through eq. (2), with k = i, y. We fix the effective transmission rates to β_i = 0.32 and β_y = 0.15. Figure 9 shows R_V as a function of the ETC for SARS-CoV-2 (q_Ey) achieved in three months (see eq. (2)) and of the average duration of immunity against SARS-CoV-2 (1/θ_y). The protective effect of the influenza vaccine (p_i) is also left free to vary. As expected, the maximum value of R_v is achieved when there is no vaccination, that is, q_Ey = 0. Likewise, we observe that as p_i increases, the maximum value of R_v decreases.
The effects of vaccine efficacy and time horizon on the vaccine reproduction numbers are presented in Appendix B.2.
Transmission reduction by vaccination (R_vk < R_0k)
Numerical exploration of eq. (1) allows us to postulate the diagram in Figure 10. We observe that when both vaccine reproduction numbers are less than one, both epidemics die out. When only one of the vaccine reproduction numbers is greater than one, the virus associated with that reproduction number persists and the other goes extinct. Finally, if both vaccine reproduction numbers are greater than one, coexistence of both diseases ensues. In Appendix B.3 we present numerical simulations in support of the results of this section.

Figure 10: Summary of the asymptotic dynamics when varying both vaccine reproduction numbers. Scenarios for SARS-CoV-2 and influenza when R_vk < R_0k, with k = i, y.
Figure 9: Vaccine reproduction number as a function of coverage and average duration of immunity for SARS-CoV-2. A) No protection from influenza vaccination (p_i = 0). B) 50% protection (p_i = 0.5). C) Full protection (p_i = 1). Target coverage (TC) for influenza is 35% in 4 months with a vaccine efficacy of 50%. Note that the greater p_i is, the higher the reduction in R_V.
Seasonal contact rates
Finally, we address the issue of the long-term dynamics of the interactions between the two viruses. Seasonal variability is important to explain intra-annual fluctuations of viral populations [6,7]. We incorporate seasonality using a periodic effective contact rate β_k(t) = β_k(1 + ε cos ωt), where ω = 2π/365 for an annual period, β_k is the baseline constant effective contact rate for virus k, and ε is the amplitude of the seasonal variation (strength of the periodic forcing, 0 < ε < 1). Due to the higher reproduction number, the lack of previous immunity, the absence of antivirals, the scarcity of vaccines, and the equal and homogeneous effect of NPIs on reducing the effective contact rate, henceforth we consider SARS-CoV-2 as the competitively dominant virus in this interaction with influenza. This behavior is similar to the dominance of RSV over influenza.

Figure 11 illustrates alternation patterns between influenza and SARS-CoV-2. Here, parameter values are β_i = 0.45, β_y = 0.22, α = 0, θ_i = 1/365, θ_y = 1/180, θ = 1/365, η_i = 1/5, η_y = 1/14, ω_i = 1/5, ω_y = 1/14, p_i = 0.5, ε = 0.8, q_i = 0.2, q_y = 0.3. Figure 11A shows that influenza and SARS-CoV-2 have stronger outbreaks every two years in an alternating sequence. This behavior is related to a reduction in susceptibility and an increase in the strength of the periodic forcing under a vaccination scheme. Note also that SARS-CoV-2 peaks are weaker every two years when they coincide with stronger influenza outbreaks, and vice versa, suggesting competition for hosts.
Figures 11B,C show that this alternating sequence can be preserved even under vaccination against both viruses. Thus, in this scenario, the amplitude of the primary infections of each virus is alternately greater every two years, and this behavior may indeed correspond to the competitive alternation of both viruses.
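The periodic forcing itself is simple to implement; a minimal sketch of the seasonal contact rate defined above:

```python
import numpy as np

def beta_seasonal(t, beta_k, eps):
    """beta_k(t) = beta_k * (1 + eps*cos(omega*t)), with omega = 2*pi/365
    for an annual period; Figure 11 uses beta_i = 0.45, beta_y = 0.22
    and eps = 0.8."""
    omega = 2.0 * np.pi / 365.0
    return beta_k * (1.0 + eps * np.cos(omega * t))
```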
Other patterns between the two viruses are shown in Section B.4.
Conclusions
Influenza and SARS-CoV-2 will likely be co-circulating in the near future in many countries. However, vaccination campaigns have not begun at the same time. For example, in the Northern hemisphere the influenza vaccination campaign started in the Fall of 2020, while that for SARS-CoV-2 began in early 2021 in many parts of the world. Therefore, it is important to carefully plan vaccination campaigns and to define a realistic and sufficient coverage to avoid the situation described in the first paragraphs of this section.
Our results show that the parameters related to SARS-CoV-2, such as the average time of loss of immunity, the effective target coverage, and the protective effect against infection by the coronavirus, are all relevant in reducing the vaccine reproduction number (Figure 9).
Figure 11: Alternation patterns. A) Annual prevalence cycles for primary influenza infections (blue line) and primary SARS-CoV-2 infections (dashed blue line). B) Annual cycles for primary (orange dashed line) and secondary (orange line) influenza infections. C) Annual cycles for primary (black dashed line) and secondary (black line) SARS-CoV-2 infections.
To date, some parameter estimates for SARS-CoV-2, such as the duration of temporary immunity, remain unknown, and there is large uncertainty about vaccine availability for developing or poor countries.
Our model also shows, as expected, that the asymptotic behavior is closely associated with the vaccine reproduction number for each type of virus. For example, when considering realistic parameters for influenza and SARS-CoV-2, Figure 10 shows that if the vaccine reproduction number for influenza is greater than one and the vaccine reproduction number for SARS-CoV-2 is less than one, then influenza persists and SARS-CoV-2 is eradicated. Coexistence sets in when both vaccine reproduction numbers are above one. This may happen as a consequence, for example, of low coverage or of a long time horizon for achieving it.
We have also numerically explored the behavior of our model with time-dependent effective contact rates. We consider this relevant because COVID-19 is a new disease with a transmission route similar to other viral infections and, in consequence, seasonal variability can shape future behaviors when considering co-circulation dynamics.
In general, we observe that influenza epidemics have lower amplitude and show inter-epidemic periods with very low prevalence, whereas SARS-CoV-2 epidemics are broader in amplitude and show a clear endemic phase between outbreaks. In the simulated scenarios, we have observed that the prevalence of secondary cases (hosts that are susceptible to one virus but have recovered from the other) of both viruses decreases. The simulations assume an effective contact rate for influenza higher than that of SARS-CoV-2; however, the force of infection of this last virus (the one with the higher reproduction number) is much greater, since the infection rate depends on the number of
contacts per unit time, but also on the infection probability per contact and the infectious period, which differ between the two viruses. Also, we have found that the transmission dynamics lead to alternating patterns in which influenza and SARS-CoV-2 have stronger outbreaks every two years in an alternating sequence, suggesting competition for hosts (see Figure 11).
In a more theoretical and general case, our results show that the vaccine reproduction number can, for some parameter combinations, be higher than the basic reproduction number, opening up the possibility of the undesired outcome where vaccination has a negative public health impact at the population level. This outcome underlines the importance of the design of the vaccination strategy and of the availability of vaccines. Figure 2 shows that a low ETC may push the vaccine reproduction number above the basic reproduction number, resulting in a higher prevalence than in the case without vaccination. This low-ETC case is unrealistic in most contexts but could be of consequence in critical settings such as war, social disturbance, and natural disasters, where coverage may fall short of the desired target.
Finally, in the absence of vaccination, we have shown that there are conditions under which the basic reproduction numbers do not need to be greater than one for both diseases to coexist (Figure 4). This confirms the existence of bi-stability and of a backward bifurcation; in this special case, the recovery rate from the second infection with either virus is slower than the recovery rate from the first infection. For this same situation, when R_0k < 1, k = i, y, there are initial conditions for which superinfection can switch the stability of the equilibria: low α gives coexistence and higher α, extinction (see Figure 7).
For the characteristic polynomial p_1(λ) of J_E0, we observe that a_1, a_2, a_3, a_4 > 0 and a_1 a_2 a_3 − a_3^2 − a_1^2 a_4 > 0. Hence, p_1(λ) satisfies the Routh-Hurwitz criterion and, in consequence, all roots of p_1(λ) have negative real parts. Likewise, it is clear that all roots of p_k(λ) are negative if and only if R_vk < 1, with k = i, y. Therefore, all eigenvalues of J_E0 have negative real parts if and only if R_v < 1.
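The stability conditions used in this step can be checked mechanically; a minimal sketch for a monic quartic with coefficients a_1, ..., a_4:

```python
def routh_hurwitz_quartic(a1, a2, a3, a4):
    """All roots of l**4 + a1*l**3 + a2*l**2 + a3*l + a4 have negative real
    parts iff a1, a2, a3, a4 > 0 and a1*a2*a3 - a3**2 - a1**2*a4 > 0."""
    return (min(a1, a2, a3, a4) > 0
            and a1 * a2 * a3 - a3 ** 2 - a1 ** 2 * a4 > 0)
```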
Appendix B Influenza and SARS-CoV-2
B.1 Model parametrization
To explore numerical scenarios for influenza and SARS-CoV-2, we estimate the baseline parameters from bibliographical sources. According to [32], for seasonal influenza the median reproduction
number is 1.28, and that for SARS-CoV-2 is 2.79, but sources report large variability [33]. Likewise, [10], citing several sources, reports an R_0 for influenza in the range 1.06-3.4 with a mean of 1.68. The same source gives an incubation period in the range of 1-6.3 days with a mean of 2.61 days, and an infectious period (η_i) range of 1-9 days with a mean of 4.58 days. For SARS-CoV-2, [34] reports an incubation period of 3 to 4 days and an infectious period (η_y) of 4-5 days, but [10] gives an incubation period in the range 1.9 to 14.7 days with a mean of 5 days, and an infectious period in the range 7-35 days with a mean of 15.2 days.
Influenza vaccine efficacy varies every year. For 2019-2020, [35] reports σ_i = 0.29, but in 2010-2011 σ_i = 0.6. [36,37] have argued that efficacy declines because of waning immunity, which may last 6 months for influenza A(H1N1) and influenza B and at least 5 months for influenza A(H3N2). Besides these reported efficacies, we postulate a parameter p_i that mimics a protective role conveyed by influenza vaccination against SARS-CoV-2 infection. This hypothesis is based on the work of [38], which reports that protective influenza vaccination does not negatively affect the risk of contracting coronaviruses. We explore the possibility that the effect is positive, thus conferring a reduction in the risk of SARS-CoV-2 infection.
For SARS-CoV-2, several vaccines have been deployed with efficacies in the range of 50-95%, with more likely scenarios around 70%. The Pfizer and Moderna vaccines have efficacies at the upper end of this interval; the AstraZeneca vaccine efficacy is around 75%. On the other hand, coverage has three basic scenarios: low, 20%; medium, 50%; and high, 80%. Given the form in which we are modeling coverage, we set up scenarios where the above percentages are reached after T = 90, 180 and 365 days. We assume that SARS-CoV-2 immunity ranges from half a year to lifelong, with a more likely scenario of one year. These estimates are largely based on data on immunity to other coronaviruses [39]. Currently, it is not known whether past infections will prevent severe COVID-19 upon re-infection with SARS-CoV-2.
B.2 Reproduction numbers, vaccine efficacy, effective coverage and temporary immunity
Every year influenza arrives in waves, with each new influenza epidemic produced, in general, by a different viral strain in a process of lineage or variant replacement [40]. To mimic this situation in a simple yet reasonable way, we aggregate all the influenza strains into a single influenza epidemic and allow for reinfections, given the assumption that temporary immunity against influenza lasts one year. With this simplification, our model produces an annual pattern driven by the yearly weather variability.
B.5 Sensitivity analysis
For completeness, a variance-based sensitivity analysis known as the Sobol method [41,42] was conducted to evaluate the parameters' influence on the model's state variables. A global sensitivity analysis assumes that the output of a system is a function of a set of inputs (parameters). By
assuming that the vector of parameters is a random variable, the output is also a random variable. The total variability of the output, induced by the variability of the inputs, is decomposed into proportions associated with individual parameters or sets of parameters. The higher the proportion of variability caused by changes in a specific parameter, the higher the sensitivity of the model to that parameter. The parameters θ, θ_i, θ_y, φ_i, φ_y, p_i, β_i and β_y vary uniformly in the ranges presented in Table B.5.1, while the parameters η_i = 1/5, η_y = 1/14, ω_i = 1/5 and ω_y = 1/14 are held constant. For primary influenza infections, β_i is the most important parameter, with its individual influence decreasing over time; φ_i is the second most influential parameter, followed by β_y. This implies that changes in vaccination schemes are indeed important for controlling influenza. In the case of the SARS-CoV-2 (y) virus, the most dominant parameter is β_y, while the influence of the others is very small.

Figure B.5.2: Sobol indices for secondary infections. It can be seen that the influence of the parameters on primary and secondary infections of influenza is very similar. In the case of SARS-CoV-2 secondary infections, the protective effect gained from influenza (p_i) is only below β_y in terms of importance.
Table B.5.1: Lower and upper bounds of the parameters varied in the sensitivity analysis.
We also performed a sensitivity analysis for the vaccine reproduction numbers R_vi and R_vy (Figure B.5.3). For each disease, the parameter that changes the reproduction number the most is the corresponding contact rate. The other parameters that show an important effect are the vaccination coverages φ_i and φ_y, which, of course, depend on the corresponding ETCs q_Ei and q_Ey.
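The Sobol decomposition described above can be computed with standard tooling. A minimal sketch using the SALib package (one common implementation of the method in [41,42], not necessarily the one used here), where model_output() is a hypothetical wrapper returning the quantity of interest and bounds must be filled in from Table B.5.1:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 8,
    "names": ["theta", "theta_i", "theta_y", "phi_i",
              "phi_y", "p_i", "beta_i", "beta_y"],
    "bounds": bounds,  # ranges from Table B.5.1
}
X = saltelli.sample(problem, 1024)              # Saltelli sampling design
Y = np.array([model_output(row) for row in X])  # model evaluations
Si = sobol.analyze(problem, Y)                  # variance decomposition
print(Si["S1"], Si["ST"])                       # first-order and total indices
```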
Foot and Mouth Disease Vaccine Matching and Post-Vaccination Assessment in Abu Dhabi, United Arab Emirates
Simple Summary
Livestock in the United Arab Emirates (UAE) undergo annual vaccination against foot and mouth disease (FMD). The UAE animal health plan centers on the use of FMD vaccines to minimize disease impacts and control the spread of the disease. In this study, serotype O FMD virus (FMDV) isolates collected from outbreaks in 2021 were subjected to a vaccine matching analysis against six serotype O vaccine strains. Additionally, post-vaccination coverage for serotypes A and O of FMDV was evaluated using a solid-phase competitive ELISA. The findings indicate that the FMD vaccinal strains utilized in the Abu Dhabi Emirate were antigenically matched with the field isolates. Moreover, the implemented FMD vaccination program with a booster dose elicited FMDV-specific antibody responses in sheep and goat herds with >80% coverage.
Abstract
Despite the annual vaccination of livestock against foot and mouth disease (FMD) in the United Arab Emirates (UAE), outbreaks of the disease continue to be reported. The effective control of field outbreaks by vaccination requires that the vaccines used are antigenically matched to circulating field FMD viruses. In this study, a vaccine matching analysis was performed using the two-dimensional virus neutralization test (VNT) for three field isolates belonging to the O/ME-SA/PanAsia-2/ANT-10 and O/ME-SA/SA-2018 lineages collected from different FMD outbreaks that occurred within the Abu Dhabi Emirate in 2021 affecting Arabian oryx (Oryx leucoryx), goat, and sheep. In addition, post-vaccination antibodies in sheep and goats were measured using solid-phase competitive ELISA (SPCE) for FMDV serotypes A and O at five months after a single vaccine dose and a further 28 days later after a second dose of the FMD vaccine. An analysis of vaccine matching revealed that five out of the six vaccine strains tested were antigenically matched to the UAE field isolates, with r1-values ranging between 0.32 and 0.75. These results suggest that the vaccine strains (O-3039 and O1 Manisa) included in the FMD vaccine used in the Abu Dhabi Emirate are likely to provide protection against outbreaks caused by the circulating O/ME-SA/PanAsia-2/ANT-10 and O/ME-SA/SA-2018 lineages. All critical residues at site 1 and site 3 of VP1 were conserved in all isolates, although an analysis of the VP1-encoding sequences revealed 14-16 amino acid substitutions compared to the sequence of the O1 Manisa vaccine strain. This study also reports on the results of post-vaccination monitoring, where the immunization coverage rates against FMDV serotypes A and O were 47% and 69% five months after the first dose of the FMD vaccine and increased to 81% and 88%, respectively, 28 days after the second dose of the vaccine. These results reinforce the importance of using a second booster dose to maximize the impact of vaccination. In conclusion, the vaccine strains currently used in Abu Dhabi are antigenically matched to circulating field isolates from two serotype O clades (O/ME-SA/PanAsia-2/ANT-10 sublineage and O/ME-SA/SA-2018 lineage). The bi-annual vaccination schedule for FMD in the Abu Dhabi Emirate has the potential to establish sufficient herd immunity, especially when complemented by additional biosecurity measures for comprehensive FMD control.
These findings are pivotal for the successful implementation of the region's vaccination-based FMD control policy, showing that high vaccination coverage and the widespread use of booster doses in susceptible herds are required to achieve a high level of FMDV-specific antibodies in vaccinated animals.
Introduction
Foot and mouth disease (FMD) is caused by a virus (FMDV) belonging to the genus Aphthovirus in the family Picornaviridae. It is a highly contagious disease that affects all cloven-hoofed animals. FMDV is transmitted through direct contact between animals, animal products (such as milk, meat, and semen), mechanical transfer via people or fomites, and via the airborne route [1,2]. The incubation period following infection ranges from 2 to 21 days (average 3-8 days), depending on factors such as the species of the animal, infectious dose, serotype, and strain of the virus [3]. Affected animals commonly exhibit symptoms such as fever; the cessation of rumination; excessive salivation; and the appearance of blisters on the lips, tongue, mouth, nose, between the toes, and occasionally on the teats. Additionally, there may be a decrease in milk production. Young animals infected with FMD may experience higher mortality rates due to myocarditis, while those that recover from the disease may become carriers [3]. Although the fatality rate associated with FMD infection is low, it significantly impacts the economic capability of countries where the disease is endemic by reducing productivity and hindering the export of livestock and livestock products [4,5]. FMDV is classified into seven immunologically distinct serotypes (O, A, C, Asia 1, SAT 1, SAT 2, and SAT 3) distributed across seven geographic virus pools (1-7) [6,7]. The United Arab Emirates (UAE), located on the Arabian Peninsula within Pool 3, hosts FMDV serotypes O, A, and Asia 1. The predominant topotypes/lineages currently prevalent in this region include O/ME-SA/PanAsia-2, A/ASIA/Iran-05, and Asia 1/Sindh-08 [8,9]. However, the region has also encountered introductions of O/ME-SA/Ind-2001, O/ME-SA/SA-2018, and A/ASIA/G-VII from Pool 2 (South Asia), and SAT1/I and SAT 2/XIV from Pool 4 (East Africa) [10][11][12][13][14]. The current animal population in the UAE is estimated to be around 5 million animals, comprising 4.35 million small ruminants, 0.5 million camels, and 111,000 cattle. FMD is endemic in the UAE, affecting both domestic livestock and wildlife. It is classified as a notifiable disease in the UAE and was officially reported to the World Animal Health Information System (WAHIS) for the first time in 2003. To date, a total of 30 FMD outbreaks have been reported to WAHIS [15]. Serotype O is considered predominant in the UAE, and recent FMD outbreaks reported to WAHIS in 2021 were caused by two different lineages of FMDV, namely O/ME-SA/SA-2018 and O/ME-SA/PanAsia-2 [16].
In regions where FMD is endemic or where outbreaks are highly likely, prophylactic vaccination is commonly employed [17]. FMDV exhibits frequent spontaneous mutations, leading to considerable antigenic variability and the emergence of new topotypes and lineages, which can occasionally result in vaccination failure [18]. This antigenic diversity, both among and within serotypes, hampers the cross-reactivity of immune responses elicited by one FMDV strain against another, thereby limiting potential cross-protection. Thus, assessing the antigenic and immunogenic similarities between the vaccine strain and circulating field strains and ensuring their matching are essential for optimizing vaccination programs [19]. The expected level of protection provided by a vaccine is often measured by in vitro vaccine matching testing, which compares the seroreactivity of vaccine antisera to the vaccine strain (homologous reactivity) and to the field strains (heterologous reactivity) [20].
The monitoring and evaluation of national FMD control strategies is a key component of the progressive control pathway for FMD (PCP-FMD) of the Global FMD Control and Eradication Strategy and of the regional Middle East FMD control roadmap adopted by the FAO and the World Organization for Animal Health (WOAH). The UAE is at stage 2 (out of 5 stages) of the PCP-FMD, which implies the implementation of a risk-based control plan [2,3]. This requires the continual monitoring of outbreak strains, the evaluation of FMD risks, and the assessment of implementation levels and control methods. Principles and guidelines for advising countries on post-vaccination monitoring (PVM) procedures are published by the FAO and the WOAH [21].
The national animal health plan of the UAE aims to control and eradicate FMD from the country by 2030, aligning with the PCP-FMD. The plan involves the mass vaccination of cattle and small ruminants against FMD. Despite bi-annual livestock vaccination efforts, FMD cases continue to occur in the UAE, raising concerns about the introduction of new viral strains, a possible lack of antigenic matching of the FMD vaccine used, and the efficacy of livestock vaccination campaigns. Therefore, the objectives of this study were to evaluate whether the FMDV field strains causing outbreaks in the UAE are antigenically matched to the commercial FMD vaccine used and to assess the herd immunity induced in small ruminants by a single dose or two doses of the FMD vaccination regime commonly practiced in the Abu Dhabi Emirate. Three FMDV field isolates collected from three different outbreaks within the Abu Dhabi Emirate in 2021 were tested against strains of the vaccine used, and their VP1 sequences were further characterized and analyzed. Moreover, a small-scale trial was conducted to assess FMD vaccination coverage in sheep and goats vaccinated with a commercial FMD vaccine in 2023.
Field Isolates Used for Vaccine Matching
In 2021, three suspected outbreaks of FMD in the Abu Dhabi and Al Ain regions of the Abu Dhabi Emirate were reported on two non-FMD-vaccinated farms (A and B) and one farm (C) that had received a single dose of FMD vaccination [16]. These farms kept various animal species, and after outbreak investigations, samples were collected, including two mouth swabs (in viral transport media) from clinically affected animals exhibiting symptoms such as fever, lameness, and vesicular lesions (from an Arabian oryx (Oryx leucoryx) and one goat), and one heart tissue sample from a sheep (in a plain container) (Table 1). The specimens were placed in an ice box and immediately transported to the Abu Dhabi Agriculture and Food Safety Authority (ADAFSA) veterinary laboratories for molecular diagnosis of FMD. At ADAFSA, the samples underwent testing using RT-qPCR following the methodology described previously [16]. All samples were tested for the presence of FMDV, where an RT-qPCR threshold of >39 indicated a positive sample. Three serotype O isolates recovered on World Reference Laboratory-Line Fetal Bovine Kidney (WRL-LFBK) cells [22,23] were used in this study for vaccine matching at the FAO WRL for FMD (WRLFMD, Pirbright, United Kingdom) (Table 1). The VP1 sequences for these isolates were generated as previously described [16] and were deposited in GenBank.
Analysis of Amino Acid Sequence Variability
The amino acid sequences of the FMD isolates were translated using Geneious Prime version 2023.1. To compare the amino acid sequences of the field isolates with that of the O1 Manisa vaccine strain (GenBank: AY593823) used in the Abu Dhabi Emirate, a multiple sequence alignment was prepared using the Geneious alignment tool within the Geneious software. It should be noted that the sequence and identity of the other serotype O vaccine strain used in the UAE (O-3039) have not been publicly disclosed. The variability in amino acid sequences, particularly focusing on the critical residues at the BC loops, GH loops, and C-termini of VP1 at sites 1 and 3 in the VP1 coding sequences of the field strains, was evaluated in comparison to the sequence of the O1 Manisa vaccine strain.
Two-Dimensional Virus Neutralization Assay (2D-VNT) for Vaccine Matching
The serum utilized for the vaccine matching assay comprised a pool of sera from five vaccinated cattle administered with a monovalent vaccine. This serum was collected 21 days post-vaccination (except for O1 Campos, Biogénesis Bagó, which was collected 30 days post-vaccination) and subsequently tested against both the homologous virus (six different serotype O commercial vaccinal strains produced by three different companies) and the heterologous (field) virus [24]. Neutralization titers were determined from the regression data, representing the log10 reciprocal antibody dilution required for 50% neutralization of 100 tissue culture infective units of virus (log10 SN50/100 TCID50). The antigenic relationship between the vaccine virus and the field virus was calculated using the 'r1' ratio: the neutralizing antibody titer against the field virus divided by the neutralizing antibody titer against the homologous virus. An r1-value greater than 0.3 indicated a close antigenic relationship between the field isolate and the vaccine strain, suggesting the likelihood of effective protection conferred by a potent vaccine containing the vaccine strain [24].
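The r1 computation is a simple ratio once the titers are expressed as reciprocal dilutions rather than log10 values; a minimal sketch with hypothetical titers:

```python
def r1_value(titer_field, titer_homologous):
    """r1 = neutralizing titer against the field isolate divided by the
    titer against the homologous vaccine strain (reciprocal SN50 dilutions)."""
    return titer_field / titer_homologous

# Hypothetical log10 titers: 1.8 (field) vs. 2.1 (homologous).
r1 = r1_value(10 ** 1.8, 10 ** 2.1)  # ~0.50
matched = r1 > 0.3                   # r1 > 0.3 => antigenically matched
```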
Post-Vaccination Assessment
2.4.1. Study Area
In April-June 2023, post-vaccination sero-monitoring was applied to estimate vaccination coverage and the level of FMDV-specific antibodies on sheep and goat farms after a single dose or two doses of FMD vaccination in the Al Ain region of the Abu Dhabi Emirate, where the last FMD outbreaks were reported in 2021. The region harbors 1,720,164 sheep and goats distributed over 13,159 holdings, representing 65% of the total ruminant population within the Abu Dhabi Emirate.
Vaccine and Vaccination
Since 2019, a saponin-adjuvanted, NSP-purified, inactivated hexavalent FMD vaccine produced by Boehringer Ingelheim Animal Health (Pirbright, UK) and supplied by International Free Trade (IFT, Dubai, UAE) has been used to vaccinate livestock against FMD in the Abu Dhabi Emirate. This vaccine contains the following strains with a potency of over 6 PD50 per dose: O1 Manisa, O-3039, A Iran 05, A-GVII, Asia 1 Shamir, and SAT2 Eritrea. The vaccination regime comprises two doses (1 mL, at a 4-6-month interval) for sheep and goats and three doses (3 mL, at 4-month intervals) for cattle. The target coverage of the FMD vaccination campaign in 2023 was 85%.
Sample Size and Collection
The sizes of the animal holdings for sample collection were estimated using a two-stage cluster sampling method, aiming for a 95% confidence level, 5% precision, and an expected sero-prevalence of 80% [25]. Sixty-six individual mixed holdings of sheep and goats were selected using simple random sampling. In total, 396 serum samples were collected from adult (above 3 months of age) sheep and goats between April and May 2023. Six samples were collected from each small ruminant farm. Samples were collected at five months and at 28 days after the first and the second FMD vaccination doses, respectively. The selected animals were individually identified to ensure accurate monitoring and were observed for clinical signs of FMD. Blood samples were obtained by venipuncture from the jugular vein. The blood samples were drawn into serum separator vacutainers with a red cap and transported to the ADAFSA laboratory in cold boxes within 24 h. Upon reaching the laboratory, the serum tubes were centrifuged at 3500 rpm for 5 min at room temperature, and the sera were divided into aliquots and then stored at 4 °C until testing.
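For reference, the stated design targets correspond to the usual prevalence-estimation formula; a minimal sketch (the two-stage cluster design used here would additionally inflate the result by a design effect):

```python
import math

def srs_sample_size(p=0.80, d=0.05, z=1.96):
    """Sample size for estimating a prevalence p with absolute precision d
    at 95% confidence under simple random sampling."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

print(srs_sample_size())  # 246 for p = 0.8, d = 0.05
```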
Serological Testing
Serum samples underwent testing for structural antibodies to FMDV serotypes A and O using a commercially available solid-phase competitive ELISA (SPCE), following the guidelines provided by the manufacturer (Istituto Zooprofilattico Sperimentale della Lombardia e dell'Emilia Romagna (IZSLER), Brescia, Italy).
Amino Acid Sequence Variability Analysis for Identified Viruses
The sequence of isolate UAE/1/2021 (OR425051) exhibited amino acid sequence variability at 16 positions when compared to the O1 Manisa vaccine strain (GenBank: AY593823). Notably, 10 out of 16 amino acid substitutions were located at antigenic site 1 (GH loop, residues 138-156), as outlined in Table 2. The other two isolates, UAE/9/2021 (OR425053) and UAE/15/2021 (OR425057), also demonstrated comparable amino acid sequence variability compared to the O1 Manisa vaccine strain, encompassing substitutions at 16 and 14 positions, respectively. Of these amino acid substitutions, 8/16 (50%) and 7/14 (50%) were located at site 1 in UAE/9/2021 and UAE/15/2021, respectively, as indicated in Table 2 and Figure 1. As expected, the tripeptide sequence Arg-Gly-Asp (RGD), which forms the integrin-binding cellular receptor motif of FMDV (located at positions 146-148 of the VP1 amino acid sequence), was entirely conserved in all sequences. Furthermore, the critical residues of VP1 at site 1 [ßG-ßH loop (residues 144, 148 and 154) and carboxy terminus (residue 208)] and at site 3 (residues 43 and 44 of the ßB-ßC loop) were conserved in all isolates compared to the O1 Manisa vaccine strain (AY593823).
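The substitution counts reported here follow directly from a positional comparison of the aligned VP1 sequences; a minimal sketch, assuming the two sequences are already aligned and of equal length:

```python
def count_substitutions(field_vp1, vaccine_vp1, site1_gh=(138, 156)):
    """Count amino-acid differences between aligned VP1 sequences (1-based
    positions), overall and within the GH loop of antigenic site 1."""
    diffs = [i + 1 for i, (a, b) in enumerate(zip(field_vp1, vaccine_vp1))
             if a != b]
    in_gh = sum(site1_gh[0] <= p <= site1_gh[1] for p in diffs)
    return len(diffs), in_gh
```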
Vaccine Matching with 2D-VNT
The results of the two-dimensional virus neutralization test revealed that the O1 Campos (Biogénesis Bagó), O-3039 (Boehringer Ingelheim), O Manisa (Boehringer Ingelheim), O PanAsia-2 (Boehringer Ingelheim), and O/TUR/5/09 (MSD Animal Health) vaccine strains were antigenically matched to the O/ME-SA/PanAsia-2/ANT-10 sublineage and O/ME-SA/SA-2018 lineage isolates, with r1-values ranging between 0.32-0.48 and 0.32-0.75, respectively. However, poor vaccine matching results were observed for the O1 Campos (Boehringer Ingelheim) vaccine strain against both lineages (r1-values < 0.3) (Table 3). Specifically, the two FMD vaccinal strains included in the vaccine used by ADAFSA (O-3039, Boehringer Ingelheim; O Manisa, Boehringer Ingelheim) exhibited antigenic matching with all three field isolates causing FMD outbreaks, with r1-values ranging from 0.38 to 0.75.

Table 3. The vaccine matching results obtained for UAE isolates UAE/1/2021, UAE/9/2021, and UAE/15/2021. For each field isolate, the r1-value is presented, followed by the heterologous neutralization titer (r1-value, titer). An r1-value greater than 0.3 suggests a close antigenic relationship between the field isolate and the vaccine strain, indicating potential protection. Conversely, an r1-value less than 0.3 suggests an antigenic difference between the field isolate and the vaccine strain. The vaccine strains used in the Abu Dhabi Emirate are highlighted in gold.
Evaluation of FMD Vaccination in 2023
During blood sample collection, a clinical examination for the presence of FMD symptoms indicated an absence of infection in the targeted animal holdings. Across all species tested, the seroprevalence rates at five months after the initial FMD vaccination dose against FMDV serotypes A and O were 47% and 69%, respectively (Table 4). These rates increased to 81% and 88%, respectively, 28 days after the second FMD vaccination dose. Specifically, for serotype A, the immunization coverage rates were 53% and 39% in sheep and goats five months after the first vaccination, increasing to 78% and 86% in sheep and goats, respectively, 28 days after the second vaccination. Similarly, for serotype O, the immunization coverage rates were 72% and 47% in sheep and goats five months after the first vaccination, which increased to 92% and 82% in sheep and goats, respectively, 28 days after the second vaccination. Except for serotype O in sheep, the percentages of immunization coverage for both serotypes were below 70% in both species five months after the initial vaccination.
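The coverage percentages above are simple proportions of SPCE-positive sera; a minimal sketch that also attaches a 95% Wilson confidence interval (an addition, since the paper reports point estimates only), using hypothetical counts:

```python
from statsmodels.stats.proportion import proportion_confint

def coverage(n_positive, n_tested):
    """Immunization coverage with a 95% Wilson confidence interval."""
    lo, hi = proportion_confint(n_positive, n_tested, alpha=0.05,
                                method="wilson")
    return n_positive / n_tested, (lo, hi)

# e.g., hypothetical 348 positives of 396 sera -> ~0.88 coverage
print(coverage(348, 396))
```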
Discussion
Vaccination stands as the primary strategy for controlling FMD in endemic regions like the UAE. The combination of vaccination and stamping out has proven effective in reducing or eradicating FMD in Europe and large parts of South America. Nonetheless, the highly contagious nature of FMDV, the presence of various circulating serotypes and their associated topotypes, the possible presence of wildlife reservoirs, and the continual emergence of new strains that are poorly matched to existing vaccines pose significant challenges for endemic countries in effectively controlling and mitigating the disease burden at both the national and regional levels [7]. The Sub-Saharan Africa and Middle East-South Asia regions remain endemically affected by various FMD serotypes circulating extensively across these regions. This situation has profound implications for livestock production and poverty alleviation, hindering access to international markets, restricting genetic improvement, and impeding dairy production development [26].
In the Abu Dhabi Emirate, according to the national livestock vaccination program, sheep and goats undergo bi-annual vaccination against FMD using commercial vaccines to control the disease in these animal populations. The selection of the FMDV strains included in the vaccine used to implement the UAE national FMD control and eradication plan aligns with the recommendations regularly provided by the WRL-FMD and with the updated outcomes of the UAE national animal health plan. Specifically, the current vaccine manufactured by Boehringer Ingelheim and used in the Abu Dhabi Emirate comprises the FMDV strains O1 Manisa, O-3039, A Iran 05, A/ASIA/G-VII 2015, Asia 1 Shamir, and SAT2 Eritrea.
The occurrence of FMD outbreaks in the UAE in 2021 [16] highlighted the potential for the emergence and circulation of new FMD viral strains that may not match the currently administered vaccine. This raises questions regarding the effectiveness of FMD vaccination among the targeted population in the Abu Dhabi Emirate in preventing FMD outbreaks. Regular post-vaccination assessments and vaccine matching analyses are essential requirements of the national animal health plan aimed at eradicating FMD from the UAE. However, these activities have not been previously reported. While in vivo vaccination-challenge experiments are considered the gold standard for FMD vaccine matching, they have limitations in terms of animal welfare, biosafety, and cost-effectiveness. In practice, FMD vaccine selection relies heavily on in vitro serological vaccine matching tests, such as virus neutralization tests (VNTs) and the liquid-phase blocking ELISA (LPBE) [27]. Therefore, in this study, samples confirmed to contain FMDV, obtained from an Arabian oryx, a goat, and a sheep, each originating from a different FMD outbreak, were examined. Their VP1 coding sequences were analyzed to assess critical amino acid variability, and their antigenic matching with the FMD vaccinal strains used in the UAE was evaluated. Furthermore, the study included an assessment of FMD immunization coverage in vaccinated sheep and goat farms in a specific region of the Abu Dhabi Emirate.
The UAE FMD isolate from the Arabian oryx reported in this study was classified within the O/ME-SA/PanAsia-2/ANT-10 sublineage, while the isolates from the sheep and goat were assigned to the O/ME-SA/SA-2018 lineage previously reported in the Abu Dhabi Emirate in 2021 [16]. Our investigation revealed that five out of six different FMDV-O vaccine strains tested were antigenically matched with these FMDV-O field strains. This is not surprising, as serotype O displays less antigenic diversity compared to other serotypes like A and SAT2 [28]. Of the five effective vaccine strains, two (O1 Manisa, Boehringer Ingelheim, and O-3039, Boehringer Ingelheim) are currently utilized in the Abu Dhabi Emirate for FMD control. Notably, vaccine strains that exhibit poor matching with field strains may provide suboptimal protection [7]. Therefore, our findings suggest that the FMD-O vaccine strains employed in the Abu Dhabi Emirate antigenically matched the viruses that have caused recent FMD outbreaks [29].
The VP1 coding sequences were also analyzed to assess critical amino acid variability relative to one of the FMD vaccinal strains used in the VNT. Although there were several amino acid variations, all residues critical at antigenic sites 1 and 3 of VP1 [30][31][32], such as 148, 149, 154, and 208, were fully conserved across all isolates. All three isolates retained the conserved Arg-Gly-Asp cell attachment site [33][34][35], which is consistent with previous findings [30,36,37]. The amino acid change at position 139 has been reported to impact the serum neutralization of O isolate variants [38][39][40]. However, the substitution observed in the three field isolates appeared to have no effect on virus neutralization in the vaccine matching test (Table 3). It is noteworthy that the sequence of the O-3039 vaccine strain, also used in the UAE, is not publicly available. Therefore, we were unable to characterize the amino acid variability of the UAE isolates in comparison to this vaccine strain.
Several limitations associated with the use of inactivated FMD vaccines, which are commonly employed to combat FMD in endemic regions, have been reported. These include the necessity for high-containment facilities to culture virulent FMDV, short-duration immunity, limited cross-protection among various strains and topotypes within the same serotype, the frequent emergence of new variants capable of evading vaccine-induced immunity, and the inability to eliminate virus carriers [7]. The investigation of the two outbreaks reported here, which affected an Arabian oryx and an unvaccinated goat on two different farms, highlights the risk posed by the cohabitation of wildlife with sheep and goats on the same farm, unrestricted animal movement between farms, and factors related to vaccination coverage in the spread and control of FMD in the Abu Dhabi Emirate. Therefore, it is crucial to implement measures such as avoiding the mixing of wildlife with livestock, controlling animal movement, increasing vaccination coverage, enhancing farms' biosecurity measures, and regularly conducting post-vaccination assessments and vaccine matching for FMD field isolates in the Abu Dhabi Emirate. These measures will contribute to improving the performance of the national FMD control plan and facilitate the achievement of its objectives.
FMD immunization coverage in vaccinated sheep and goat farms in the Al Ain region of the Abu Dhabi Emirate, where outbreaks occurred in 2021, was also assessed. One of the field isolates tested in this study originated from a sheep farm that had received a single dose of FMD vaccination in 2021 [16]. This raised the need to investigate the duration of protective immunity after one or two rounds of FMD vaccination in small ruminants. The post-vaccination immunity assessment against FMD serotypes A and O conducted here revealed overall immunity rates of 47% and 69% in vaccinated sheep and goat herds five months after the first vaccination, respectively. This immunity increased to 81% and 86% for FMD serotypes A and O, respectively, four weeks after the second (booster) dose of the FMD vaccine. To ensure the effectiveness of the FMD vaccines used to control the disease in a specific country or region, post-vaccination monitoring is required.
Generally, most commercial vaccines recommend an initial vaccination course consisting of two doses, typically administered one month apart, followed by a booster dose every 4-6 months. In an experimental study, neutralizing titers against an O/ME-SA/PanAsia strain were assessed following either a two-dose primary course (1 mL) or a single double-dose (2 mL) vaccination in sheep with a 6 PD50 vaccine. The results show that titers did not significantly differ between the groups except at six months post-vaccination, where the single double-dose group exhibited significantly lower titers, below the established protective cut-off [41].
FMD vaccines are typically available in standard-potency (3 PD50) and higher-potency (6 PD50) formulations, determined by the number of 50% protective doses contained in each dose. High-potency vaccines have potential application in emergency reactive campaigns in FMD-free areas due to their rapid onset of immunity. They address challenges posed by antigenic variations among virus strains of the same serotype, potentially protecting against clinical FMD even when there is a mismatch with the circulating field strain [42]. Presently, specific criteria for achieving a protective level of FMD immunity have not been established, but it is advised to maintain at least 80% immune animals within susceptible populations [21,43].
Indeed, the planned vaccination coverage in the Abu Dhabi Emirate was set at 85% for 2023. The findings obtained here indicate that a single dose of FMD vaccine in small ruminants alone may not suffice to reach the necessary level of protective immunity against FMD serotypes A and O; the administration of a booster dose is further required to achieve the desired immunity level of above 80%. Moreover, the bi-annual vaccination schedule for small ruminants in the Abu Dhabi Emirate appears effective in establishing an acceptable herd immunity level, especially when combined with other biosecurity measures for FMD control in the region. Consequently, there is a need for further investigation into the post-vaccination assessment of other FMD serotypes (SAT2) currently circulating in the region using VNT, as well as a need to determine the duration of protective immunity following the second vaccination.
Conclusions
This is the first report on FMDV vaccine matching and the first post-vaccination assessment conducted in the Abu Dhabi Emirate. The results from the virus neutralization test indicate that all of the FMD field isolates examined in this study (belonging to the O/ME-SA/PanAsia-2/ANT-10 sublineage and O/ME-SA/SA-2018 lineage) were matched with the vaccinal strains (O-3039 and O1 Manisa, Boehringer Ingelheim) included in the FMD vaccine used in the Abu Dhabi Emirate. Furthermore, a post-vaccination assessment against FMD serotypes A and O indicated that a protective herd immunity exceeding 80% could be achieved with the current bi-annual vaccination regime. A high vaccination coverage of up to 100% in susceptible herds, including sheep, goats, and cattle, coupled with post-vaccination sero-surveillance for other FMD serotypes circulating in the Arabian Peninsula, is required to monitor antibody titers in vaccinated animals, to control FMD in the region, and to achieve the requirements of the PCP-FMD and the regional Middle East FMD control roadmap.
the European Union. The views expressed herein can in no way be taken to reflect the official opinion of the European Union. The post-vaccination assessment was supported by the ADAFSA.
Institutional Review Board Statement: This research was approved by the research ethics committee of the Abu Dhabi Agriculture and Food Safety Authority (ADAFSA) (approval number: ADAFSA-EA-11-2023), and this study was conducted following the guidelines for animal use. Written consent (which was included in the sample request form approved by the ADAFSA research ethics committee) was obtained from the owner for the use of samples and animals before inclusion in the study.
Informed Consent Statement: Informed consent was obtained from the owners of the animals.
Figure 1. The amino acid sequence alignment reveals differences between the serotype O FMDV UAE field isolates and O1 Manisa, AY593823. The amino acids shown as dots are identical to the vaccine strain. The epitope regions of the virus, including site 1 [the GH loop (138-158) in the blue box and the C-terminus (198-202, 206-210) in the dark red boxes] as well as site 3 [the BH loop (positions 43-60) in the black box], are shown. The RGD motif at positions 146-148 in site 1 is highlighted in the green box. Putative critical amino acids at each position are numbered and indicated by black arrows.
Table 4. Evaluation of FMD vaccination in 2023 against serotypes A and O.
Centralised and Decentralised Sensor Fusion-Based Emergency Brake Assist
Many advanced driver assistance systems (ADAS) are currently trying to utilise multi-sensor architectures, where the driver assistance algorithm receives data from a multitude of sensors. As mono-sensor systems cannot provide reliable and consistent readings under all circumstances because of errors and other limitations, fusing data from multiple sensors ensures that the environmental parameters are perceived correctly and reliably for most scenarios, thereby substantially improving the reliability of the multi-sensor-based automotive systems. This paper first highlights the significance of efficiently fusing data from multiple sensors in ADAS features. An emergency brake assist (EBA) system is showcased using multiple sensors, namely, a light detection and ranging (LiDAR) sensor and camera. The architectures of the proposed ‘centralised’ and ‘decentralised’ sensor fusion approaches for EBA are discussed along with their constituents, i.e., the detection algorithms, the fusion algorithm, and the tracking algorithm. The centralised and decentralised architectures are built and analytically compared, and the performance of these two fusion architectures for EBA are evaluated in terms of speed of execution, accuracy, and computational cost. While both fusion methods are seen to drive the EBA application at an acceptable frame rate (~20 fps or higher) on an Intel i5-based Ubuntu system, it was concluded through the experiments and analytical comparisons that the decentralised fusion-driven EBA leads to higher accuracy; however, it has the downside of a higher computational cost. The centralised fusion-driven EBA yields comparatively less accurate results, but with the benefits of a higher frame rate and lesser computational cost.
Introduction
In today's state-of-the-art technology, the application of multiple sensors that are fine tuned to perceive the environment precisely is seen as instrumental for increasing road safety [1,2]. Thanks to robust and reliable exteroceptive sensors, such as the LiDAR sensor [3], the radio detection and ranging (RADAR) sensor [4], cameras, and ultrasonic sensors [5], amongst several others, intelligent vehicles are capable of accurately perceiving the environment around them [2]. This allows them to anticipate and/or detect emerging dangerous situations and threats.
In case of mono-sensor applications, the system is prone to errors, as failure of the only available sensor can lead to breakdown of the entire system [6]. Having multiple sensors with different field-of-views and capabilities often helps in making the system more robust, as the system can still operate with acceptable efficacy after failure of one or more sensors from an agglomeration of multiple sensors [6,7]. Different sensors have different levels of reliability under a multitude of scenarios. For example, the performance of a camera sensor is deteriorated substantially in dark conditions. Thus, the probability of false positives or false negatives increases under such circumstances [8,9]. A LiDAR sensor is relatively robust under dark situations, thereby allowing for more dependable detections. However, it has certain drawbacks as it cannot recognise the colour of the detected object [10]. Hence, applications like 'traffic sign recognition' cannot be carried out by using LiDAR alone. In such cases, use of a camera sensor is mandatory [8,11].
Multi-sensor fusion is the process of combining data from multiple sensors so that the cumulative data are enhanced in terms of reliability, consistency, and quality, compared to the data that would be acquired from a single sensor [12,13]. In this paper, we focus on 'object-level' or 'high-level' multi-sensor fusion techniques for emergency brake assist (EBA) systems. First, we concentrate on the importance of a multi-sensor fusion approach, followed by an exegesis of 'centralised' and 'decentralised' object-level multi-sensor fusion to drive the EBA feature. Once the environment is fully perceived, with the present conditions known and future conditions estimated, the vehicle software can then undertake proactive decisions and actions to either avoid the upcoming threat, or, in case the situation is inevitable, boost safety of the driver and other occupants [12,13]. Thus, ADAS applications utilising multi-sensor fusion as their backbone have potential to make mobility safer and more efficient [14,15].
Emergency brake assist (EBA) is an ADAS system that assists the driver in avoiding a collision or decreasing the impact of collision with other vehicles or vulnerable road users when the collision is unavoidable [16,17]. Research shows that in many critical situations, human drivers tend to react either too late or in a wrong way [18]. In such scenarios, the best alternative is to apply the vehicle brakes with the safe maximum force to minimise the consequences of the unavoidable impact [19]. EBA primarily consists of two parts [20]: (1) Detection: identify critical scenarios which can lead to an accident and warn the driver accordingly through audio and/or visual indications; (2) Action: in scenarios where impacts or accidents are inevitable, EBA can decrease the speed of the ego-vehicle by applying brakes in advance to achieve minimal impact.
In this paper, we will only focus on the 'detection' part of EBA. For EBA to function appropriately, a precise environment perception is required. As a result, a reliable and consistent sensor fusion network is necessary to drive this algorithm [21]. For this paper, the EBA is designed such that an alert shall be displayed on the dashboard notifying that either a critical or a safe scenario has been detected.
The work outlined in this paper contributes to the research by:
1. Highlighting the advantages and challenges of using multi-sensor fusion-driven ADAS algorithms over mono-sensor ADAS features.
2. Providing analytical comparisons of the two proposed methodologies of sensor fusion: the 'centralised' and 'decentralised' fusion architectures.
3. Implementing an EBA algorithm and critically analysing the behaviour, performance, and efficacy of the feature driven by the two proposed fusion methods.
The paper is structured as follows: Section 1 provides the introduction and motivation behind this work. Section 2 presents the literature review, where a multitude of papers and work done by various researchers is critically studied and analytically compared. Section 3 sheds light on the proposed methods for sensor fusion alongside the sensor building blocks, such as the camera and LiDAR object detection, tracking and fusion algorithms, types and methods of sensor fusion, and their fundamentals. This part covers the theory required to implement the multi-sensor fusion-based EBA feature. Section 4 shows the implementation of EBA and analytically compares the performance of centralised and decentralised fusion-driven EBA, based on execution speed, accuracy, noise immunity, and computational cost. Section 5 concludes the work.
Related Work
Sensor fusion targets a variety of applications in the automotive domain. The architecture of the fusion algorithm and the methodology, and the amount and type of sensors used depend on the task to be performed and the sensitivity and criticality of the parent Sensors 2021, 21, 5422 3 of 29 system. Sensors such as cameras, LiDAR, ultrasonic sensors, and RADAR can be used to perceive the environment around the ego-vehicle under different circumstances. An efficient technology, which involves fusing information from a point cloud (generated by LiDAR) and an image (generated by camera), is discussed by Kocic et al. [18]. Accordingly, we shall use the LiDAR-camera combination of sensors in our work. By using a similar fusion architecture, the localisation and mapping can also be done; however, we shall restrict the scope of our work to the construction of an alert-based EBA system only.
The work done by Herpel et al. [1] presents a systematic and detailed analysis of high-level (object-level) and low-level sensor fusion. Low-level sensor fusion techniques usually involve heavy computation and are more susceptible to noise, and the authors of [1] highlighted the advantage of use of object-level fusion, which involves relatively less computational prowess and high immunity to noise. Thus, for real-time embedded systems, object-level sensor fusion techniques are more suited than low-level fusion. Work done by Badue et al. [22] highlights the benefit of using more than one sensor (which is also backed by Stampfle et al. [9]) and fusing their data at an object level. Accordingly, we implemented object-level multi-sensor fusion in the work described in this paper. These works were closely studied in order to understand the spatial synchronisation aspect of the LiDAR and camera sensor fusion.
Research done by De Silva et al. [23] sheds light on the factors involved in combining data from various sensors involving temporal and spatial synchronisation. In this work, a geometrical model is worked upon for spatial synchronisation of data. In our work, we use a similar model for converting a 3D point cloud bounding box into a 2D space and then fusing the camera and LiDAR sensor data together. In their work, the authors used a resolution-matching algorithm based on Gaussian process regression to estimate unreliable or missing data. To combat the problem of uncertain data, we used a tracker algorithm. Thus, we can say that our work is a further development of the framework used in the project undertaken by De Silva et al. [23]. Decisions that are undertaken in driverless cars need to be computed with the help of as many sensor inputs as possible. Moreover, these decisions must be made in the presence of uncertainties and noise that come with pre-processing algorithms and data acquisition methods. Work done by De Silva et al. [23] addresses these two issues surrounding automotive sensor fusion. Work done by Yang et al. [24] and Wan et al. [25] also describes the application of the unscented Kalman filter in target tracking for automotive-specific applications. Based on these works, we considered the use of the unscented Kalman filter (UKF) as a tracker method in our system.
Various classification schemes for sensor data fusion are discussed in the work done by Castanedo [8]. Several sensor fusion algorithms are classified based on different parameters such as type of data processed, type of output data, and structure of framework. Based on this work, we propose the two fusion algorithms that we shall be focussing on-centralised and decentralised fusion methods. The architectures of these two methods were primarily inspired from the work done by Castenado [8] and Grime et al. [26].
In their work, Stampfle et al. [9] describe the construction of a Robot Operating System (ROS)-based sensor fusion node. ROS is a meta-operating system and provides standard operating system services like contention management, hardware abstraction, and process management, alongside high-level functionalities like synchronous and asynchronous calls and centralised databases. Being language independent, it is possible to develop software modules in ROS in C++ as well as Python, which allows for freedom to use necessary software nodes off the shelf without converting the code into one standard language. ROS also allows for use of a 3D visualisation tool-RViz, which will be used extensively for this work to project the camera and LiDAR images (input as well as output). By studying the work done by Bernardin et al. [27], we critically analyse the performance of the sensor fusion algorithms used to drive the said EBA features. In this case, mean average precision (mAP) values are used to gauge the consistency of an algorithm. The false positives, false negatives, and true positives values required for the calculation of the mAP value are derived from the confusion matrix by comparing the output of the fusion algorithm against KITTI dataset's ground truth data.
Sensor technology-sensors and sensor fusion methodologies-form a critical part of modern autonomous vehicles. As specified in the work done by Badue et al. [22], different sensors used in varied fusion architectures lead to substantially different performances. It is clear from the work done by Aeberhard et al. [28] that most original equipment manufacturers (OEMs) prefer the use of high-level data fusion architecture for implementing ADAS algorithms in vehicles. Aeberhard et al. [28] showcase this through experimental analysis performed on the BMW 5 Series vehicle. Accordingly, we also consider a high-level sensor fusion architecture for our work.
Classification of Sensor Fusion Methods
Sensor data fusion involves the consideration of fundamental parameters such as the speed of operation of the fusion algorithm on an embedded platform, its accuracy, computational load, architecture, type of data at the input and output, type of sensor configuration, and, ultimately, the cost of implementation. Hence, it is imperative to thoroughly classify the various sensor fusion techniques. By studying various works and projects, we broadly classify sensor fusion techniques according to these criteria, as shown in Table 1. Object-level sensor fusion (Luo et al. [34]) has significant advantages over raw data-based sensor fusion (low-level fusion), as it ensures modularity and allows for ease of benchmarking. Moreover, the fusion techniques can be relatively simple to develop. Studying sensor fusion architectures and the differences in performance of multiple fusion methodologies is important since, when it comes to implementing sensor fusion algorithms on embedded systems [21], it is important to use an optimum architecture which gives acceptable accuracy.
Accordingly, in our work, we chose to use and analytically compare object-level 'centralised' and 'decentralised' fusion methods, as inspired from the work done by Castanedo [8] in order to drive the EBA features.
Fusion Architectures-Centralised and Decentralised Fusion
In this paper, the centralised sensor fusion is referred to as object-level centralised sensor fusion (OCSF) and the decentralised sensor fusion as object-level decentralised sensor fusion (ODSF). The architecture of OCSF is shown in Figure 1. The terminology used in Figure 1 is explained below:
A', B'-Raw data from the sensors (pixel-level data for the camera and point cloud data for the LiDAR).
A, B-Processed data from the sensor object detection blocks. The pre-processing blocks indicate the object detection algorithms implemented for the respective sensors.
C-Temporally and spatially synchronised data from the two sensors.
D-Fused data, the output of the sensor fusion; these data are the output of the tracking algorithm and are immune to false negatives, false positives, and other noise present in the sensor data.

In OCSF, the fusion and tracking nodes are built inside the central processor. The fusion block receives synchronised data from the input blocks, which in this case are sensors A and B (camera and LiDAR, respectively). The output of the fusion block is given as the input to the tracker block. The tracker helps in suppressing noise, false positives, and false negatives, thereby providing a fusion output with the least errors.

The architecture of ODSF is shown in Figure 2. The terminology used in Figure 2 is explained below:
A', B'-Raw data from the sensors (pixel-level data for the camera and point cloud data for the LiDAR).
A, B-Data from the sensor object detection blocks. The pre-processing blocks indicate the object detection algorithms implemented for the respective sensors.
C-Tracking data of sensor A. This block ensures that the data are consistent despite inconsistencies at the output of the pre-processing block.
D-Tracking data of sensor B. This block ensures that the data are consistent despite inconsistencies at the output of the pre-processing block.
E-Output of the fusion block. Data from both sensors are spatially and temporally aligned.
In ODSF, the fusion node is built inside the central processor; however, the tracking nodes for the respective sensors are outside the central processor. The tracker, applied independently to each sensor, helps in suppressing false positives, noise, and false negatives for that sensor, thereby providing the central processor with data that are pure and devoid of errors and inconsistencies. As the tracker is applied independently to both sensors, this architecture involves more processing and is computationally heavier than OCSF. However, as highly consistent data from both sensors are fed to the fusion algorithm, the output of the architecture is highly precise.
In both these methods, the pre-processing block comprises the respective object detection algorithms. The tracking block is the unscented Kalman filter used in both architectures (in a different manner, however). The alignment block takes care of the spatial and temporal alignment of data from the two sensors. The fusion block ultimately associates the data from the two sensors to a single fixed target.
The only difference between the two proposed methods is the way the 'tracker' block is used. As we shall later see in the experiments and results section, the position of the tracker block significantly affects the algorithm performance. In ODSF, the tracker is applied on individual sensor data before the data are fused, while in OCSF, the tracker is applied only once on the final fused output.
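The structural difference between the two architectures can be summarised in a short sketch. The Python fragment below is illustrative only: the detector, alignment, fusion, and tracker objects are placeholder callables standing in for the blocks described above, not the authors' implementation.

```python
def run_ocsf(camera_frame, lidar_scan, detect_camera, detect_lidar, align, fuse, tracker):
    """Centralised fusion (OCSF): a single tracker is applied to the fused output."""
    cam_objects = detect_camera(camera_frame)      # pre-processing block for sensor A
    lidar_objects = detect_lidar(lidar_scan)       # pre-processing block for sensor B
    aligned = align(cam_objects, lidar_objects)    # temporal and spatial synchronisation
    fused = fuse(aligned)                          # association of the two object lists
    return tracker.update(fused)                   # one tracker suppresses FP/FN on fused data


def run_odsf(camera_frame, lidar_scan, detect_camera, detect_lidar, align, fuse,
             camera_tracker, lidar_tracker):
    """Decentralised fusion (ODSF): one tracker per sensor, applied before fusion."""
    cam_tracks = camera_tracker.update(detect_camera(camera_frame))    # filtered sensor A data
    lidar_tracks = lidar_tracker.update(detect_lidar(lidar_scan))      # filtered sensor B data
    aligned = align(cam_tracks, lidar_tracks)
    return fuse(aligned)                           # fused output of already consistent tracks
```

The tracker is therefore invoked once per frame in OCSF and twice per frame in ODSF, which is the source of the difference in computational cost discussed later.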
Camera Object Detection
You Only Look Once (YOLO) is a popular algorithm based on convolutional neural networks for detecting objects in a 2D image. It is not the most accurate algorithm available, but it offers a good balance between accuracy and real-time detection speed [36,37]. Alongside predicting class labels, YOLO also detects the location of the respective target objects within the image. For our application, we are not focusing on the class labels; however, this algorithm was chosen so that, if our work were to evolve in the future such that object classes became useful, no substantial changes would have to be made to the architecture. The algorithm divides the image into numerous smaller regions and, within each region, predicts the probability of object presence and its bounding box [37]. Figure 3 shows a high-level flow diagram for YOLOv3.
Compared to the prior versions, YOLOv3 has multi-scale detection and a much stronger feature extractor network, alongside changes in the loss function [38]. As a result, YOLOv3 has the capability to detect a multitude of targets regardless of their size. Like other single-shot detectors, this algorithm makes real-time inference possible on standard CPU-GPU devices [37,38]. The network architecture of YOLOv3 is shown in Figure 4. YOLOv3 uses a slightly tweaked architecture with a feature extractor known as DarkNet-53, which consists of 53 convolutional layers, each followed by Leaky ReLU activation and batch normalisation [38]. YOLOv3 is an open-source algorithm and was used off the shelf, as worked upon by Lee et al. [36].
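For reference, the snippet below shows one common way to run a pre-trained YOLOv3 network through OpenCV's DNN module. The config/weight file names, thresholds, and post-processing are illustrative assumptions rather than the exact off-the-shelf setup used in this work.

```python
import cv2
import numpy as np

# Load the YOLOv3 network from its Darknet config and weights
# (placeholder file names for the standard public YOLOv3 release).
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect_objects(image, conf_threshold=0.5, nms_threshold=0.4):
    """Return a list of (x, y, w, h, confidence) boxes for one BGR image."""
    h, w = image.shape[:2]
    # YOLOv3 expects a 416x416 RGB input scaled to [0, 1]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences = [], []
    for output in outputs:              # one output per detection scale
        for det in output:              # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            confidence = float(det[4] * scores.max())
            if confidence > conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)

    # Non-maximum suppression removes overlapping duplicate detections
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [(*boxes[i], confidences[i]) for i in np.array(keep).flatten()]
```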
LiDAR Object Detection
Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular 'density-based clustering non-parametric algorithm' [39]. If a set of points is fed to its input, the algorithm groups points that are closely packed [40] and marks such points as 'inliers'. The points which lie outside the detected clusters are called 'outliers'. In short, the DBSCAN algorithm separates high-density clusters from low-density point cloud pixels [41]. The flow chart for DBSCAN is shown in Figure 5. Here, 'Eps' is the maximum neighbourhood distance between points in a cluster, and 'MinPts' is the minimum number of points necessary to form a cluster.

The DBSCAN algorithm can be summarised in terms of input, output, and process as shown below:
Input: N objects to be clustered and some global parameters (Eps, MinPts)
Output: Clusters of objects
Process:
1. Select a point p arbitrarily.
2. Retrieve all density-reachable points from p with respect to Eps and MinPts.
3. If p is a core point, a cluster is formed.
4. If p is a border point, no points are density reachable from p and DBSCAN visits the next arbitrary point in the database.
5. Continue the process until all points in the database are visited.

In LiDAR point clouds, a typical example of which can be seen in Figure 6, the vehicles and objects on and beside the road appear as high-density, closely packed clusters. The algorithm detects these clusters and draws a bounding box around them, as shown in Figure 7.
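As an illustration, a compact scikit-learn-based sketch of this clustering step is given below. The eps and min_pts values and the synthetic point cloud are arbitrary examples, not the parameters tuned for the KITTI data, and the axis convention (x = width, y = height upwards, z = depth) is assumed to match the (x1, y1, z1, h, w, l) parameterisation used later.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points, eps=0.7, min_pts=10):
    """Cluster an (N, 3) LiDAR point cloud and return one axis-aligned 3D box per cluster.

    Each box is (x1, y1, z1, w, h, l): the front top left corner plus width,
    height, and depth of the cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    boxes = []
    for label in set(labels):
        if label == -1:                  # -1 marks outliers (low-density points)
            continue
        cluster = points[labels == label]
        mins, maxs = cluster.min(axis=0), cluster.max(axis=0)
        w, h, l = maxs - mins
        boxes.append((mins[0], maxs[1], mins[2], w, h, l))   # front top left corner + extents
    return boxes

# Synthetic example: two dense clusters plus scattered noise points
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal([5, 0, 0], 0.2, (200, 3)),
                   rng.normal([12, 3, 0], 0.2, (200, 3)),
                   rng.uniform(-20, 20, (50, 3))])
print(cluster_point_cloud(cloud))
```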
Tracking
Tracking in automotive sensor fusion is the ability of the system to visualise and perceive the various road objects around the vehicle and successfully track or follow them along the course of navigation. Detection and tracking are the core tasks in modern autonomous vehicles [24]. While detection algorithms help to create an object list of the various objects surrounding the ego-vehicle, tracking helps in understanding how an obstacle or object has been moving and estimates the position of the object in the near future. Tracking algorithms are important because they help to combat false positives and false negatives to a great extent [43]. As the past state of the object is always known, the present state of the object can be estimated even if the detection algorithm does not detect, or falsely detects, the present state of said object.

In our application, the tracker is exposed to highly non-linear inputs, as is the case in realistic real-world scenarios. The detection algorithm may not always detect the obstacles (false negatives) or can detect non-existent obstacles in a few cases (false positives). Furthermore, obstacles can appear and disappear outside of any control or pattern. When the system is nonlinear, the extended Kalman filter (EKF) tends to diverge [43], while the unscented Kalman filter (UKF) tends to produce comparatively better results [24,43].
The unscented transformation is a method used to calculate the statistics and behaviour of any random variable subjected to nonlinear transformation. The unscented Kalman filter utilises a set of points to propagate them through the actual nonlinear func-tion, instead of linearising the functions. The points to be fed to the filter are chosen such that their mean, covariance, and higher order moments match that of the Gaussian random variable. The mean and covariance can be recalculated using these propagated points to yield better and more accurate results compared to a Taylor Series function (which is fully linear). Here, sample points are not selected arbitrarily. In their work, Lee [44] demonstrated the superior performance gain of UKF over EKF for the estimation of state of the detected objects in highly non-linear systems. Thus, among the two considered predictors, we choose the UKF for our application.
The flowchart for the UKF implementation can be seen in Figure 8. The stages implemented in the UKF are as follows: (1) state predictor, (2) measurement predictor, and (3) state updater. The UKF for position estimation of objects is an open-source algorithm and was used off the shelf, as worked upon by Wan et al. [25].
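A minimal sketch of such a tracker, built with the open-source filterpy library and a simple constant-velocity model, is shown below. The state layout, noise values, and the constant-velocity assumption are illustrative choices and do not reproduce the exact configuration of the off-the-shelf implementation used here.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # frame period in seconds (illustrative)

def fx(x, dt):
    """Constant-velocity motion model: state = [px, py, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    return F @ x

def hx(x):
    """Measurement model: the detector only observes the object position."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=1.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.0, 0.0, 0.0])   # initial state estimate
ukf.P *= 10.0                            # initial uncertainty
ukf.R = np.diag([0.5, 0.5])              # measurement noise (detector jitter)
ukf.Q = np.eye(4) * 0.01                 # process noise

# None stands for a missed detection (false negative): the prediction carries the track.
for z in [np.array([1.0, 0.5]), None, np.array([3.1, 1.6])]:
    ukf.predict()
    if z is not None:
        ukf.update(z)
    print(ukf.x[:2])
```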
Data Fusion

Synchronicity of Data

This step applies to both OCSF and ODSF. In OCSF, data synchronicity is maintained before tracking, while in ODSF, data synchronisation is carried out after the tracking blocks. An advantage of using the KITTI dataset (http://www.cvlibs.net/datasets/kitti/, accessed on 31 July 2021; see Section 3.3 for details) is that the data are already temporally synchronised. As a result, we only take care of the spatial synchronisation of the LiDAR and camera data.

To spatially synchronise the LiDAR and camera data (tracked and filtered in the case of ODSF, and unfiltered processed data in the case of OCSF), the calib_velo_to_cam.txt file provided in the KITTI dataset is used. This file contains the rotation matrix and translation vector necessary to map the 3D LiDAR data onto the 2D image data. The spatial synchronisation part of the algorithm is done by the 'alignment block'.

A 3D point x in the LiDAR space can be projected to a point y in the 2D camera space as shown in Equation (1) [43,45,46]:

y = p R x,  (1)

where p is the projection matrix after rectification (Equation (2)) and R is the rectifying rotation matrix. For rotation in a three-dimensional space, an anti-clockwise rotation by an angle θ about the z-axis is represented by the 3 × 3 orthogonal matrix

R_z(θ) =
[ cos θ   −sin θ   0 ]
[ sin θ    cos θ   0 ]
[   0        0     1 ]  (3)

Thus, by using Equation (1) with the values of p and R from Equations (2) and (3), respectively, the 3D LiDAR points can be associated with the 2D image points, as seen in Figure 9.
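The projection in Equation (1) can be written as a few lines of NumPy. This sketch assumes the KITTI convention in which the rectifying rotation is expanded to a 4 × 4 homogeneous matrix and the projection matrix is 3 × 4, both read from the calibration files; it is a sketch of the alignment step, not the exact code of the alignment block.

```python
import numpy as np

def project_lidar_to_image(points_3d, p_rect, r_rect):
    """Project (N, 3) LiDAR points to (N, 2) pixel coordinates via y = p . R . x.

    p_rect: 3 x 4 projection matrix after rectification.
    r_rect: 4 x 4 rectifying rotation (3 x 3 rotation padded to homogeneous form).
    Both matrices are assumed to come from the KITTI calibration files."""
    n = points_3d.shape[0]
    x_hom = np.hstack([points_3d, np.ones((n, 1))])   # homogeneous LiDAR points, (N, 4)
    y_hom = (p_rect @ r_rect @ x_hom.T).T             # homogeneous pixel coordinates, (N, 3)
    return y_hom[:, :2] / y_hom[:, 2:3]               # normalise by depth to get (u, v)
```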
Executing the Fusion Node-OCSF and ODSF

This step applies to both OCSF and ODSF. In OCSF, aligned and untracked noisy data are fed to the fusion node, while in ODSF, aligned tracked data are fed to the fusion node. However, the principles of operation and execution remain the same for both.
The objects detected by the camera object detection algorithm are identified by two parameters, namely:
1. The coordinates of the top left corner of the bounding box, that is, (x1, y1), and
2. The width and height of the bounding box, that is, (h, w).

This can be understood from the details shown in Figure 10. In Figure 10, consider the bounding box (ABCD). Accordingly, the cartesian coordinates of points A, B, C, and D can be seen in Table 2.

Objects detected by the LiDAR object detection algorithm are also identified by two parameters, namely:
1. The coordinates of the front top left corner of the bounding box, that is, (x1, y1, z1), and
2. The width, height, and depth of the bounding box, that is, (h, w, l).

The cartesian coordinates of points B, C, D, E, F, G, and H, as they can be derived, are shown in Table 3 (consider Figure 9 for the naming convention), for example, D (x1, y1 - h, z1), E (x1, y1 - h, z1 + l), F (x1, y1, z1 + l), G (x1 + w, y1, z1 + l), and H (x1 + w, y1 - h, z1 + l).

By using the spatial transformation, every point in the LiDAR 3D space in Table 3 is transformed into a respective point in the 2D camera space. Thus, after transforming the 3D bounding boxes into the 2D space, we have a total of two 2D bounding boxes for each detected object: one is the result of the camera object detection algorithm and the other is the transformed output of the LiDAR object detection algorithm. If the transformation is accurate, and both sensors have detected the object with precision, the overlap between the two bounding boxes should be high. For this work, an intersection over union (IoU) threshold [11] of 0.7 was used; that is, a detection is considered a true positive if more than 70% of the area of the two 2D bounding boxes overlaps.

These two bounding boxes can be seen in Figure 11. The yellow bounding box is the LiDAR detection transformed from 3D to 2D and the green bounding box is the camera-detected 2D box.

The fusion node associates the camera data with the LiDAR data. The transformed bounding box from the LiDAR detection algorithm is associated on a pixel level with the bounding box from the camera detection algorithm. If the intersection over union (IoU) value is more than 0.7, the detections from the camera and LiDAR are fused together, and the transformed 2D bounding box detected by the LiDAR is taken as the final detection.

However, this technique works perfectly only if both sensors provide reliable data with considerable accuracy. The bounding boxes of the two sensors can be associated only if both sensors detect an object; data cannot be associated if one sensor picks up an object and the other fails to detect it. For OCSF, where data are inconsistent at the input of the fusion node, consider a case as below:
1. Both sensors have detected an object, and the fusion node associates their bounding boxes.
2. Some frames later, one of the two sensor detection algorithms gives a false negative and does not detect the object.

In this case, the fusion cannot be carried out and the fusion node provides a NULL output (equivalent to 'No Object Detected'). This results in inconsistencies in the output of the fusion node. We therefore use the tracking node to tackle this problem for OCSF. In ODSF, however, as filtered data are received at the input of the fusion node, fewer anomalies are observed, and even if noise, false positives, or false negatives are present in the output of the camera and LiDAR object detection algorithms, the output of the fusion node remains consistent, thanks to the tracking node, which is applied independently to both sensors before fusion. However, if inconsistent tracks are found in ODSF (different tracks for the two sensor outputs), the tracks are ignored, resulting in a NULL output. This is unexpected and would lead to an undesirable output from the fusion block.
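The overlap test described above reduces to a few lines of code. The sketch below computes the IoU of two (x1, y1, w, h) boxes, assuming the usual image convention in which y grows downwards, and performs a simple greedy association at the 0.7 threshold; the greedy matching is an illustrative simplification rather than the exact association logic of the fusion node.

```python
def iou(box_a, box_b):
    """Intersection over union of two 2D boxes given as (x1, y1, w, h),
    with (x1, y1) the top left corner in pixel coordinates (y grows downwards)."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    # Intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax1 + aw, bx1 + bw), min(ay1 + ah, by1 + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def associate(camera_boxes, lidar_boxes_2d, threshold=0.7):
    """Greedy association: each camera box is matched to the projected LiDAR box
    with the highest IoU above the threshold; the LiDAR box is kept as the fused detection."""
    fused = []
    for cam in camera_boxes:
        best = max(lidar_boxes_2d, key=lambda lid: iou(cam, lid), default=None)
        if best is not None and iou(cam, best) >= threshold:
            fused.append(best)
    return fused
```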
Implementation of OCSF and ODSF
We have implemented the OCSF and ODSF architectures in a Robot Operating System (ROS)-based environment. For all implementation and experimentation, an Intel i5-based Ubuntu 18.04 machine was used. To make the system language agnostic, ROS is used. The KITTI dataset provides the camera and LiDAR data used to develop and test the proposed algorithms. The advantages of the KITTI dataset are the variety of testing data-for both camera and LiDAR-alongside being open source and allowing the sensor data to be used as they are [3]. The KITTI dataset also provides an easy method to convert the available data to the rosbag format, thereby making it convenient to interface the data with a ROS environment.
Creating the ROS Environment
Using the Robot Operating System (ROS) provides flexibility for development and helps in maintaining modularity. Multiple nodes can be added or removed without hassle, and data can be easily debugged, visualised, and processed. ROS provides cross-language development liberty and is language agnostic. As a result, the camera object detection algorithm in Python 3.6 and the LiDAR object detection algorithm developed in C++ can be integrated easily. In the Sensor_Fusion node, the software nodes shown in Table 4 are used for both architectures (OCSF and ODSF); however, the order in which they operate differs. The fusion node associates the synchronised LiDAR and camera data, thereby creating an object list which includes data from both the camera and LiDAR. The tracking node performs the functionality of the unscented Kalman filter: the UKF is implemented on the fused data in OCSF and independently on each sensor's data in ODSF.
An evaluation node is built for evaluating the performance of the fusion architectures. This node primarily gives an idea of the computational power required for implementing the architecture. A visualisation node is built to display the fused data in RViz, which is the visualisation tool used in ROS.
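A skeleton of such a ROS fusion node is sketched below using rospy and message_filters. The topic names, message types, and the placeholder callback body are assumptions for illustration and do not reproduce the actual Sensor_Fusion node.

```python
#!/usr/bin/env python
import rospy
import message_filters
from vision_msgs.msg import Detection2DArray   # assumed message type for the object lists
from std_msgs.msg import String

def fusion_callback(camera_dets, lidar_dets):
    # The alignment and IoU association would run here; this stub only reports counts.
    fused_pub.publish(String(data="fused %d camera / %d lidar detections"
                             % (len(camera_dets.detections), len(lidar_dets.detections))))

if __name__ == "__main__":
    rospy.init_node("sensor_fusion")
    camera_sub = message_filters.Subscriber("/camera/detections", Detection2DArray)
    lidar_sub = message_filters.Subscriber("/lidar/detections", Detection2DArray)
    fused_pub = rospy.Publisher("/fusion/output", String, queue_size=10)
    # The approximate time synchroniser pairs the camera and LiDAR messages closest in time.
    sync = message_filters.ApproximateTimeSynchronizer([camera_sub, lidar_sub],
                                                       queue_size=10, slop=0.05)
    sync.registerCallback(fusion_callback)
    rospy.spin()
```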
Examples for Sensor Data Fusion
We tested the system in three scenarios for both OCSF and ODSF:
1. Reliable detections in highly contrasting scenes-Figure 12;
2. Reliable detections in brightly lit scenarios-Figure 13;
3. Reliable detections at far distances in brightly lit scenarios-Figure 14.

Figure 12 shows that the fusion algorithm provides the expected output when exposed to highly contrasting scenes. Vehicles in the darker parts of the image (the one with the green bounding box) are detected well alongside the objects in the brighter parts of the image (with the purple and yellow bounding boxes). Figure 13 depicts a scenario in a bright, sunny environment; target objects in very bright surroundings are also detected properly. Figure 14 depicts a scenario in which vehicles are far away from the ego-vehicle (output of sensor fusion in brightly lit surroundings when the target objects are at a distance); such objects are also detected properly. Thus, for both OCSF and ODSF, the qualitative performance of the algorithms seems acceptable under a myriad of circumstances. We shall now utilise these sensor fusion algorithms to gauge the performance of the EBA.

Emergency Brake Assist (EBA) Using OCSF and ODSF
The output of the central processor in the fusion framework is fed to the EBA application. The EBA is designed as worked upon by Ariyanto et al. [19], where ultrasonic sensors are used to detect any object in the vicinity of the vehicle. In this work, a similar feature is designed, except that, instead of ultrasonic sensors, fused data from the camera and LiDAR are used to perceive the environment. The scenario shall be considered an 'Unsafe Scenario' if the detected target object(s) is/are closer than 5 m in the driving path of the ego-vehicle.
The projected driving path (PDP) is considered to be the area in front of the ego-vehicle with a width of 1.6 m (which is the width of the ego-vehicle) and a length of 5 m. If any one of the four corners of any detected bounding box of the target object(s) lies within the PDP, the EBA shall display 'Brake!' in the display window (thereby categorising the scenario as an 'unsafe' one). For safe scenarios, it shall display 'Safe' in the display window. The flow chart of the functionality of this application is shown in Figure 15.
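The decision rule can be captured in a few lines. The sketch below assumes the detected boxes have already been converted into the ego-vehicle ground plane, with x the lateral offset from the vehicle centre line and y the longitudinal distance ahead; this frame convention is chosen for illustration only.

```python
PDP_WIDTH = 1.6   # metres, ego-vehicle width
PDP_LENGTH = 5.0  # metres, look-ahead distance of the projected driving path

def is_unsafe(bounding_boxes):
    """Return True if any corner of any detected box lies inside the PDP."""
    for corners in bounding_boxes:        # each box is a list of (x, y) corners
        for x, y in corners:
            if abs(x) <= PDP_WIDTH / 2 and 0.0 <= y <= PDP_LENGTH:
                return True
    return False

def eba_decision(bounding_boxes):
    return "Brake!" if is_unsafe(bounding_boxes) else "Safe"

# Example: one car parked beside the path (safe) and one object 3 m directly ahead (unsafe)
print(eba_decision([[(2.5, 4.0), (3.5, 4.0), (3.5, 6.0), (2.5, 6.0)]]))    # Safe
print(eba_decision([[(-0.4, 3.0), (0.4, 3.0), (0.4, 4.5), (-0.4, 4.5)]]))  # Brake!
```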
Safe Scenario for EBA

Consider a detected bounding box whose four corners are P1 (x1, y1), P2 (x2, y2), P3 (x3, y3), and P4 (x4, y4). As shown in Figure 16, the detected target bounding box does not protrude into the projected driving path (PDP). As the PDP is void of any target objects, the system considers it a 'safe' scenario. Figure 17 shows a real-time representation of this scenario.

As shown in Figure 18, the detected target bounding box does protrude into the PDP. As a result, the system considers it an 'unsafe' scenario and displays 'Unsafe!' in the display window, as shown in Figure 19.

Figures 20-22 show various safe and braking scenarios. In Figure 20, no objects are present in the PDP; as a result, the scenario is classified as a safe one. In Figures 21 and 22, road occupants are present inside the PDP; as a result, these scenarios are classified as unsafe.
OCSF-Driven EBA
For all three circumstances shown in Figures 20-22, the following observations are of particular interest:
1. The frame rate is consistent at around 31 frames/s.
2. Instances of false positives or false negatives are observed at times, as expected from the OCSF-driven EBA algorithm.
3. The tracker algorithm does a good job of suppressing the false positives (FP) and false negatives (FN); however, not all FPs and FNs are filtered. It can be understood that the number of FPs and FNs would be considerably higher if the tracker were not used.
The PDP is a fixed area in front of the vehicle. As of now, the steering angle and vehicle speed do not affect the area under the PDP quadrilateral. However, as we are demonstrating the sensor fusion-based ADAS feature, this is an acceptable compromise, and this can be evolved in later versions of the work.
ODSF-Driven EBA
As shown in Figure 23, one or more corners of the bounding boxes of vehicles parked alongside the road enter the PDP. As a result, the scenario is identified as an unsafe one. For all three circumstances shown in Figures 23-25, the following observations are of particular interest:
1. The frame rate is consistent at around 20 frames/s; thus, a comparatively lower frame rate is observed.
2. Compared to the OCSF-driven EBA, fewer false positives and false negatives are observed, as expected from the ODSF-driven EBA.
3. In this case, the tracker algorithm suppresses the false positives and false negatives. As these FPs and FNs are suppressed at a modular level before the data are fused, the accuracy of this method is much higher than that of the OCSF-driven EBA.
4. As in the previous method, the steering angle and vehicle speed do not affect the area under the PDP quadrilateral. However, as we are demonstrating the sensor fusion-based ADAS feature, this is an acceptable compromise, and this can be evolved in later versions of the work.
Results
Object-level centralised and decentralised sensor fusion can thus be successfully used to drive the said ADAS algorithm. Furthermore, through the experiments, we also conclude that both fusion techniques provide a higher qualitative performance compared to mono-sensor systems. For benchmarking of the mono-sensor system, we consider the EBA driven by the camera sensor alone. While there are some imminent drawbacks with camera-driven EBA, such as lower reliability of the system in low-light conditions, we consider a scenario under perfect lighting for the sake of comparison. For the experimental analysis, more than 100,000 frames of the KITTI dataset in urban, semi-urban, and highway scenarios were considered. Various objects, such as commercial, heavy, and light on-road vehicles, pedestrians, and other relevant road objects, were considered.
Frame Rate for the Execution of EBA
The execution speed of an algorithm is a direct indication of the computational load incurred by the software on the system on which it is executed. Both fusion algorithms provide acceptable speed (~20 fps for ODSF and ~30 fps for OCSF). For the mono-sensor system, the frame rate is highest, at ~37 fps. For the execution of multiple videos under different circumstances, Table 5 shows the frame rates observed for the EBA driven by both fusion methods and by the mono-sensor architecture (the videos are chosen from the KITTI dataset). The frame rate of the EBA executed with OCSF is ~50% higher than that of the EBA executed with ODSF, while the frame rate of the EBA executed using the mono-sensor architecture is ~35% higher than the one executed using OCSF. The prime reason for the higher execution speed of the OCSF-driven EBA compared to the ODSF-driven EBA is that the computationally heavy tracker algorithm is implemented only once in OCSF, whereas in ODSF, it is implemented twice (once for each sensor output) for a single frame. In the mono sensor-driven EBA, a higher frame rate is observed as only one sensor's data have to be processed. The time profiling for the EBA executed with both sensor fusion methods is given in Tables 6 and 7, respectively. On the current system, where the CPU does not allow for much parallelisation of tasks, a stark difference between the performance of the two architectures can be seen. However, if a capable embedded platform like the NVIDIA Drive AGX [47], which has numerous GPU cores to allow for the parallelisation of independent tasks, is used, the ODSF can be implemented as fast or nearly as fast as the OCSF [30,48].
It can be seen from Tables 6 and 7 that the tracker algorithm is computationally heavier than all other components in the system. The UKF algorithm consists of many approximations and iterations, because of which it is expected to be computationally heavy [49][50][51][52][53].
Other alternatives, such as the extended Kalman filter, might be computationally lighter but are prone to more errors [54].
Accuracy and Precision of EBA
The tracklets.xml file in the KITTI dataset contains ground truth data for all instances. For both fusion methods, OCSF and ODSF, we store the bounding box data in the /evaluation folder. Later, the contents of the /evaluation folder are compared with the data in tracklets.xml to get an idea of the accuracy of each architecture. A Python script is written to compare the objects detected by the fusion algorithm against the ground truth data. The IoU measures the overlap of the two bounding boxes under consideration: the ground truth box and the actual detected bounding box. For the current project, an IoU of 0.7 was considered in calculating the accuracy and precision of the detection fusion algorithms. A detected bounding box (the output of the fusion algorithm) is considered a true positive if its IoU with the ground truth data is greater than 0.7. By calculating the true positive, false positive, and false negative values, the mAP values for the OCSF output, the ODSF output, and the mono-sensor output at an IoU of 0.7 for four separate videos were obtained and are listed in Table 8. If the IoU threshold is increased, the mAP values decrease accordingly for both OCSF and ODSF; however, the IoU threshold is set at 0.7 for optimum results. As elaborated in Section 2.1, the false positive (FP), false negative (FN), and true positive (TP) values are calculated by comparing the output of the fusion algorithm against the KITTI dataset's ground truth data. mAP values are then calculated using the TP, FP, and FN values. The prime downside of the mono-sensor system can be observed from Table 8. For the mono-sensor architecture, the mAP value in all tested scenarios is less than half of the mAP value for the fusion architectures. Thus, despite the high frame rate seen in Table 5, the application of the mono-sensor architecture is not preferred due to its extremely poor accuracy. The higher accuracy of ODSF is justified, as noise (false positives, false negatives, ghost object detections, etc.) is suppressed earlier than in OCSF by using the tracker immediately after the detection algorithm. As a result, the data fed to the sensor fusion node are already filtered and the effects of noise are nullified beforehand. Thus, the fusion algorithm can operate with minimal error, thereby providing more accurate results. Even though it is computationally heavier, ODSF provides more accurate results.
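A minimal sketch of this evaluation logic is shown below. The (x1, y1, x2, y2) box format and the greedy one-to-one matching are assumptions for illustration; a full mAP computation would additionally rank detections by confidence before averaging precision over recall levels.

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def score_frame(detections, ground_truth, thresh=0.7):
    """Greedy one-to-one matching of detections to ground-truth boxes."""
    tp, used = 0, set()
    for gt in ground_truth:
        for i, det in enumerate(detections):
            if i not in used and iou(det, gt) > thresh:
                used.add(i)
                tp += 1
                break
    fp = len(detections) - tp    # detections matching no ground truth
    fn = len(ground_truth) - tp  # ground-truth objects never detected
    return tp, fp, fn

# Toy example: one true positive, one spurious detection.
tp, fp, fn = score_frame([(48, 30, 120, 90), (300, 40, 360, 100)],
                         [(50, 32, 118, 92)])
print(tp, fp, fn, tp / (tp + fp), tp / (tp + fn))  # 1 1 0 0.5 1.0
```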
In general, errors in the output of the fusion block are observed when either sensor fails to detect the object. This is the major reason behind the lower performance of the OCSF. This inconsistency in detection, or false positive detections, can be referred to as 'noise'. For ODSF, however, when an object is not detected, or is falsely detected for a few frames by any sensor, the respective tracker predicts its right position and drives the algorithm accordingly for the next frames.
Hence, if the ADAS algorithm needs to be run in situations where accuracy is of utmost importance and the cost of the hardware platform is a second priority, ODSF shall provide a better solution. It can thus be understood that OCSF is less immune to sensor noise and errors. As a result, at several instances, false negatives and false positives are observed in the sensor fusion output for OCSF. This error directly corresponds to a failure of the EBA under critical situations.
Computational Cost of EBA
OCSF can be implemented on hardware platforms with fewer resources/less computational prowess, while ODSF requires platforms with more resources/higher computational prowess to achieve real-time performance. As ODSF is computationally heavier than OCSF, the cost of execution of OCSF is lower than that of ODSF if a real-time application, such as EBA, is to be implemented using these methods. When executed on a computer with an NVIDIA GeForce GTX 1080 graphics card, OCSF was seen to be running at 46 fps and ODSF at 42 fps. Thus, on higher-end machines, the performance of the two algorithms is on par with each other.
The computational cost of implementing ODSF increases proportionally with the number of sensors in the system. However, the ability of the system to tackle the additional noise brought in by more sensors is also strengthened. Thus, depending on the criticality, budget, and nature of the application of the target ADAS system, this can be either an advantage or a disadvantage of ODSF compared to OCSF. On a general hardware platform with limited resources, EBA executed using OCSF shall still provide acceptable output, while EBA using ODSF shall show degraded performance (in terms of frame rate and hardware resources consumed). To obtain real-time performance from EBA using ODSF, more expensive hardware will be required, while the same is not necessary for EBA using OCSF.
Conclusions
While it can be seen from Table 5 that mono-sensor-driven EBA provides a very high frame rate, Table 8 proves that this high execution speed comes at the price of very poor accuracy. In ADAS applications, we need to attain a balance between the speed of execution and the accuracy of the system. Thus, due to the substantially degraded accuracy of the mono-sensor system, fusion-based systems are preferred, thanks to their higher accuracy and acceptable execution speed. Even the least accurate version of sensor fusion-driven EBA (from the two methods stated in this paper) is more reliable and worthwhile than EBA driven by a mono-sensor system. Fusing data from multiple sensors might add to the cost of the system; however, the accuracy, precision, and reliability of the ADAS algorithm increase manifold, which, in turn, justifies the higher cost of the fusion algorithm.
Considering the accuracy, computational load, and cost of execution of the two sensor fusion methods for driving EBA, we can say that both OCSF and ODSF have their respective advantages and disadvantages. While OCSF is simpler to execute and is computationally lighter, it provides comparatively lower accuracy (as seen in Table 8); ODSF is more accurate than OCSF and has a better immunity to noise; however, it is computationally heavy and, hence, has a higher computational cost. Mono-sensor systems, on the other hand, are very light computationally; however, they also provide very poor accuracy. If an ADAS algorithm needs to be run on a less-expensive embedded platform, which has fewer hardware resources (a smaller number of CPU and GPU cores and less cache and RAM), like a Cortex-M-based STM32 platform [55,56], and lower accuracy of EBA is acceptable, OCSF shall prove to be a comparatively better option. However, if hardware resources and computational cost are not a concern, and the accuracy and precision of the ADAS algorithm utilising the fusion architecture is of the utmost importance, ODSF is a more favourable option for driving EBA [57][58][59][60].
In real-world on-vehicle scenarios, if EBA is executed in L1-L2 automated vehicles [61,62], where the driver shall be expected to always remain attentive and in control of the vehicle, EBA driven by OCSF might be a beneficial option; however, if the vehicle is automated to L3 or higher, where the driver is not always expected to be attentive or in control of the vehicle, EBA driven by ODSF shall certainly be a more reliable and better alternative [63].
Application of Geophysical Methods to Building Foundation Studies
A geophysical survey involving the electrical resistivity method utilizing the Vertical Electrical Sounding (VES) and Electrical Imaging Techniques was conducted around the premises of an area within south-western Nigeria with the aim of studying structural defects which may be responsible for future problems and characterizing the soil conditions of the site. A total of 15 VES stations were occupied using Schlumberger Configuration with AB/2 varying from 1 to 65 m. In the electrical imaging, dipole-dipole array was adopted and the two traverses were occupied in the S-N and E-W directions close to where wall cracks and subsurface problems were manifested. Five main geoelectric sequences were delineated within the study area; these include the topsoil (clay and sandy clay), lateritic clay, weathered bedrock (clay, sandy clay and clayey sand), fractured bedrock and fresh basement. A major discontinuity (fracture zone) was discovered along the S-N direction, while a weak zone was also discovered along E-W direction. The result of this research has shown that the causes of the cracks and distress on the walls within the site may have been influenced by the differential settlement resulting from the incompetent subsoil materials and the fractured bedrock on which the foundation of the building was laid.
Introduction
With the growing demand for site development and the unpleasant experience of building failure, there is an increasing number of necessary site investigations to reveal possible subsurface problems. Therefore, geophysical investigations are important in evaluating the physical properties of the subsurface in terms of its soil type, soil competence, soil corrosivity, depth to bedrock and lithologic sequence.
Site engineers, for reasons of cost and other considerations such as assumptions in structural design, sometimes fail to incorporate pre-construction investigations in their job schedule. A geophysical investigation is therefore necessary for the site to reveal possible future subsurface problems and proffer possible solutions before the erection of buildings.
Study Area
The Study Area, shown in Figure 1, is geographically enclosed within latitude 7˚36'95" N to 7˚37'55" N and longitude 4˚42'00" E to 4˚42'90" E, south-western Nigeria. The climate is a humid tropical type with a mean annual temperature of about 28.0˚C and a mean annual rainfall of about 1600 mm [1]. Periods of high temperatures are recorded annually; the first period occurs in March-April and the second period in November-December, while the coolest period is observed in the middle of the rainy season [2]. Within the Southern Ilesa area, Nigeria, the schists are the most predominant rock type. They are medium-grained with abundant biotite and strongly foliated. Lenses of granular quartz are present [3]. The quartzite and quartz mica schist are probably younger than the gneisses and schists. Several belts of schists occur around the study area and range from massive, granular rocks to glassy schistose varieties. The quartzites and quartz schists are resistant to weathering and therefore occur as distinct ridges with very steep slopes, but are highly fractured and jointed. Figure 2 is a geological map of the area [4,5]. The study area lies within the region of undifferentiated gneiss and migmatite as shown on the geological map.
Two traverses were established within the premises of the study area, running S-N and E-W, respectively (Figure 3). Eight Vertical Electrical Soundings (VES) were occupied along the traverse that runs S-N, while seven were occupied along the traverse that runs E-W; the traverse lengths are 75 m and 100 m, respectively. The locations of the VES were constrained by the manifestation of failure at the investigated site. A total of fifteen (15) Vertical Electrical Soundings (VES) with electrode separations (AB/2) ranging from 1 to 65 m were conducted within the study area using a DDR-2 resistivity meter. The location of each sounding station was recorded with the aid of an ETRA F-10 (GPS) unit. The apparent resistivity measurement at each station was plotted on bi-logarithmic graph sheets. The curves were inspected visually to determine the number and nature of the layering. Partial curve matching was carried out for the quantitative interpretation of the curves. The results of the curve matching (layer resistivities and thicknesses) were fed into the computer as a starting model in an iterative forward modelling 1-D inversion program [6]. From the interpretation results (layer resistivities and thicknesses), two geoelectric sections along the E-W and S-N directions and a histogram were produced. For the combined horizontal profiling and sounding technique, the same traverses where the VES were carried out were also used. The dipole-dipole array was used for the data acquisition. An inter-electrode spacing (a) of 5 m was adopted, while the inter-dipole separation factor (n) was varied from 1 to 5. The apparent resistivity values were calculated using πn(n + 1)(n + 2)a as the geometric factor. 2-D inversion modelling of the dipole-dipole data was carried out using the DIPRO software developed by the Korea Institute of Geoscience and Mineral Resources [7].
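As an illustration of this reduction, the sketch below converts a measured resistance (V/I) into apparent resistivity using the geometric factor K = πn(n + 1)(n + 2)a quoted above; the resistance readings are placeholders, not field data.

```python
import math

def dipole_dipole_apparent_resistivity(resistance_ohm, a_m, n):
    """Apparent resistivity (ohm-m) for a dipole-dipole array.

    resistance_ohm : measured V/I ratio (ohm)
    a_m            : inter-electrode (dipole) spacing in metres (5 m in this survey)
    n              : inter-dipole separation factor (1-5 in this survey)
    """
    k = math.pi * n * (n + 1) * (n + 2) * a_m  # geometric factor
    return k * resistance_ohm

# Illustrative readings only (not field data): one value per n at a = 5 m.
for n, r in enumerate([1.20, 0.55, 0.30, 0.19, 0.13], start=1):
    k = math.pi * n * (n + 1) * (n + 2) * 5.0
    rho_a = dipole_dipole_apparent_resistivity(r, 5.0, n)
    print(f"n={n}: K={k:8.1f} m, rho_a={rho_a:7.1f} ohm-m")
```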
Discussion of Results
The results of this research work are presented as field curves, histogram, geoelectric sections, pseudosections and 2-D inversion models.
Field Curves
The interpretation of the sounding curves shows that seven curve types exist, viz: HA, KH, AA, HKH, QH, HK and KQ. The number of layers varies between 4 and 5. KH is the predominant curve type (Figure 4), constituting 46.66% of the total; HA and AA constitute 13.33% each, while HKH, QH, HK and KQ constitute 6.67% each. Some of the typical curve types in the area are shown in Figures 5(a)-(c). The implication of KH as the predominant curve type is that the underlying bedrock in the study area is characterised by confined fractures.
Geoelectric Characteristics
The geoelectric section along the E-W direction (Figure 6) identified six geoelectric/geologic subsurface layers comprising the topsoil, lateritic clay, weathered bedrock, partially weathered bedrock, fractured bedrock and fresh basement. The compositions are clay/sandy clay/clayey sand topsoil (resistivity varies from 51 to 488 ohm-m and thickness ranges from 0.42-2.69 m), lateritic clay (resistivity varies from 966 to 977 ohm-m and thickness ranges from 4.10-5.24 m) localized at VES 2 and VES 7, clay/sandy clay/clayey sand weathered bedrock (resistivity varies from 64 to 334 ohm-m and thickness ranges from 6.77-29.08 m), partially weathered bedrock (resistivity of 626 ohm-m and thickness of 30.62 m) beneath VES 6, fractured bedrock (resistivity of 869 ohm-m) beneath VES 4, which is a confined fracture, and fresh basement (resistivity varies from 937 ohm-m to 1385 ohm-m).
The depth to rock head ranges from 7.19 m to 36.17 m. The overburden is generally thick but thinnest at VES 4 (7.19 m) and thickest at VES 6 (36.17 m) at the western flank, both along the East-West direction. The basement relief is undulating; basement depressions are noticed at VES 3 (E-W) and VES 6 (E-W).
The geoelectric section along the S-N direction (Figure 7) identified five geoelectric/geologic subsurface layers comprising the topsoil, lateritic clay, weathered bedrock, partially weathered bedrock and fresh basement. The compositions are sandy clay/clayey sand topsoil (resistivity varies from 120 to 246 ohm-m and thickness ranges from 0.75-3.52 m); lateritic clay (resistivity varies from 664 ohm-m to 3205 ohm-m and thickness ranges from 2.93-16.16 m); weathered bedrock (resistivity varies from 91 ohm-m to 389 ohm-m); partially weathered bedrock (resistivity of 423 ohm-m) localized at VES 5, which extends beyond the depth of study (60 m); and fresh basement (resistivity of 3937 ohm-m).
The results generally indicate five main geoelectric layers, namely the topsoil (resistivity varies from 51 to 488 ohm-m and thickness ranges from 0.42-3.52 m), lateritic clay (664 to 3205 ohm-m and thickness ranges from 2.93-16.16 m), weathered bedrock (resistivity varies from 64 ohm-m to 393 ohm-m), fractured bedrock (resistivity of 869 ohm-m) and fresh basement (resistivity varies from 937 ohm-m to 3937 ohm-m). The topsoil generally varies in composition from clay to sandy clay, but is predominantly composed of sandy clay. The fracture zones are generally confined and extend beyond the depth of study.
Combined Horizontal Profiling (HP) and Vertical Electrical Sounding (VES)
The dipole-dipole pseudosection and the 2D resistivity structure along the E-W direction are shown in Figure 8. The lateritic clay is characterized by an oval-shaped unit with a higher resistivity value than its surroundings. This could be a result of the deposition that was left after weathering. The weathered bedrock (B) is characterized by sandy clay due to its low resistivity values, but with a higher portion of clay between stations 6 and 10, which falls within the weak segment of the investigated premises. The depth to bedrock is greater between stations 2 and 10, but decreases towards the western flank (between stations 10 and 15). Between these stations, the thickness of the overburden has decreased but with higher resistivity, thus signifying a lower portion of clay when compared to the eastern flank; here the overburden is directly underlain by competent bedrock.
Synthesis of Results
Along the E-W traverse direction, the geoelectric section and the 2D resistivity structure show that the topsoil is generally thin except for the significantly thick lateritic clay at VES 7, which shows similar characteristics between stations 15 and 16 on the 2D resistivity structure. The topsoil also varies in composition from clay to sandy clay with a small portion of clayey sand. The lateritic clay beneath VES 2 (Figure 10) at a depth of 2.45 m is shown between stations 4 and 6 at the same depth in the form of an oval-shaped unit with a higher resistivity value. The weathered bedrock varies in composition from clay to sandy clay to clayey sand, but is predominantly composed of sandy clay. The saturated zone D1 at a depth of 2.69 m correlates with a low resistivity zone between VES 2 and VES 4 (Figure 10). The basement depression at VES 6 corresponds with the same basement depression between stations 14 and 15, and the confined fracture beneath VES 4 corresponds with the basement depression between stations 9 and 10. The geoelectric section shows that, from VES 1 to VES 4, the depth to bedrock is beyond 25 m; this correlates with the 2D resistivity structure between stations 3 and 10, which shows that the bedrock is deeper, i.e., beyond the depth of study (25 m).
Along the S-N direction, both the geoelectric section and the 2D resistivity structure reveal that the topsoil is generally thin (Figure 11). The lateritic clay underlying the topsoil on the geoelectric section correlates with the lateritic clay (B) on the 2D resistivity structure at the same depth. The discontinuity noticed between stations 6 and 11, which shows a gradual decrease in resistivity values from a depth of 2.7 m, correlates with the gradual decrease in resistivity values between VES 2 and VES 7. The outer region (C) of the oval-shaped unit on the 2D resistivity structure correlates with the weathered bedrock on the geoelectric section (Figure 11), while the inner region (D) correlates with the partially weathered bedrock on the geoelectric section, which extends beyond the depth of study (25 m) for the 2D resistivity structure (Figure 11). The resistivity increases with depth from the zone of lowest resistivity (D); the partially weathered bedrock has a higher resistivity and lies at a greater depth, therefore the fracture zone (D) is underlain by partially weathered bedrock.
Subsoil Evaluation of the Study Area
From the results of the 2D resistivity structure and the geoelectric sections, the overburden is composed of clay, sandy clay, clayey sand and lateritic clay, but predominantly of sandy clay, which has a higher clay to sand ratio. Due to the incompetent nature of clayey soils, the overburden will not be able to host heavy buildings without excavating and refilling with competent materials such as sand/gravel and laterite. The underlying weathered layer is composed of clay, sandy clay and clayey sand formations. This incompetence promotes differential settlement of the building and shows as cracks on the erected walls. Along the South-North direction, the topsoil is predominantly composed of sandy clay, which has a high clay to sand ratio; this manifests as cracks on the surface of the ground. Along the East-West direction, it was noticed that the western flank has higher resistivity values than the eastern flank; this can result in uneven stress distribution, i.e., one side has a stronger support than the other. A major weak zone was noticed between stations 6 and 11 on the 2D resistivity structure. It was observed that a nearby stream that flows beneath a local bridge gradually seeps through the sandy clay compartments (which have higher permeability than clay) on the western flank (on top of the competent bedrock marked E) towards the eastern flank (the lowest part of the competent bedrock marked E) to the weak zone observed between stations 6 and 11. This weak zone is composed mainly of clay, which is porous but not permeable, resulting in the saturated zone (D1) caused by trapped water in the clay compartments.
Conclusion
A geophysical investigation involving the electrical resistivity method was carried out at a study location in south-western Nigeria. The electrical resistivity method utilized the Vertical Electrical Sounding (VES) and electrical imaging techniques. A major discontinuity (confined fracture zone) on the bedrock was identified by the electrical imaging along the S-N direction. This would have been the reason for the differential settlement, since it was confirmed that the foundation of the structure was placed on this weak bedrock. Moreover, the topsoil along the S-N direction is predominantly composed of sandy clay, which has a high clay to sand ratio; this manifests as cracks on the surface of the ground. A major weak zone containing a saturated zone was also discovered along the E-W direction. The result of this research has shown that the causes of the cracks and distress on the walls within the site may have been influenced by the differential settlement resulting from the incompetent subsoil materials and the fractured bedrock on which the foundation of the building was built. In conclusion, the importance of a pre-construction geophysical investigation before the erection of buildings cannot be overemphasized, since this will help in the design of proposed buildings that will be able to withstand subsurface instability with time.
Figure 1. Sketch map of part of Ilesa showing the study area.
Figure 4. Histogram of the VES curve types of the study area.
Twisted modules and quasi-modules for vertex operator algebras
We use a result of Barron, Dong and Mason to give a natural isomorphism between the category of twisted modules and the category of quasi-modules of a certain type for a general vertex operator algebra.
Introduction
In the theory of vertex operator algebras, for a vertex operator algebra V, in addition to the notion of V-module one has the notion of σ-twisted V-module, where σ is a finite-order automorphism of V. For a V-module W, each element v ∈ V is represented by a vertex operator Y_W(v, x), where these vertex operators are mutually local in the sense that for u, v ∈ V there exists a nonnegative integer k making the locality relation displayed below hold. Twisted modules were first introduced and used by Frenkel, Lepowsky and Meurman in their construction of the moonshine module vertex operator algebra V♮ (see [L1], [FLM]). Let V be a vertex operator algebra and let σ be an automorphism of order N. For a σ-twisted V-module W ([L1], [FLM], [FFR], [D]), each element v of V is represented by a twisted vertex operator Y_W(v, x), where these twisted vertex operators are also mutually local.
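In standard notation (cf. [FLM], [DL]), the vertex operator attached to v and the mutual locality relation read as follows; for a σ-twisted module the powers of x run over (1/N)Z instead of Z:

```latex
Y_W(v, x) = \sum_{n \in \mathbb{Z}} v_n x^{-n-1} \in (\operatorname{End} W)[[x, x^{-1}]],
\qquad
(x_1 - x_2)^k \, Y_W(u, x_1)\, Y_W(v, x_2) = (x_1 - x_2)^k \, Y_W(v, x_2)\, Y_W(u, x_1).
```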
In a recent work [Li3], to associate certain (infinite-dimensional) Lie algebras with vertex algebras, we studied what we called quasi local vertex operators (cf. [GKK]). Let W be any vector space. A subset S of Hom(W, W((x))) is said to be quasi local if for any a(x), b(x) ∈ S there exists a nonzero polynomial p(x_1, x_2) such that p(x_1, x_2) a(x_1) b(x_2) = p(x_1, x_2) b(x_2) a(x_1).
It was proved therein that any maximal quasi local subspace has a natural vertex algebra structure and any quasi local subset generates a vertex algebra. This particular result generalizes the main result of [Li1], which states that for any vector space W, any set of mutually local vertex operators on W generates a vertex algebra with W as a natural module. However, the space W under the natural action is not a module for vertex algebras generated by quasi local vertex operators on W, though a certain weaker version of the Jacobi identity was proved to hold. This motivated us to introduce a new notion of quasi module for a vertex algebra. For a quasi module W for a vertex algebra V, each element v of V is represented by a vertex operator Y_W(v, x), and the vertex operators Y_W(v, x) for v ∈ V form a quasi local subspace. On twisted modules for vertex operator algebras there is a conceptual work [BDM], in which, for any vertex operator algebra V and for any positive integer k, a canonical isomorphism was established between the category of V-modules and the category of twisted modules for the tensor product vertex operator algebra V^{⊗k} with respect to permutation automorphisms. In [BDM], a central role was played by the geometric change-of-coordinate x = z^k. It has been known ([Z], [H1-3], cf. [L2]) that for any vertex operator algebra V and for any f(z) ∈ zC[[z]] with f′(0) ≠ 0, the change-of-coordinate x = f(z) gives rise to a "new" vertex operator algebra structure on V, which was proved to be isomorphic to V. A special change-of-coordinate played a very important role in the study of modular invariance of graded characters ([Z], [DLM2]).
It has been well known (cf. [FZ]) that (untwisted) affine Lie algebras together with their highest weight modules can be naturally associated with vertex operator algebras and their modules. Furthermore, twisted affine Lie algebras together with their highest weight modules (see [K]) can be associated with twisted modules for those vertex operator algebras (cf. [FLM], [Li2]). On the other hand, it was proved in [Li3] that twisted affine Lie algebras, which are represented in a different form, together with their highest weight modules, can be naturally associated with quasi-modules for the vertex operator algebras associated with the untwisted affine Lie algebras. This suggests that there exists a natural connection between twisted modules and quasi-modules for a general vertex operator algebra.
The main purpose of this paper is to give a natural connection between twisted modules and quasi-modules for a general vertex operator algebra. Indeed, this goal has been achieved by using [BDM], thanks to the beautiful work of Barron, Dong and Mason. What we have proved is that the same change-of-coordinate maps used by Barron, Dong and Mason give rise to a natural isomorphism between the category of twisted modules and the category of quasi-modules of a certain special type.
We thank Yi-Zhi and Kailash for organizing this great conference, in honor of Professors James Lepowsky and Robert Wilson. I am very grateful for having Jim and Robert as teachers and as friends as well.
Twisted modules and quasi-modules
We here present the main result, a natural isomorphism between the category of twisted modules and the category of quasi-modules of a certain type for a general vertex operator algebra.
First, we recall the definitions of the notions of twisted module and quasi-module. Let V = ⊕_{n∈Z} V_(n) be a vertex operator algebra, fixed throughout this section. For the definition and basic properties we refer to [FLM] and [FHL]. Let σ be an automorphism of V of order N (a positive integer).
A σ-twisted V-module ([L1], [FLM], [FFR], [D], [DLM1]) is a vector space W equipped with a linear map Y_W(·, x) from V to (End W)[[x^{1/N}, x^{-1/N}]] satisfying the twisted Jacobi identity. Note that, as a convention, for α ∈ R, the expressions (x_1 ± x_2)^α are understood as the formal series in the nonnegative integral powers of the second variable x_2. If u ∈ V^j with 0 ≤ j ≤ N − 1, the twisted Jacobi identity takes a simpler form. Remark 2.1. Note that the above defined notion of σ-twisted V-module, which is the one defined in [DLM2] and [BDM], corresponds to the notion of σ^{-1}-twisted V-module in [DLM1].
The twisted Jacobi identity is equivalent to the following weak commutativity and associativity ([DL], [Li2]): for u, v ∈ V, there exists a nonnegative integer k such that the corresponding relations hold. From now on we fix an automorphism σ of order N for the fixed vertex operator algebra V. For u, v ∈ V, there exists a nonzero polynomial p(x_1, x_2) such that quasi-locality holds, where Z_+ denotes the set of positive integers. As in [BDM] we shall also heavily use the expression ∆_N(x^N)^{-1}. For convenience we set (2.14). The following result was proved in [BDM]. Remark 2.4. Recall from [BDM] the interpretation of the formal variable notations in Proposition 2.3. First, for any nonzero α ∈ (1/N)Z, we adopt the convention for expanding rational powers, and this explains the formal variable notations in Proposition 2.3. As we shall mention in the following remark, we shall also use another (different) substitution. For this purpose, we also write Y(u, (x + x_0)^α − x^α) for this particularly defined expression. It was shown in [BDM], page 363, that for h, α ∈ (1/N)Z, the corresponding expansion identity holds. Warning: the following expansion is an infinite divergent sum if n < 0.
Remark 2.5. We shall need a different substitution z = (x + x_0)^N − x^N for rational powers z^α, α ∈ (1/N)Z. Let p(x_0, x) = x_0^k + x q(x_0, x), where k is a positive integer and q(x_0, x) is a polynomial. We consider the corresponding expansion, and we use the notation z^α |_{z = p(x_0, x), x_0 ≫ x} for this particular expansion. We shall need the following simple result: Proof. We shall just prove the first identity, as the second will follow easily. Since [L(0), L(n)] = −n L(n) for n ∈ Z, it follows that α^{L(0)} L(n) α^{−L(0)} = α^{−n} L(n).
Using this we obtain the desired identity, proving the assertion.
The following is the first half of our main result of this paper. Theorem 2.7. Let (W, Y_W) be a weak σ-twisted V-module. Then (W, Ỹ_W) carries the structure of a (G, φ)-quasi V-module.
Third, from Lemma 2.6, for any N-th root of unity α, we have the corresponding identity. For g ∈ G, u ∈ V, noticing that gΦ(x) = Φ(x)g, we obtain the required compatibility of Ỹ_W. Now it remains to prove the quasi Jacobi identity.
The expression lies in C[x_1^{±1}, x_2^{±1}, z]. (2.23) Furthermore, let w ∈ W. In view of Remark 2.5, there exists a positive integer l such that the corresponding relation holds, where k is the nonnegative integer as in (2.23). Now, we shall perform the substitution z_0 = (x_2 + x_0)^N − x_2^N, x_0 ≫ x_2 on both sides. Notice that the expression on the right-hand side involves nonnegative integral powers of z_0 only, so that the substitutions z_0 = (x_2 + x_0)^N − x_2^N, x_0 ≫ x_2 and z_0 = (x_2 + x_0)^N − x_2^N, x_2 ≫ x_0 agree on the right-hand side. Performing the substitution z_0 = (x_2 + x_0)^N − x_2^N, x_0 ≫ x_2 on both sides and using Remark 2.5, we obtain (2.26). Combining (2.24) and (2.26), we obtain the quasi Jacobi identity. Therefore, (W, Ỹ_W) carries the structure of a (G, φ)-quasi V-module.
Remark 2.8. Let (W, Y_W) be any weak V-module. Then the same proof of Theorem 2.7 shows that (W, Ỹ_W) is a (G, φ)-quasi V-module.
Next we present the second half of our main result of this paper.
Let (W, Y_W) be a (G, φ)-quasi V-module. Then (W, Ȳ_W) carries the structure of a weak σ-twisted V-module.
Proof. For convenience, let us simply use ∆(x) for ∆_N(x) in the proof. We prove the weak commutativity and the weak associativity, which amount to the twisted Jacobi identity. Let u, v ∈ V. As ∆(x)u, ∆(x)v ∈ V[x^{1/N}, x^{−1/N}], from the quasi Jacobi identity there exists a nonnegative integer k such that (2.30) holds. It follows that there exists a nonnegative integer k′ ≥ k such that the strengthened relation also holds. Next, we establish the weak associativity. Let w ∈ W. Assume u ∈ V^j for some 0 ≤ j ≤ N − 1, i.e., σ(u) = ω_N^j u. From (2.29), let l be a nonnegative integer such that the truncation condition holds. Using the commutation relation (2.31) and, in view of Remark 2.4, substituting z_0 = (x_2 + x_0)^{1/N} − x_2^{1/N}, x_2 ≫ x_0, we get the desired expression; for the left-hand side, we use Proposition 2.3.
Exploring the Potential of Green Hydrogen Production and Application in the Antofagasta Region of Chile
Green hydrogen is gaining increasing attention as a key component of the global energy transition towards a more sustainable industry. Chile, with its vast renewable energy potential, is well positioned to become a major producer and exporter of green hydrogen. In this context, this paper explores the prospects for green hydrogen production and use in Chile. The perspectives presented in this study are primarily based on a compilation of government reports and data from the scientific literature, which mainly offer a theoretical perspective on the efficiency and cost of hydrogen production. To address the need for experimental data, an ongoing experimental project was initiated in March 2023. This project aims to assess the efficiency of hydrogen production and consumption in the Atacama Desert through the deployment of a mobile on-site laboratory for hydrogen generation. The facility is mainly composed of solar panels, electrolyzers, fuel cells, and a battery bank, and it moves through the Atacama Desert in Chile at different altitudes, starting from sea level, to measure the efficiency of hydrogen generation through the energy approach. The challenges and opportunities in Chile for developing a robust green hydrogen economy are also analyzed. According to the results, Chile has remarkable renewable energy resources, particularly in solar and wind power, that could be harnessed to produce green hydrogen. Chile has also established a supportive policy framework that promotes the development of renewable energy and the adoption of green hydrogen technologies. However, there are challenges that need to be addressed, such as the high capital costs of green hydrogen production and the need for supportive infrastructure. Despite these challenges, we argue that Chile has the potential to become a leading producer and exporter of green hydrogen or derivatives such as ammonia or methanol. The country's strategic location, political stability, and strong commitment to renewable energy provide a favorable environment for the development of a green hydrogen industry. The growing demand for clean energy and the increasing interest in decarbonization present significant opportunities for Chile to capitalize on its renewable energy resources and become a major player in the global green hydrogen market.
Introduction
Green hydrogen (GH) is a clean energy carrier that can be produced by splitting water molecules into hydrogen and oxygen using renewable energy sources, such as solar, wind, or hydro power. The hydrogen produced in this way has no carbon footprint and can be used in several industries, including transport [1], manufacturing [2,3], and power generation [4][5][6]. In addition to its direct application as a fuel source, GH plays a vital role as a versatile raw material to produce various synthetic hydrocarbon fuels. These synthetic fuels, commonly referred to as "electro fuels" or "E-fuels", are derived from the utilization of captured carbon dioxide or the separation of nitrogen from the atmosphere through a reaction with GH. Some examples of e-fuels include E-methanol, E-methane, and E-ammonium [7][8][9]. Chile is a country with a tremendous potential for renewable energy, particularly in solar, tidal, and wind power [10,11]. As such, the country has established a goal of attaining carbon neutrality by 2050 [12]. The production of green hydrogen is seen as a key element to accomplish this target [12,13]. Figure 1 shows a visual representation of the main energy projects currently in progress [14]. Comprehensive information about these initiatives is available in Table S1 in the supporting information. In addition, the country is also aiming to reduce its reliance on fossil fuels, especially in the mining and transportation sectors. Beyond its abundant renewable energy sources, Chile is well positioned to develop a green hydrogen industry, given its advantages for accessing export markets, including those in Asia and Europe [15,16], and its strategic location as a hub for energy trade between the Americas and the Pacific [10]. In 2020, the Chilean Government released the national GH strategy, which is a long-term plan to establish a competitive hydrogen industry based on renewable resources, with the goal of becoming the world's most cost-effective GH producer by 2030, together with positioning Chile as one of the leading exporters of hydrogen by 2040 [13]. The strategy entails a three-stage plan to accelerate the deployment of GH-based technologies in multiple economic sectors and critical applications within the country [17].
The first stage of the strategy focuses on tapping the domestic market and proposes the implementation of GH in six primary applications: (i) refineries, (ii) ammonia, (iii) mining haul trucks, (iv) long-range buses, (v) heavy-duty trucks, and (vi) blending GH into the gas network [12,13]. The second stage involves the expansion of green ammonia production on a larger scale, promoting the entry of the country into international markets through the establishment of commercial agreements. This strategic approach aims to enhance the economic viability of the green hydrogen market. In the third and final stage, Chile seeks to become a leading global supplier of clean energy by expanding and diversifying green ammonia exports into new applications, such as maritime transport, as well as synthetic fuels for the aviation industry [18].
Notably, three of the six applications of the first stage are directly associated with mining activities, namely, mining haul trucks, long-range buses, and heavy-duty trucks. The mining industry is a significant contributor to the country's economy and plays a crucial role in promoting the use of GH. Copper, being a well-known commodity, is a key driver of this growth. Figure 2 shows the approximate location of GH projects across various regions of the country, organized by their corresponding application sectors. Further details of these projects are listed in Table 1. It is worth noting that a significant portion of these projects are concentrated in the Antofagasta region, which is located at the central part of the Atacama Desert. This location has been chosen strategically due to two primary reasons: firstly, the significant availability of renewable resources in the area (especially sunlight), and, secondly, the well-established and thriving mining industry in the region. As a result, the Antofagasta region offers a highly favorable location for the successful implementation of GH projects [19].
Table 1. List of H2 projects along Chile shown in Figure 2.
This work provides a comprehensive and brief overview of the potential for hydrogen generation in the diverse locations of the Antofagasta region by synthesizing relevant data from several reports. Moreover, the study introduces a novel Green Hydrogen Mobile Pilot Plant dedicated to mapping the real GH generation potential across the Atacama Desert. The mobile facility traversed the region, using sunlight to produce hydrogen, and simultaneously measuring efficiency and other crucial factors under realistic field conditions. The results of this approach reinforce the potential of the Antofagasta region for hydrogen generation. Furthermore, the paper reports on a forthcoming measuring campaign, which aims to offer policymakers and industry stakeholders valuable field data. This data will be vital for promoting the development of the hydrogen industry in the region, and, consequently, aid in meeting global climate targets.
Antofagasta as a HUB of Green Hydrogen
The National Energy Commission of Chile (Comisión nacional de Energía de Chile, CNE) has recently released the preliminary demand report for the period of 2021-2041 in the country, which projects a progressive rise of energy demand for the production of GH during that period. The report indicates that the energy demand is expected to increase from 199 GWh in 2023 to 40,636 GWh in 2041 in order to achieve global carbon neutrality [20]. To meet this demand, a gradual integration of operational projects is necessary. In this scenario, the Atacama Desert becomes a key location for the development of large-scale PV and CSP systems, owing to its status as one of the regions with the highest levels of solar radiation worldwide (see Figure 3). field conditions. The results of this approach reinforce the potential of the Antofagasta region for hydrogen generation. Furthermore, the paper reports on a forthcoming measuring campaign, which aims to offer policymakers and industry stakeholders valuable field data. This data will be vital for promoting the development of the hydrogen industry in the region, and, consequently, aid in meeting global climate targets.
Antofagasta as a HUB of Green Hydrogen
The National Energy Commission of Chile (Comisión Nacional de Energía de Chile, CNE) has recently released the preliminary demand report for the period of 2021-2041 in the country, which projects a progressive rise of energy demand for the production of GH during that period. The report indicates that the energy demand is expected to increase from 199 GWh in 2023 to 40,636 GWh in 2041 in order to achieve global carbon neutrality [20]. To meet this demand, a gradual integration of operational projects is necessary. In this scenario, the Atacama Desert becomes a key location for the development of large-scale PV and CSP systems, owing to its status as one of the regions with the highest levels of solar radiation worldwide (see Figure 3). The project map released by the Chilean association of renewable energies and storage (ACERA, Asociación Chilena de Energías Renovables y Almacenamiento) reveals that the Antofagasta region will witness a significant influx of 123 photovoltaic solar projects in the near future. A detailed breakdown of the number of projects and their corresponding power output for each stage is provided in Table 2. Furthermore, the approximate geographical locations of the solar photovoltaic projects are displayed in Figure 4. Having an understanding of the solar spectrum is critical in designing and studying numerous technologies [22]. It is important to investigate the performance of photovoltaic modules after their manufacture. According to the literature, there are two methods to evaluate the performance of PV modules: power analysis and energy analysis [23]. Generally, the power is measured under standard test conditions (STC), that is, a spectral distribution with AM 1.5 air mass at a temperature of 25 °C and an intensity of 1000 W/m². This approach makes it necessary to assume STC conditions, which are unlikely to occur at the sites where the modules are actually installed. Therefore, evaluating the performance of PV modules by power may not be a suitable option if the STC conditions are not met. On the other hand, the energy rating of the module plays a fundamental role in measuring the performance under field conditions [24]. In the latter, the energy rating of the module is determined by measuring its characteristics along with the corresponding data on environmental conditions. In this regard, the in situ measurement of PV modules is imperative to accurately evaluate their operational performance under realistic environments.
The adoption of such a measurement process has become a prerequisite for ensuring the reliable assessment of PV modules.
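One common way to express such an energy rating under field conditions is the performance ratio, which compares the measured energy yield against the yield an ideal STC-rated array would give under the same in-plane irradiation. The sketch below uses illustrative numbers, not campaign data; only the 31.8 kW array size matches the P3H2V plant described below.

```python
def performance_ratio(e_measured_kwh, p_stc_kw, h_poa_kwh_per_m2,
                      g_stc_kw_per_m2=1.0):
    """Performance ratio PR = actual yield / reference (STC-based) yield.

    e_measured_kwh   : energy actually produced over the period
    p_stc_kw         : array nameplate power at STC
    h_poa_kwh_per_m2 : in-plane irradiation over the same period
    """
    reference_yield = p_stc_kw * h_poa_kwh_per_m2 / g_stc_kw_per_m2
    return e_measured_kwh / reference_yield

# Illustrative day (assumed values): 31.8 kW array, 8.2 kWh/m2 of in-plane
# irradiation, and 196 kWh measured at the array output.
pr = performance_ratio(196, 31.8, 8.2)
print(f"PR = {pr:.2f}")  # ~0.75; soiling and temperature losses keep PR below 1
```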
Green Hydrogen Mobile Pilot Plant
Numerous research centers are dedicated to exploring the potential applications of green hydrogen in Chile. Among these institutions, the "Centro de Investigación Científica y Tecnológica de la Minería" (CICITEM) focuses on developing innovative solutions and technologies for the mining industry. CICITEM has recently undertaken a project named the "Green Hydrogen Mobile Pilot Plant (Planta Piloto Portable de Hidrógeno Verde, P3H2V)". The aim of this project is to assess and delineate the efficiencies of the production and use of hydrogen within the context of the Atacama Desert, particularly in proximity to mining operations located in the region. These mining activities represent a significant potential market for hydrogen as an energy carrier or renewable fuel. This mobile pilot plant employs an electrolysis process to split water into hydrogen and oxygen, storing the hydrogen in high-pressure tanks for later use. The P3H2V plant is designed to evaluate the feasibility and effectiveness of green hydrogen production under different environmental conditions and scenarios. It boasts a production capacity of up to 0.5 Nm³ of hydrogen per hour. The main objective of this initiative is to demonstrate the viability of producing green hydrogen using renewable energy sources in a variety of settings, with a specific focus on its potential application in the mining sector. This pilot plant is part of a broader effort by CICITEM to promote the use of green hydrogen as a sustainable and clean alternative to fossil fuels. The lack of empirical investigations related to hydrogen production in the Antofagasta region has generated considerable uncertainty regarding the development of this nascent industry. To address this knowledge gap, the P3H2V plant will facilitate a comprehensive investigation into the feasibility and sustainability of hydrogen production in this region. Figure 5 displays a schematic representation (Figure 5a) and a corresponding photograph (Figure 5b) of the P3H2V during its initial measurements conducted within the Atacama Desert. The sequence of elements shown in Figure 5a, arranged from left to right, comprises photovoltaic panels, a hydrogen storage tank, a fire wall to mitigate potential flammability hazards associated with hydrogen, a rack containing three fuel cells, a water purification system, and a reverse osmosis system. Additionally, the facility also includes two racks, each with four electrolyzers and a hydrogen purification unit. Auxiliary systems such as a water storage tank and a battery bank are located outside of the container to complement the P3H2V plant.
27.2 kW of them are allocated for the electrolyzers in the hydrogen production system, and 5.4 kW are used for the auxiliary equipment of the plant. The production of H 2 by electrolyzers is the second subsystem, which uses water fed from the WTM-01 tank. The water in the tank has been conditioned beforehand in the reverse osmosis and deionization units, to decrease its electrical conductivity to 20 mS/cm or less (tolerance accepted by electrolyzers). The production system comprises eight anion exchange membrane (AEM) electrolyzers with a total installed capacity of 20 kW, marked EZ-01 to EZ-08. In them, the water is dissociated into hydrogen and oxygen inside of two separate chambers. While sunlight is available, the electrolyzers are powered by the photovoltaic panels. The H 2 produced contains a small fraction of water vapor, so that, it is sent to H 2 dryer-type purifiers, HPS-01 and HPS-02. This achieves a purity level of 99.999% because water is the main impurity the outcome from the electrolyzers of the facility. The H 2 is then stored in a type IV tank at 35 bar pf pressure or used directly in the fuel cell bank. The O 2 produced in the process is vented out of the container. Lastly, the H 2 fuel cells for power generation constitute the third subsystem. It is composed of proton exchange membrane (PEM) fuel cell bank, FC-01, FC-02, and FC-03, totalizing a maximum capacity of 3.3 kWp. The fuel cells are fed by H 2 coming from either the production system or from H 2 storage area, through compressors integrated inside the fuel cells. The subsystems enable a catalytic electrochemical reaction that generates useful electricity, which can be either stored in a battery bank or used as backup power to power the electrolyzers with electricity when solar radiation is intermittent, as well as for the plant's utilities such as lighting, screen, and computers, among others.
Energies 2023, 16, x FOR PEER REVIEW 9 of 13 electricity, which can be either stored in a battery bank or used as backup power to power the electrolyzers with electricity when solar radiation is intermittent, as well as for the plant's utilities such as lighting, screen, and computers, among others. The P3H2V facility operates by supplying solar electric power and desalinated water to the electrolyzers during daylight hours, resulting in the production of H2, which is then stored in a type IV gaseous storage tank. During periods of limited solar resources, such as in the afternoons and nights, or when the performance of the electrolyzers drops significantly, the fuel cells are activated to generate electricity. This innovative approach enables the P3H2V facility to leverage the surplus of solar energy for the generation of electrical power, thereby achieving sustainable and efficient energy production and management.
The sampling campaign will consist of a minimum of 16 selected points (See Figure 7), which will be chosen based on their scientific and technological interest, utilizing the The P3H2V facility operates by supplying solar electric power and desalinated water to the electrolyzers during daylight hours, resulting in the production of H 2 , which is then stored in a type IV gaseous storage tank. During periods of limited solar resources, such as in the afternoons and nights, or when the performance of the electrolyzers drops significantly, the fuel cells are activated to generate electricity. This innovative approach enables Energies 2023, 16, 4509 9 of 12 the P3H2V facility to leverage the surplus of solar energy for the generation of electrical power, thereby achieving sustainable and efficient energy production and management.
The sampling campaign will consist of a minimum of 16 selected points (see Figure 7), which will be chosen based on their scientific and technological interest, utilizing the methodological criteria described below. Table 3 provides a detailed outline of the experimental design and georeferencing of the sites. The sampling campaign was designed based on several criteria, including:
1. Distance between points: A maximum of 100 km between consecutive points has been set (see the sketch after this list).
2. Altitude variation: The campaign will prioritize validation at different altitudes up to those relevant to the mining industry (~4000 m above sea level) in order to obtain data on the sensitivity of the PEM fuel cell to altitude.
3. Solar irradiation: The Antofagasta Region has favorable irradiation conditions, but local topography may affect the performance of photovoltaic panels. Thus, this factor has been considered as well.
4. Logistics: Diverse factors, including proximity to roads, terrain inclination, topographical flatness of the terrain, availability of municipal permits, access for transport trucks, and the public or private nature of the domain.
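The first criterion can be checked mechanically once candidate sites are georeferenced, as in the sketch below; the great-circle (haversine) distance is used, and the coordinates are placeholders rather than actual campaign sites.

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def check_spacing(points, max_km=100.0):
    """Yield the legs of the route that violate the spacing criterion."""
    for p, q in zip(points, points[1:]):
        d = haversine_km(p, q)
        if d > max_km:
            yield p, q, d

# Placeholder coordinates (lat, lon) for illustration only.
route = [(-23.65, -70.40), (-23.06, -70.29), (-22.46, -68.93)]
for p, q, d in check_spacing(route):
    print(f"{p} -> {q}: {d:.0f} km exceeds the 100 km limit")
```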
Figure 7. Map of P3H2V monitoring campaign (approximation).
Table 3. Georeferencing of the initial proposal for the P3H2V sampling campaign shown in Figure 7.
By taking these factors into account in the design of the sampling campaign, the collected data are expected to contribute significant scientific and technical value.
Future and Perspectives
Data from the preliminary analysis (a five-day campaign) show that the theoretical total of 2.8 kg/day of H2 is achievable, but at the first campaign point only an average of 1.8 kg of H2 per day was generated. The reduction in daily energy generation can be attributed to the soiling of the photovoltaic panels and their high operating temperature, which negatively impacts overall efficiency. Calculations indicate that the peak capacity is attained for five hours of the day, with a notable decline in power output observed during the afternoon. These findings highlight the relevance of addressing panel soiling to optimize energy production and guarantee consistent and reliable electrical output. Further research and development in this area may result in solutions for improving the performance and longevity of photovoltaic systems.
Assessing parameters such as electrolyzer efficiency, hydrogen storage behaviour, and fuel cell utilization holds significant importance within the field of hydrogen generation, as the accurate measurement of these parameters is crucial in determining the overall efficiency of the process. In addition, the impact of seasons and of the day/night cycle on photovoltaic power will be considered when generating this map. Therefore, a rigorous analysis of the measured parameters will be conducted to exclude any potential errors and establish the potential of hydrogen generation in the Antofagasta Region on an empirical basis and under realistic conditions. The comprehensive mapping campaigns and the data obtained will be useful for developing a simulation-based regional map. The map will enable the identification of strategic points within the region that have the highest production efficiency. The simulation-based approach provides a more advanced and precise depiction of the hydrogen generation system, enabling the identification of potential bottlenecks and opportunities for optimization.
This project aims to provide a comprehensive understanding of the key factors influencing the efficiency of hydrogen production, storage, and utilization. The forthcoming publication of the full map of GH production under realistic field conditions by the end of this year is anticipated to significantly contribute to this endeavour. The growth of the GH energy industry presents various challenges that necessitate careful attention and proactive measures. These challenges include the requirement for large-scale technological industrialization, substantial investments, and coordinated efforts to meet the increasing demand for GH. Furthermore, there is an urgent need to cultivate a skilled talent pool proficient in electrolysis, hydrogen storage, fuel cell technology, and system integration. Overcoming these challenges calls for continuous research and development initiatives to drive innovation and surmount technical limitations. Collaborative actions involving governments, industry stakeholders, and research institutions are essential for overcoming these challenges and fostering a resilient and sustainable hydrogen economy.
The insights derived from this study hold immense value for policymakers and stakeholders in the energy sector and industry, offering essential information to guide decision-making processes and strategic planning. While economic considerations are not the primary focus of this project, the findings and knowledge generated will contribute to a more comprehensive understanding of the technical aspects and potential applications of hydrogen in the region. This understanding can inform future economic assessments and decision-making processes, enabling a more informed and strategic approach to the development of hydrogen-related projects in the area. With the anticipated publication of the full map of GH production under realistic field conditions by the end of this year, stakeholders will have access to comprehensive data and analysis that can inform and support their efforts in advancing the hydrogen economy.
Effect of probiotic containing Saccharomyces boulardii on experimental ochratoxicosis in broilers: hematobiochemical studies
In the present investigation, the toxicopathological effects of ochratoxin A at 0.5 ppm on hematobiochemical parameters of broilers were studied, along with the efficacy of a dietary probiotic containing the yeast culture Saccharomyces boulardii at 10 mg/kg of feed. One hundred and twenty day-old chicks were randomly divided into four groups of thirty chicks each. Groups A and C were offered normal feed and feed supplemented with the probiotic Saccharomyces boulardii, respectively. The birds in group B were fed ochratoxin A at 0.5 ppm of feed, whereas the birds of group D were fed ochratoxin A at 0.5 ppm along with the probiotic Saccharomyces boulardii at 10 mg/kg of diet. Hematological studies revealed a significant decrease in haemoglobin and packed cell volume in the birds of group B, with the effect reduced in the birds of group D due to the probiotic. Biochemical profiles revealed significant improvement in the probiotic-treated group D when compared with the decreased values of total protein, albumin, and globulin and the increased levels of serum creatinine and SGPT in the birds of group B.
Introduction
In India, the poultry industry has developed by leaps and bounds from a small-scale backyard venture to the status of a full-fledged, modernized, agro-based industry. India ranks 4th in egg production and 19th in broiler production, with an annual turnover of Rs. 65 billion [5]. One of the most effective ways to run a profitable poultry industry is to reduce the input cost. Feed is the major input in poultry production, constituting 70-75% of the total cost of broiler production. Poor-quality or damaged feed may result in poor production, and discarding such feed entails an additional monetary loss.
Mycotoxins are considered a serious obstacle to realizing the full genetic potential of poultry. Several species of fungi infect grain and forage crops in the field, during harvest and transportation, and while in storage, and produce mycotoxins. More than 300 different types of mycotoxins have been identified, and many more remain undiscovered. One species of mould can produce different mycotoxins; conversely, different moulds can produce the same mycotoxin [11]. Among the mycotoxins, ochratoxin and aflatoxin occupy an important position in causing mycotoxicosis in poultry. Reports of ochratoxicosis are frequent in India, and it is understood to be an emerging problem for humans, livestock, and poultry, requiring proper attention [7,9]. Ochratoxicosis decreases the profitability of the poultry industry by decreasing growth rate and egg production and increasing susceptibility to diseases. Several methods have been tried in the past to detoxify feed ingredients contaminated with toxic fungal metabolites [16]. These include physical, chemical, nutritional, and biological methods. Advances made in the field of biotechnology in the last decades have resulted in the development of newer strategies for tackling the problem of mycotoxins [1,12,19]. Practical and cost-effective methods to prevent ochratoxicosis in the poultry field are in great demand.
Studies indicate that Saccharomyces boulardii is effective against ochratoxicosis in poultry [3,4]. It was therefore tested against ochratoxin A to ascertain its efficacy in reducing the toxin's adverse effects in broilers.
Materials and Methods
The present research work was conducted at Department of Pathology, Bombay Veterinary College, Parel, Mumbai, India.
Production of Ochratoxin
Source of organism: Aspergillus ochraceus NRRL 3147 culture maintained at the Department of Pathology, Nagpur Veterinary College, Nagpur, India was used as source.
Overnight-soaked broken wheat (50 g + 25 ml tap water) was autoclaved at 121 °C for 20 minutes and inoculated with a fungal spore suspension. The inoculum was incubated for 12 days at room temperature in a dark place, with vigorous shaking once a day to break up the brown mycelial mass. Using a sterile wire loop, the mycelial growth from the flask was collected and inoculated on an SDA (Sabouraud dextrose agar) plate for isolation and identification of Aspergillus ochraceus.
Colonies of Aspergillus ochraceus were observed on the SDA plate. Microscopic examination was carried out after staining with lactophenol cotton blue stain. The fermented wheat was autoclaved to kill the spores and dried overnight at 80 °C in a hot air oven. The dried material was powdered and stored in a dark place for further use.
Quantification of Ochratoxin:
The representative samples of feed were analyzed for the quantification of ochratoxin A, by thin layer chromatography (TLC) [2].
Procedure
Steps for the quantification of ochratoxin A are as follows:
1. Collect 40-50 g of broken wheat (sample) in a beaker.
2. Add 10 g celite, 2 g NaCl, 110 ml methanol, and 90 ml distilled water.
3. Shake for half an hour.
4. Filter through Whatman filter paper No. 1.
5. Collect 50 ml of filtrate.
6. Transfer it to a separating funnel.
7. Add 50 ml hexane.
8. Shake for five minutes in the separating funnel.
9. After shaking, collect the lower feed-sample layer in a beaker.
UASea: A Data Acquisition Toolbox for Improving Marine Habitat Mapping
Unmanned aerial systems (UAS) are widely used in the acquisition of high-resolution information in the marine environment. Although the potential applications of UAS in marine habitat mapping are constantly increasing, many limitations, most of which are related to the prevalent environmental conditions, need to be overcome to achieve efficient UAS surveys. The knowledge of the UAS limitations in marine data acquisition and the examination of the optimal flight conditions led to the development of the UASea toolbox. This study presents the UASea, a data acquisition toolbox developed for efficient UAS surveys in the marine environment. The UASea uses weather forecast data (i.e., wind speed, cloud cover, precipitation probability, etc.) and adaptive thresholds in a ruleset that calculates the optimal flight times for the acquisition of reliable marine imagery using UAS in a given day. The toolbox provides hourly positive and negative suggestions, based on optimal or non-optimal survey conditions in a day, calculated according to the ruleset. We acquired UAS images in optimal and non-optimal conditions and estimated their quality using an image quality equation. The image quality estimates are based on the criteria of sunglint presence, sea surface texture, water turbidity, and image naturalness. The overall image quality estimates were highly correlated with the suggestions of the toolbox, with a correlation coefficient of −0.84. The validation showed that 40% of the toolbox suggestions were a positive match to the images with higher quality. Therefore, we propose the optimal flight times to acquire reliable and accurate UAS imagery in the coastal environment through the UASea. The UASea contributes to proper flight planning and efficient UAS surveys by providing valuable information for the mapping, monitoring, and management of the marine environment, and it can be used globally in research and marine applications.
Introduction
Habitat mapping is essential for marine spatial planning, management, and conservation of coastal marine habitats. Remotely sensed data combined with in-situ measurements are usually used for acquiring marine information. A plethora of remote sensing methods are available for marine habitat mapping; these methods differ as to the sensor, data resolution, spatial scale, expenses, repeatability, and data availability [1,2]. A variety of satellite sensors offer imagery from low (Landsat, Sentinel-2) to high resolutions (IKONOS, Quick Bird, Worldview) [3][4][5][6][7][8][9], while airborne sensors offer imagery with higher spatial resolutions at medium to large scales [10,11].
In recent years, the use of the UAS has been widespread in marine data acquisition in several coastal and marine applications [12][13][14][15][16][17]. The potential applications of the UAS in marine mapping and monitoring are constantly increasing, as they are an effective tool for acquiring high-resolution imagery [18][19][20][21] at a low cost and increased operational flexibility [22] in small to large areas [23]. The ability of the UAS to acquire sub-meter resolution imagery, which cannot be achieved using other remote sensing methods, fills the gap between satellite, airborne, and fieldwork data [16,22]. Thus, UAS allow the detection and accurate distinction of small marine features and the monitoring of habitat evolution [12,24].
However, the UAS surveys deal with many limitations related to the prevalent environmental conditions during data acquisition [25][26][27], which significantly reduce the possible acquisition times. These limitations have been analyzed extensively [28][29][30] while their effect on the quality of the UAS imagery has been reported in the recent literature [19,27,31,32]. UAS data acquisitions during non-optimal conditions lead to unreliable information and inaccurate results of marine habitat mapping [32].
The theoretical background of the parameters that affect the safety, accuracy, and reliability of the UAS surveys and aerial imagery quality in the marine environment has been presented through a UAS data acquisition protocol [33]. The UAS protocol consists of three main sections: (i) morphology of the study area, (ii) environmental conditions, and (iii) survey planning. The section on environmental conditions has been examined extensively, as the visible artifacts that are produced by them affect the sea surface and the water column, resulting in seabed visibility issues. The effect of the environmental conditions on high-resolution orthophoto-maps has been analyzed in an accuracy assessment study [32], identifying the sources of errors. The study proved that different environmental conditions result in different habitat coverages and classification accuracies. Although the optimal environmental conditions, flight paths [29], and flight altitude [31] for UAS data acquisition in the marine environment are well known, the detection of optimal acquisition times is still challenging.
In this study, we present the UASea as a toolbox for the identification of the optimal flight times to acquire UAS data in the marine environment. The UASea calculates the optimal survey times for acquiring reliable UAS information in the coastal environment using a ruleset. The ruleset consists of forecast weather data (i.e., wind speed, wave height, precipitation probability, etc.) and adaptive thresholds that exclude the outlier values of each parameter. The result is an hourly prediction for a given day of the suggested or non-suggested UAS survey times. The toolbox suggestions were validated against image quality estimations at both optimal and non-optimal acquisition times. The image quality estimates are derived from the factors that most affect UAS imagery, i.e., sunglint, sea surface texture, water clarity, and image naturalness distortions.
The toolbox validation contributes to solving the crucial problem of the optimal acquisition times for efficient UAS surveys in the marine environment. This will lead to the informed selection of appropriate flight times and the collection of reliable information in the marine environment, avoiding unnecessary fieldwork hours and processing time, excessive use of the equipment, and a huge amount of inaccurate data. This study aims to propose the UASea toolbox as a data acquisition tool for efficient UAS surveys in the marine environment.
Materials and Methods
UASea is a toolbox that gives hourly positive or negative suggestions for a given day about the optimal or non-optimal UAS acquisition times to conduct a UAS survey in the coastal environment. The suggestions are derived using forecast data of weather variables and adaptive thresholds in a ruleset (Figure 1). To validate the performance of the UASea, we conducted UAS surveys at both optimal and non-optimal times. The acquired aerial imagery was then evaluated as to its quality. In this study, we used an image quality equation, which consists of four parameters that have been proved to significantly affect the quality and reliability of remotely sensed imagery (i.e., sunglint presence, turbidity levels, sea-state conditions, image naturalness distortions). As the quality of the imagery is affected by the prevalent environmental conditions (e.g., wind speed, sunglint effect, waves) during data acquisition, the image quality estimations will provide more information on the permissible limits of each parameter.
UASea Toolbox
UASea toolbox is an interactive web application accessible through the internet from modern web browsers (https://uav.marine.aegean.gr/, accessed on 15 December 2020). It is designed using HTML and CSS scripts while JavaScript augments the user experience and user interactivity through mouse events (scroll, pan, click, etc.). It consists of a graphical user interface (GUI) component augmented by an app logic component (Figure 2), both framed and distributed by a web server for public access. GUI is a visual representation of various interactive visual components that allows users to interact with the UASea toolbox and is responsible for data input and output operations. Moreover, JavaScript is also responsible for app logic and calculations as the main core of the UASea toolbox. The app logic component utilizes user data input and asynchronously asks for weather forecast data in JSON format from the weather services, and based on the forecast values, suggests the optimal flight times suitable for marine applications. The results are presented in tabular format and additional figures for each forecast parameter, using Charts.js (Chart JS, https://www.chartjs.org/, accessed on 14 December 2020).
Weather Forecast Datasets
To identify the optimal flight times for marine mapping applications, the UASea toolbox uses short-range forecast data. In this context, we use (i) the Dark Sky (DS) API (Dark Sky by Apple, https://darksky.net/, accessed on 10 May 2020) for two days of forecast data on an hourly basis and (ii) the Open Weather Map (OWM) API (Open Weather Map, https://openweathermap.org/, accessed on 10 May 2020) for a five-day forecast with a three-hour step. Both services provide a limited free-of-charge usage of their APIs; DS allows up to 1000 free API calls per day, and OWM provides 60 calls per minute. The forecast data are provided in lightweight and easy-to-handle JavaScript Object Notation (JSON) file format on asynchronous API requests. DS API uses a great variety of data sources either globally, such as NOAA's GFS model and the German Meteorological Office's ICON model, or regionally, such as NOAA's NAMM available in North America, and aggregates them to provide a reliable and accurate forecast for any given location. OWM also uses several data sources such as NOAA GFS, ECMWF ERA, data from weather stations (companies, users, etc.), as well as satellite and weather radar data. Their numerical weather prediction (NWP) model was developed based on machine learning techniques.
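As an illustration of the kind of request the app logic issues, the following minimal Python sketch pulls the public OWM 5-day/3-hour forecast. The exact fields the toolbox consumes are an assumption, and "YOUR_API_KEY" is a placeholder.

```python
import requests

# Minimal sketch of a forecast request against the public OWM endpoint.
URL = "https://api.openweathermap.org/data/2.5/forecast"

def fetch_forecast(lat, lon, api_key):
    resp = requests.get(URL, params={"lat": lat, "lon": lon,
                                     "appid": api_key, "units": "metric"},
                        timeout=10)
    resp.raise_for_status()
    out = []
    for step in resp.json()["list"]:          # one entry per 3-hour step
        out.append({
            "time": step["dt_txt"],
            "temp_c": step["main"]["temp"],
            "humidity": step["main"]["humidity"],
            "clouds_pct": step["clouds"]["all"],
            "precip_prob": step.get("pop", 0.0),   # probability, 0..1
            "wind_ms": step["wind"]["speed"],
        })
    return out

# forecast = fetch_forecast(39.10, 26.55, "YOUR_API_KEY")  # Mytilene area
```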
Ruleset
The variables and their adaptive thresholds that constitute the toolbox ruleset are presented in Table 1. The parameters that have been proven to affect the quality of UAS imagery and flight safety have been used as variables in the ruleset. The suggested thresholds have been derived considering UAS protocols [25,26] and studies that have extensively analyzed the impact of the environmental conditions in marine applications [29,[34][35][36], the UAS specifications [37], and fieldwork experience. The thresholds are used to exclude inconsistent and outlier values that may affect the quality of the acquired images as well as the safety of the survey and UAS pilot. Considering the above, the ruleset is designed in such a way that it outlines the optimal weather conditions, suitable for reliable and accurate data acquisition, as well as for efficient short-range flight scheduling.

Table 1. The variables and the adaptive thresholds of the ruleset for the calculation of the optimal flight times using the UASea.

A set of mathematical rules based on Logical Conjunction and Set Theory was created using the mentioned ruleset. Every weather variable obtained by the weather APIs constitutes a distinct set, namely A for temperature, B for humidity, C for cloud coverage, D for the probability of precipitation, E for wind speed, F for the wave height, and G for the sun elevation angle, while each one of the above sets is accompanied by an additional set (A', B', C', D', E', F', G') that represents the adaptive thresholds. Optimal flight conditions necessitate the intersection between the former and the latter using Equation (1). The results of the equation imply two possible outcomes (0, 1), where 1 indicates optimal flight conditions while 0 stands for non-optimal flight conditions.
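A minimal sketch of that conjunction is shown below, with the decision returned as 1 or 0. The wind, cloud, and sun-elevation bounds follow figures quoted later in the validation (wind below 3 m/s, cloud cover up to 25%, sun elevation 25-45 degrees); the remaining intervals are placeholders, since the Table 1 values were not preserved here.

```python
# Sketch of the ruleset conjunction behind Equation (1): the decision is 1
# only if every forecast value falls inside its threshold interval.
# Some intervals below are placeholder assumptions (Table 1 not preserved).
THRESHOLDS = {
    "temp_c":        (0, 35),      # placeholder
    "humidity":      (0, 85),      # placeholder
    "clouds_pct":    (0, 25),      # per the validation text
    "precip_prob":   (0, 0.10),    # placeholder
    "wind_ms":       (0, 3),       # per the validation text
    "wave_m":        (0, 0.2),     # placeholder
    "sun_elev_deg":  (25, 45),     # per the validation text
}

def flight_decision(forecast, thresholds=THRESHOLDS):
    """Return 1 (optimal) if every variable is within bounds, else 0."""
    return int(all(lo <= forecast[k] <= hi for k, (lo, hi) in thresholds.items()))

hour = {"temp_c": 22, "humidity": 60, "clouds_pct": 10,
        "precip_prob": 0.0, "wind_ms": 1.2, "wave_m": 0.05, "sun_elev_deg": 30}
print(flight_decision(hour))  # -> 1
```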
UASea Screens
GUI, along with its visual components, is depicted in Figure 3. Users may navigate to the map element by zooming in/out and panning to the desired location and selecting the study area by clicking the map (Figure 3a). After clicking the map, a leaflet marker is created that triggers the "Adjust Parameters" panel and button (Figure 3b). The Adjust Parameters panel consists of an HTML form in which users can adjust the parameters and their thresholds and select one of the available weather forecast data providers. By hitting the submit button, the parameter adjustment panel disappears, and the decision panel becomes available at the bottom of the screen. At the top of the decision panel (Figure 3c), there is a date menu that is used to address the range of the available forecast data, while on the bottom of the decision panel, the results of the UASea toolbox are presented in tabular format (Figure 3d). In the "Decisions" row, green indicates optimal weather conditions, while red stands for non-optimal weather conditions. Finally, a set of figures for each one of the weather parameters is also available through the figures panel (Figure 3e).
Image Quality Estimations (IQE)
The image quality equation (Equation (2)) is a combination of four variables (sunglint, waves, turbidity, image naturalness) that correspond to the conditions that most affect the sea surface conditions, the water quality, and the image quality. As the images are acquired using an RGB sensor, we used RGB-based indices and statistics to quantify the effect of the parameters on the image quality. The processes were performed using R and Python packages. The image quality equation converts the values of each variable into an ascending rank (from 1 to N, where N is the number of images) and sums the ranks per image. These sums constitute the overall estimate of each image, where the lowest value corresponds to the best-quality image. In Equation (2), x1 is the percentage of sunglint pixels, x2 the percentage of turbid pixels, x3 the variability of the image texture, and x4 the BRISQUE image quality estimation.
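A minimal sketch of this rank-and-sum scoring, assuming ascending ranks with ties broken arbitrarily (the paper does not state a tie-breaking rule):

```python
import numpy as np

def overall_estimates(x1, x2, x3, x4):
    """Rank each quality variable across images (1 = best, i.e. lowest value)
    and sum the four ranks per image; lower totals mean higher quality."""
    X = np.column_stack([x1, x2, x3, x4])
    # argsort of argsort turns values into ascending ranks starting at 1
    ranks = X.argsort(axis=0).argsort(axis=0) + 1
    return ranks.sum(axis=1)

# Toy values for 4 images: sunglint %, turbid %, texture variability, BRISQUE
print(overall_estimates([0.02, 3.1, 0.5, 2.8],
                        [0.1, 2.5, 0.3, 1.0],
                        [2.7, 3.0, 2.8, 3.1],
                        [4.4, 28.9, 9.0, 20.0]))
# -> [ 4 15  8 13]: the first image scores best
```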
Sunglint Detection
A Brightness Index (BI) [38,39] was used for sunglint detection (Equation (3)) to isolate the brighter pixels of the images, which correspond to the sunglint pixels. To detect the brighter pixels of the image, we used the interquartile range (IQR), known as the Tukey fences method [40].

BI = (Red + Green + Blue)/3, (3)

where Red is the digital number (DN) of the red channel, Green is the DN of the green channel, and Blue is the DN of the blue channel.
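A minimal sketch of this detector is given below; the standard 1.5x IQR multiplier for the upper Tukey fence is an assumption, since the paper only states that Tukey fences were used.

```python
import numpy as np

def sunglint_fraction(rgb):
    """Percentage of pixels flagged as sunglint: BI above the upper Tukey
    fence (Q3 + 1.5*IQR). The 1.5 multiplier is an assumption here."""
    bi = rgb.astype(float).mean(axis=2)          # Equation (3): (R+G+B)/3
    q1, q3 = np.percentile(bi, [25, 75])
    upper_fence = q3 + 1.5 * (q3 - q1)
    return 100.0 * np.mean(bi > upper_fence)

rgb = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # toy image
print(f"sunglint pixels: {sunglint_fraction(rgb):.2f}%")
```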
Turbidity Levels
The Normalized Difference Turbidity Index (NDTI) [41][42][43] was used to calculate the turbidity levels of the acquired imagery (Equation (4)). The values of the NDTI vary from −1 to 1, where higher values represent higher levels of turbidity. After the extraction of the NDTI images, an adaptive threshold was applied to values ranging from 0.5 to 1 to isolate and count the highly turbid pixels in each image.

NDTI = (Red − Green)/(Red + Green), (4)

where Red is the digital number (DN) of the red channel, and Green is the DN of the green channel.
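A minimal sketch of the turbid-pixel count; the small epsilon guarding against division by zero is an implementation detail not stated in the paper.

```python
import numpy as np

def turbid_fraction(rgb, lo=0.5, hi=1.0):
    """Percentage of pixels whose NDTI (Equation (4)) falls in [lo, hi]."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    ndti = (r - g) / (r + g + 1e-9)   # epsilon avoids division by zero
    return 100.0 * np.mean((ndti >= lo) & (ndti <= hi))

rgb = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # toy image
print(f"highly turbid pixels: {turbid_fraction(rgb):.2f}%")
```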
Image Texture
We used the grey-level co-occurrence matrix (GLCM) method, one of the most commonly used methods in image quality assessment [44][45][46], to measure the texture of the images [47]. A GLCM package in R was used to calculate the statistics mean, variance, homogeneity, contrast, entropy, dissimilarity, second moment, and correlation of the red and green bands of our images, derived from the grey-level co-occurrence matrices. A principal component analysis (PCA) was used to reduce the number of the texture bands, where four PCs explained 90% of the variability of the texture features. Statistical variability indicators (i.e., standard deviation, coefficient of variation) were used to examine the reliability of the results.
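The study computed these statistics with the R glcm package; as a cross-language illustration, the sketch below uses scikit-image, which exposes a subset of the same GLCM properties, and treats the per-image spread of the principal-component scores as the texture-variability term x3. Both the property subset and that mapping are our assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA

PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]

def texture_features(gray_u8):
    """Per-image GLCM statistics (a subset of the R glcm package's set)."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

# One feature row per image (toy data standing in for the red/green bands).
rng = np.random.default_rng(0)
feats = np.array([texture_features(rng.integers(0, 256, (128, 128), dtype=np.uint8))
                  for _ in range(20)])
pcs = PCA(n_components=4).fit_transform(feats)   # PCs covering most variance
x3 = pcs.std(axis=1)                             # assumed variability score
```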
Image Naturalness
The Blind/Referenceless Image Spatial QUality Evaluator (BRISQUE) was used to provide an image quality score by comparing it to a default model based on natural-scene images as a distortion-independent measure of image quality assessment. The BRISQUE image quality score ranges from 0 to 100, where lower values refer to better quality [48]. The BRISQUE seems to perform well in measuring the naturalness of an image in cases where the exact type of image distortion is of no interest.
Correlation Analysis
We examined the correlation between the suggestions of the UASea toolbox per acquisition time and the image quality estimations, using the point biserial correlation method [49], as one variable is dichotomous (yes/no) and the other is quantitative. The values of the correlation coefficient vary from −1 to +1; positive values of r indicate the simultaneous increase or decrease of the variables, negative values of r indicate that when one variable increases, the other tends to decrease, while a zero value means that there is no linear relationship between the two variables. The results will allow us to measure the strength of the association between the two variables and to conclude on the reliability of the toolbox in the real world.
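A minimal sketch of this test with SciPy; the values are illustrative, not the study data.

```python
from scipy import stats

# Dichotomous toolbox suggestion (1 = fly, 0 = don't) vs continuous overall
# quality estimates (lower = better image); values are illustrative only.
suggestion = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
estimate   = [12, 18, 22, 45, 51, 61, 25, 40, 48, 55]

r, p = stats.pointbiserialr(suggestion, estimate)
print(f"r = {r:.2f}, p = {p:.4f}")   # a strongly negative r, as in the study
```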
UAS Surveys
Several surveys were conducted from May of 2020 to February of 2021 to examine the performance of the toolbox in seasonal conditions. The UAS surveys were conducted at both optimal and non-optimal times on each acquisition day, according to the suggestions of the toolbox, using a DJI Phantom 4 Pro system. The flight plan was created through Pix4Dcapture in a grid mission, and the settings were identical for each survey. The surveys were automated at a flight height of 70 m above ground, with a nadir viewing angle (90°), and the image overlap was set at 80%. The forecast weather data were acquired through the toolbox, while the local wind speed was measured using a handheld anemometer before each flight to verify that it matched the forecast values, and sky images were acquired to evaluate the cloud coverage.
The study area is a coastal area north of the town of Mytilene on Lesvos Island, Greece (Figure 4). The habitats of the seabed include an extended seagrass meadow of Posidonia oceanica, one of the most significant seagrass species, found only in the Mediterranean Sea. The morphology of the seabed also includes sandy areas and mixed types of algae at smaller depths, while the seabed depths range from 0.5 to 8 m. The surveyed area is easy to approach, close to a small harbor, and it belongs to the UAS free-flying zones.
We used a dataset of 20 surveys to validate the performance of the toolbox; 8 of the images were acquired at positive toolbox suggestions and 12 of them at negative toolbox suggestions. A sample of four acquisition dates is presented in Figure 5. Based on the toolbox suggestions, the first two surveys were conducted on 10 May 2020 at 4.00 p.m. and on 20 February 2021 at 2.00 p.m., at optimal acquisition times, while the surveys on 6 May 2020 at 12.00 p.m. and on 13 February 2021 at 12.00 p.m. were conducted at non-optimal acquisition times. The images acquired at optimal acquisition times are clear without sunglint presence; the first (10 May 2020) has a calm sea surface, while the second (20 February 2021) presents some wave wrinkles on the sea surface. The environmental conditions on these dates were within the adaptive thresholds of the toolbox. At both times, the wind speed was about 1 m/s, the sky was clear with a cloud cover lower than 25%, and sun elevation angles were between 25 and 45 degrees.

The images acquired at non-optimal acquisition times have a rough sea surface with sunglint presence that blurs the seabed information and prevents habitat distinction. At the two non-optimal acquisition times, the wind speed was 4 to 5 m/s, and the sky was partly cloudy. The negative results of the toolbox are due to the wind speed values, which are higher than the suggested 3 m/s, and the cloud cover, which was higher than 25%. Although the acquisition time on both dates was 12.00 p.m., the sun elevation angle on 6 May 2020 was higher than 45 degrees, while on 13 February 2021 it was lower than 45 degrees, as the sun is lower in the sky during the winter. This means that the thresholds of the sun elevation angle must be adapted according to the acquisition season to avoid the presence of sunglint on the images.
Validation Results
The calculations of the image quality estimates per variable and the overall estimates of the images are shown in Table 2. The x1, x2, x3, and x4 columns contain the calculated values of each variable, and the corresponding sort columns their ascending ranks. The overall estimates are calculated as the sum of the variable ranks. The overall quality estimates vary from 12 to 61, where lower estimates correspond to higher image quality. The calculated sunglint percentages (x1) vary from 0.02% to 3.56%, the turbidity percentages (x2) from 0.02% to 2.77%, the variability of the image textures (x3) from 2.70 to 3.06, and the BRISQUE estimations (x4) from 3.34 to 35.67. Eight of the overall estimates are lower than the average value and correspond to the higher-quality images.

Table 2. The calculations of the image quality equation per variable and the overall quality estimates per image (columns: the x1-x4 values, their ascending ranks, and the overall estimate; the lower the estimate, the higher the quality of the image).

Most of the lower-quality images combine two to four high values of the variables. Considering the estimates of the lower-quality images per variable, it is observed that they almost equally affect the overall quality of the images. According to the toolbox forecasts, the higher-quality images were captured at wind speeds from 1 to 3 m/s, with cloud coverage from 0% to 25%, and at morning and afternoon acquisition times in the spring season, with noon hours included in the winter season. The lower-quality images were captured at wind speeds from 1 to 5 m/s, combined with high cloud coverage, at morning to afternoon acquisition times in the spring season and mostly noon hours in winter.
The higher-quality image (left) and the lower-quality image (right), as calculated by the image quality equation, are shown in Figure 6. The higher-quality image was captured on 19 February 2021 at 12.00, and its quality estimate is 12, while the lower-quality image was acquired on 21 March 2020 at 13.00, and its quality estimate is 61. On the first date, the sunglint and turbidity percentages are very low, the texture of the image is smooth, and the BRISQUE estimation is 4.4, the fourth-lowest naturalness estimation of the dataset. The sea surface is calm, and the illumination of the seabed is sufficient for the distinction of marine habitats. On the second date, the sunglint covers 2.81% of the image, the sea surface texture is rough, and the BRISQUE estimation is 28.85, one of the highest image naturalness estimates of the dataset. The sea surface conditions prevent clear visibility of the seabed, affecting the mapping of the habitats.

The biserial correlation showed a high association between the toolbox suggestions and the image quality estimations. The negative linear relationship between the two variables was significant, as the coefficient r is −0.84 (df = 18, p < 0.05). The linear regression (Figure 7) shows that as the independent variable (overall estimates) increases, the dependent variable (toolbox suggestions) tends to decrease. In our study, this means that higher estimates (lower-quality images) correspond to the zero value of the toolbox suggestions (non-optimal acquisition times).
Discussion and Conclusions
The effect of UAS limitations on the quality of the acquired imagery and the accuracy of marine habitat mapping is widely known and reported in the literature [27,30,32,34]. Considering these limitations, we proposed the UASea as a toolbox for improving flight planning, the quality of the acquired data, and the mapping of marine habitats. The calculations of the UASea contribute to the detection of the optimal flight conditions for data acquisition in the marine environment. This is achieved by using the parameters that have been proved to significantly affect the quality of the acquired data, adapt thresholds on their values, and calculate the optimal survey times, influencing crucial decisions in the coastal environment. The challenges and limitations of UAS data acquisition in the marine environment are being overcome by the toolbox suggestions. The UASea is a valuable tool for efficient surveys that also contributes to the reduction of fieldwork costs, survey times, and time spent in the analysis and processing of unsatisfactory acquired data.
The dataset of images acquired in both optimal and non-optimal conditions enhanced the information on the conditions that affect the toolbox suggestions and the quality of the images. In total, 40% of the toolbox suggestions were positive, and the corresponding image quality was higher than the average value, while 60% of the lower-quality images were acquired at times with high wind speeds and present high variability in their texture caused by the wavy sea surface. Forty percent of the low-quality images were acquired at times with cloud coverage higher than 25% and/or sun angles lower than 25 or higher than 45 degrees, which may cause shading of the seabed, lack of adequate lighting, and sunglint presence on the sea surface [25,28,30,36,50]. Considering these results, the sea surface conditions and the illumination of the seabed are the parameters that most affect the image quality.
The suggested acquisition times for sunglint avoidance while also achieving proper seabed lighting are early in the morning and late afternoon [11,34]. The higher-quality images were acquired at morning hours until 11.00 a.m. and at afternoon times from 4.00 p.m. to 7.00 p.m. in the spring-summer season, which corresponds to the suggested sun elevation angles (from 25 to 45 degrees). However, it is observed that the acquisition times differ in winter, where images acquired at noon hours have high image quality due to the sun elevation angles, which are lower during this season. This means that the suggested elevation angles may not be applicable to winter acquisition times. In general, the adaptive thresholds of the UASea ruleset seem to perform well in different seasonal conditions and acquisition times. Allowing the user to change the suggested variable thresholds makes the toolbox more adaptable and its suggestions more accurate for each application.
It is important to mention that the UASea was initially developed for scientific purposes in the marine environment; however, it can be effectively used as a toolbox for implementing UAS surveys in different environments (e.g., urban, agricultural, inland waters, etc.) in environmental and ecological applications, such as detecting litter in the coastal zone [51,52] or floating marine litter [53], monitoring beach morphological changes [54,55], river habitat mapping [56], animal and wildlife monitoring [35,57], considering the respective weather parameters.
The correlation of the toolbox suggestions with the image quality estimations showed a high linear association between the two variables; most of the positive toolbox suggestions as optimal acquisition times match the images with the higher quality. The validation of the toolbox proved that UAS surveys at the suggested optimal acquisition times result in high-quality images. UAS, as a widely used tool in high-resolution mapping of coastal areas, combined with the proposed toolbox, results in the acquisition of high-quality imagery. The significance of optimal UAS acquisition times advances the UASea as the optimum tool for overcoming the limitations that affect the quality of the acquired imagery. UASea is a user-friendly and promising toolbox that can be used globally by researchers, engineers, environmentalists, and NGOs for efficient mapping, monitoring, and management of the coastal environment, for ecological and environmental purposes, exploiting the existing capability of UAS in marine remote sensing.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author, Michaela Doukari, upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
Fruit consumption and physical activity in relation to all-cause and cardiovascular mortality among 70,000 Chinese adults with pre-existing vascular disease
Objectives To assess the associations of fresh fruit consumption and total physical activity with all-cause and cardiovascular mortality among Chinese adults who have been diagnosed with cardiovascular disease (CVD) or hypertension. Methods During 2004-08, the China Kadoorie Biobank study recruited 70,047 adults, aged 30-79 years, with physician-diagnosed stroke or transient ischaemic attack, ischemic heart disease, or hypertension. Information on diet and physical activity was collected using an interviewer-administered electronic questionnaire. Cox regression was used to yield hazard ratios (HRs) for the independent and joint associations of fresh fruit consumption and total physical activity with mortality. Results At baseline, 32.9% of participants consumed fresh fruit regularly (i.e. >3 days/week) and the mean total physical activity was 15.8 (SD = 11.8) MET-hr/day. During ~7 years of follow-up, 6569 deaths occurred, 3563 of them from CVD. Compared to participants with <1 day/week fruit consumption, regular consumers had HRs (95% CI) of 0.84 (0.79-0.89) for all-cause mortality and 0.79 (0.73-0.86) for CVD mortality. The HRs for the top vs bottom tertile of physical activity were 0.68 (0.64-0.72) and 0.65 (0.60-0.71), respectively, with no clear evidence of reverse causality. After correcting for regression dilution, each 100 g/day usual consumption of fresh fruit or 10 MET-hr/day usual level of physical activity was associated with 23-29% lower mortality. The combination of regular fruit consumption with the top third of physical activity (>16.53 MET-hr/day) was associated with about 40% lower mortality. Conclusion Among Chinese adults with pre-existing vascular disease, higher physical activity and fruit consumption were both independently and jointly associated with lower mortality.
Introduction

Despite the progressive decline in age-standardised adult mortality over the last half century, cardiovascular disease (CVD) remains a major cause of death worldwide [1]. Individuals with pre-existing CVD are at particularly increased risk of premature death. Current guidelines for secondary CVD prevention generally recommend a healthy lifestyle, particularly a diet rich in fresh fruit and vegetables and regular physical activity [2,3]. Such recommendations, however, are mainly based on data either from studies in general populations that were largely free of CVD at the start of the study [4,5] or from relatively short-term rehabilitation trials [6][7][8].
There is currently insufficient high quality data showing the long-term effects of fresh fruit consumption and physical activity on mortality among individuals with pre-existing CVD, or hypertension. For practical reasons, large randomized intervention trials of lifestyle changes are difficult to conduct, particularly in low-and middle-income countries, such as China [9]. Well-performed large-scale population-based prospective cohort studies can help to assess the potential long-term health impacts of diet and physical activity among people with pre-existing vascular disease [10,11].
In the China Kadoorie Biobank (CKB) study [12], both fresh fruit consumption [4] and total physical activity [13] have been strongly and inversely associated with CVD mortality in people without CVD at baseline. The current analysis explored their relationships with all-cause and CVD mortality among people who had previously been diagnosed with CVD or hypertension. Including hypertensive CVD-free participants allowed us to compare the associations between individuals with and without manifest CVD at baseline, and thus to gain more insight into the potential effect of reverse causality (i.e. individuals with CVD may be less likely to engage in physical activity because of their disease) [14].
Study population
Details of the CKB design, survey methods, and participant characteristics have been reported previously [12]. Briefly, the baseline survey was conducted in 10 geographically diverse regions (5 urban and 5 rural) of China, chosen to cover a wide range of risk exposures and disease patterns, all with good-quality death and disease registries and local capacity. Between June 2004 and July 2008, all permanent residents aged 35-74 years with no severe disability were invited to participate in the study, and about one in three responded. Overall, 512,891 were recruited, including a few slightly outside the targeted age range (30-34 or 75-79 years), and all provided written informed consent. Ethics approval was obtained from the Oxford University Tropical Research Ethics Committee (OXTREC), Chinese Academy of Medical Sciences Ethical Review Committee, Chinese Center for Disease Control and Prevention (China CDC) Ethical Review Committee, and the scientific review boards in each of the 10 regional centres (i.e. CDCs in Qingdao, Heilongjiang, Hainan, Jiangsu, Guangxi, Sichuan, Gansu, Henan, Zhejiang and Hunan).
Among the CKB participants, at baseline 23,129 reported having physician-diagnosed CVD (i.e. either ischaemic heart disease (IHD), stroke or transient ischaemic attack [TIA], or both) and another 48,562 participants reported having hypertension. After excluding those individuals who reported either zero physical activity (n = 1464) or being disabled (i.e. were unable to or had very limited ability to engage in physical activity, n = 180), the present analysis included 70,047 participants, of which 22,107 had CVD.
Data collection
At local assessment clinics, trained health workers administered a laptop-based questionnaire on socio-demographic status, smoking, alcohol consumption, diet, physical activity, and personal and family medical history, and measured height, weight, blood pressure, etc. Dietary data covered 12 major food groups (including rice, wheat, other staple foods, red meat, poultry, fish, eggs, dairy products, fresh fruit, fresh vegetables, soybean, and preserved vegetables), with frequency of intake in 5 categories (daily, 4-6 days/week, 1-3 days/week, monthly, or never/rarely) [4]. Information about the type, frequency, and duration of occupational, commuting-related, household, and active recreational (leisure-time) physical activities was used to calculate total physical activity in MET hours per day (MET-hr/day) [15]. Following the completion of the baseline survey, two resurveys were undertaken in 2008 and 2013-14 among a randomly selected ~5% of surviving participants, using similar procedures. In the second resurvey, in addition to the consumption frequency, information on the amount consumed was also collected, enabling the estimation of average consumption for each baseline fresh fruit category and the correction for regression dilution bias (S1 Table) [4,16].
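A minimal sketch of this MET-hr/day aggregation, summing (MET value x daily hours) over activity domains; the activity names and compendium-style MET coefficients below are illustrative, not the exact CKB coding.

```python
# Total physical activity in MET-hr/day as the sum of (MET value x daily hours)
# over activity domains. MET values below are illustrative assumptions.
ACTIVITY_METS = {
    "farming":         4.5,
    "walking_commute": 3.3,
    "housework":       2.5,
    "tai_chi":         3.0,
}

def total_met_hours(daily_hours):
    """daily_hours maps activity name -> hours/day averaged over the week."""
    return sum(ACTIVITY_METS[a] * h for a, h in daily_hours.items())

participant = {"farming": 2.0, "walking_commute": 0.5,
               "housework": 1.5, "tai_chi": 0.5}
print(f"{total_met_hours(participant):.1f} MET-hr/day")  # -> 15.9, near the cohort mean
```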
Mortality follow-up
Vital status of each participant was obtained periodically through China CDC's Disease Surveillance Points (DSP) system [17], and checked annually against local residential records, health insurance records, and by active confirmation through street committee or village administrators. In each area, the DSP system provides complete and reliable death registration, in which almost all deaths were medically certified. For the few (~5%) deaths without relevant medical attention prior to death, standardized procedures were used to determine probable causes of death from symptoms or signs described by informants (usually family members). Trained DSP staff coded all diseases on death certificates and assigned an underlying cause using ICD-10. The information entered into the CKB follow-up system (including scanned images of original death certificates) was reviewed centrally by study clinicians, blinded to baseline information [12]. For the current study, the main outcome measures were all-cause mortality and CVD mortality (ICD-10: I00-I25, I27-I88 & I95-I99). Follow-up time of each participant was calculated from the date of enrollment until death, loss to follow-up (n = 436, 0.6%) or censoring date (31 Dec 2013).
Statistical analysis
Multiple linear (for continuous outcomes) or logistic regression (for binary outcomes) were used to compare age-, sex-, and region-adjusted means (standard deviations) or percentages of various baseline characteristics by levels of fresh fruit consumption and total physical activity and by type of baseline disease. Cox regression analysis, stratified by age-at-risk (5-year intervals), sex, region (10 study areas), and baseline CVD status, was used to calculate the hazard ratios (HRs) and 95% confidence intervals (CIs) for mortality by fruit consumption or physical activity, adjusting for education, annual household income, smoking, alcohol, consumption of meat, dairy products and preserved vegetables (used as a proxy marker of salt consumption), survey season, family history of CVD, use of CVD medication, and poor health status (defined as either poor self-rated general health or usually becoming short of breath or having to slow down due to chest discomfort when walking on level ground). Fruit consumption and physical activity were also mutually adjusted for each other. The proportional hazards assumption was fulfilled, as similar HRs were observed in the first and second halves of follow-up. In order to investigate their joint associations with mortality, participants were classified into 6 categories according to fruit consumption (>3 days/week or not) and physical activity (in tertiles), and the same Cox regression models as described above were used. In all these analyses, the floating absolute risk method was used to provide the variance of the log risk for each group (including the reference group) to facilitate comparisons between different exposure groups [18]. Using the fruit consumption data collected at baseline and the two resurveys, we estimated the mean usual fruit consumption for each baseline consumption group (S1 Table) and assigned these mean values to each individual participant in order to estimate the regression dilution bias-corrected HRs (95% CIs) for mortality per 1 daily portion [4,19,20]. The regression dilution ratio for physical activity was derived from the correlation coefficient between physical activity estimated at baseline and at the first resurvey, which was 0.54. The linear associations of each 10 MET-hr/day of physical activity with mortality were corrected for regression dilution bias by dividing the log HRs and 95% CIs by this regression dilution ratio.
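A stripped-down sketch of the modelling pipeline just described, assuming the lifelines package: a stratified Cox fit followed by the regression-dilution correction (log HR divided by the ratio of 0.54). The column names and simulated data are placeholders, not CKB variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Toy data standing in for the cohort; column names are placeholders.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "time":      rng.exponential(7, n),      # years of follow-up
    "died":      rng.integers(0, 2, n),      # event indicator
    "fruit_reg": rng.integers(0, 2, n),      # regular consumer (>3 days/week)
    "pa_met":    rng.gamma(2, 8, n),         # MET-hr/day
    "smoker":    rng.integers(0, 2, n),
    "region":    rng.integers(0, 10, n),
    "sex":       rng.integers(0, 2, n),
})

# Stratified Cox model (here stratified by region and sex only for brevity).
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="died", strata=["region", "sex"])

# Regression-dilution correction: divide the log HR by the ratio (0.54),
# then exponentiate to get the HR per 10 MET-hr/day of *usual* activity.
rdr = 0.54
log_hr_per_10 = cph.params_["pa_met"] * 10
print(np.exp(log_hr_per_10 / rdr))
```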
In order to investigate the potential influence of reverse causality on the associations of fruit consumption and particularly physical activity with mortality, stratified analyses by baseline CVD status were performed. The statistical significance of effect modification by baseline CVD status was examined by including an interaction term in the Cox regression analyses. In order to further explore the impact of reverse causality and assess the robustness of the findings, sensitivity analyses were performed by excluding the first 2 years of follow-up, participants with poor health status at baseline and those with prevalent cancer (n = 180), and participants with prevalent diabetes (either self-reported physician-diagnosed or screen-detected, n = 10,074). Moreover, additional adjustments were also made for other dietary variables (e.g. fresh vegetables and whole-grain staple foods); participants (n = 109,682) who had no self-reported prior history of hypertension or CVD at baseline but had measured SBP/DBP of >140/90 mmHg were also included; and analyses were also conducted on non-fatal CVD hospitalizations (collected through linkages with disease registries and health insurance databases [4]). All statistical analyses were performed using SAS (version 9.2), and figures were created using R version 3.0.2.
Results
The mean (SD) baseline age was 58.9 (9.3) years, 60.3% of the participants were women, and 54.4% came from urban areas ( Table 1). The mean (SD) total physical activity level was 15.8 (11.8) MET-hr/day and 32.9% consumed fresh fruit regularly (>3 days/week).
Fruit consumption and physical activity were inversely related to each other. Participants with higher fruit consumption were more likely to be younger, women and urban residents, had higher education and income, and less likely to be current smokers and alcohol drinkers.
In contrast, participants with higher levels of physical activity were more likely to be rural residents, current smokers, and current drinkers. Compared with participants consuming fruit <1 day/week, regular consumers had 0.6 kg/m² higher body mass index (BMI), 4.3 mmHg lower systolic blood pressure (SBP), and 1.1 mmHg lower diastolic blood pressure (DBP). Physical activity was inversely correlated with BMI (0.4 kg/m² lower in the top vs bottom tertile) but had no clear association with blood pressure. Participants with stroke or TIA were more likely to be men, to have lower education, income, fruit consumption and physical activity, and to have a higher prevalence of diabetes and poor health status (S2 Table). During ~0.5 million person-years of follow-up, 6569 participants died between the ages of 35 and 79 years, mainly from CVD (~54%), cancer (~24%) and respiratory disease (~7%) (S3 Table). Both fruit consumption and physical activity were significantly and inversely associated with all-cause mortality and CVD mortality (Fig 1). Overall, regular fruit consumption was associated with 16% lower risk of all-cause mortality (HR 0.84, 0.79-0.89) and 21% lower CVD mortality (HR 0.79, 0.73-0.86), with 1 daily portion (100 grams) of usual consumption associated with HRs of 0.77 (0.70-0.86) and 0.71 (0.61-0.83), respectively. For physical activity, the top tertile was associated with 32% lower all-cause mortality (HR 0.68, 0.64-0.72) and 35% lower CVD mortality (HR 0.65, 0.60-0.71), as compared with the bottom tertile. Each 10 MET-hr/day of usual physical activity was associated with HRs of 0.75 (0.71-0.80) and 0.71 (0.65-0.76), respectively.
In stratified analyses, fruit consumption showed a similar association with mortality in participants with and without baseline prevalent CVD (Fig 2a & 2b). Compared with participants who consumed fruit <1 day/week, regular consumers had 13% lower all-cause mortality (HR 0.87, 95% CI: 0.80-0.94) and 14% lower CVD mortality (HR 0.86, 0.77-0.96) among those with baseline CVD, whereas the corresponding HR differences among those without baseline CVD were slightly larger, at 16% (HR 0.83 vs 0.67) and 20% (0.72 vs 0.52), respectively. For physical activity, the associations with mortality were stronger in participants with baseline CVD, with the top third having 38% lower all-cause mortality (HR 0.62, 0.57-0.67) and 46% lower CVD mortality (HR 0.54, 0.48-0.60). The corresponding HR differences in those without baseline CVD were only 22% (HR 0.50 vs 0.72) and 19% (HR 0.39 vs 0.58), respectively. However, the regression lines of usual physical activity with mortality for these two participant groups converged in a log-linear manner (Fig 3a & 3b). Fig 4 shows the joint associations of fruit consumption and physical activity with mortality. Higher fruit consumption was associated with 7-12% lower risk of all-cause mortality and 4-17% lower risk of CVD mortality at each level of physical activity. For both all-cause and CVD mortality, the HR differences between regular and non-regular fruit consumers appeared slightly larger among people with lower physical activity. Compared with the least healthy group, i.e. those participants in the lowest tertile of physical activity who also did not consume fruit regularly, any increase in fruit consumption or physical activity was associated with somewhat lower risk of mortality; the risk was 41% lower for all-cause mortality (HR 0.59, 0.52-0.67) and 40% lower for CVD mortality (HR 0.60, 0.50-0.72) in the most healthy group, i.e. those regular fruit consumers who also had the highest levels of physical activity. The differences in usual levels of physical activity and fruit consumption between these two extreme groups were approximately 11 MET-hr/day (9 vs 20) and 60 grams per day, respectively.
Interestingly, the association of fruit consumption with mortality tended to become stronger with increasing level of SBP, whereas the converse was found for the association of physical activity with mortality, which became weaker, although it remained statistically significant (S1 and S2 Figs). The association between fruit consumption and mortality tended to be stronger in rural than in urban areas, but the associations for physical activity were largely consistent among subgroups. As shown in S4 Table, the HRs differed little between people with IHD and those with stroke or TIA at baseline, although the latter group had much lower levels of both fruit consumption and physical activity. None of the sensitivity analyses materially altered the observed associations (S5 Table). The associations with non-fatal CVD hospitalization were also concordant with the results from the main analyses of mortality (S6 Table).
Discussion
In this prospective investigation of over 70,000 Chinese adults with a prior history of CVD or hypertension, both fresh fruit consumption and total physical activity were associated with lower all-cause and CVD mortality. These associations were broadly consistent across various subgroups of participants. Moreover, the observed inverse associations did not appear to be due to reverse causality. Jointly, the combination of 60 g/day more usual fruit consumption and 11 MET-hr/day higher usual physical activity was associated with 40% lower mortality.

Fig 1. Adjusted HRs for all-cause and CVD mortality by usual levels of fresh fruit consumption and physical activity. a) and b) show the associations of fresh fruit consumption with all-cause and CVD mortality, and c) and d) show the associations of physical activity with all-cause and CVD mortality. Analyses were stratified by age-at-risk, sex, region, and baseline CVD status, and adjusted for education, income, smoking, consumption of alcohol, dairy products, meat and preserved vegetables, survey season, diabetes status, family history of CVD, CVD medication, poor health status, and fruit consumption or physical activity, where appropriate. The boxes represent hazard ratios, with size inversely proportional to the variance of the logarithm of the hazard ratio, and the vertical lines represent 95% confidence intervals. The numbers above the vertical lines are point estimates for hazard ratios, and the numbers below the lines are numbers of events. The x-axis location of each box corresponds to the group average of usual fruit consumption or usual physical activity for each category of participants. https://doi.org/10.1371/journal.pone.0173054.g001

In the context of primary prevention, both fresh fruit consumption and physical activity have been associated with lower risk of CVD incidence and mortality in our own [4,13] and other, mainly Western, studies [21-24]. Very few observational data, however, have demonstrated such associations in people with prevalent vascular disease such as CVD [11,25-28] and hypertension [29]. For example, in a secondary analysis of data from a trial involving more than 11,000 Italians with myocardial infarction, fruit consumption on more than 1 day per week was associated with a 27% (2-46%) lower risk of all-cause mortality compared with those who never or almost never consumed fruit [25]. This association would be close to the 16% difference observed in our study if the lowest two consumption groups were combined, given that the never/almost never consumption group included only 55 out of 1658 deaths. In the EPIC-Elderly study of 2671 participants with myocardial infarction, consumption of fruits and nuts was significantly and inversely associated with mortality, with each 180 g/day associated with 12% lower mortality [28]. Within the CKB, higher fresh fruit consumption has been associated with lower mortality among people with prevalent diabetes [30]. The consistent findings in the current study (even after excluding participants who also had diabetes at baseline, S5 Table) reinforce the potential health benefit of fresh fruit consumption in people with cardiometabolic diseases.
For physical activity, most previous studies have tended to focus on leisure-time exercise rather than total physical activity, which also includes activities related to work, commuting and household chores. For instance, in a US cohort study of ~4000 IHD participants, taking physical exercise at least 4 times per week was associated with a 29% (14-41%) lower risk of mortality compared with those who did not exercise [11]. To the best of our knowledge, no previous study has examined the long-term health effects of total physical activity among people living with vascular disease. Compared with people in high-income countries, people in low- and middle-income countries, such as China, take much less leisure-time exercise, with occupational and household activity accounting for a much larger proportion of total physical activity [31]. Although numerous rehabilitation trials have confirmed the benefits of structured aerobic exercise in people at the post-acute stage of CVD [8,32,33], there is a lack of data on unstructured or other types of physical activity. The dose-response relationship between total physical activity and mortality observed in the present study accords with the consensus that greater health benefits can be achieved by increasing physical activity among people who are physically less fit [34,35]. In other words, the steeper inverse associations in participants with baseline CVD, as compared with those in participants with hypertension only (CVD-free) at baseline, should be attributed to their relatively lower level of usual physical activity rather than to reverse causality.

Fig 3. Adjusted HRs for all-cause and CVD mortality by total physical activity, stratified by baseline CVD status. Analyses were stratified by age-at-risk, sex and region, and adjusted for education, income, smoking, consumption of alcohol, dairy products, meat and preserved vegetables, survey season, diabetes status, family history of CVD, CVD medication, poor health status, and fruit consumption. Convention as in Fig 1. Black boxes are for participants with baseline prevalent CVD and grey boxes for those without prevalent CVD at baseline.
Including a group of participants with hypertension but no manifest CVD was a unique strength of the current study, as it afforded us the opportunity to investigate the potential influence of reverse causality [36]; few previous publications have investigated this important issue. Other major strengths of our study include the large number of community-dwelling patients diagnosed with vascular diseases; the high completeness of follow-up; the detailed information on general health status at baseline, which allowed us to perform detailed adjustment and sensitivity analyses to further explore the potential impact of reverse causality; and the repeated measures of exposures during follow-up in a random sample of surviving participants, which enabled us to correct for regression dilution bias [37,38].
This study also has several limitations. First, the information on fruit consumption and physical activity was collected using a general questionnaire, which has not been validated against objective measures. However, our previous work provides some indirect evidence of validity [4,15,31]. Second, baseline prevalent disease status was self-reported, and we had no further information to confirm, refute or sub-classify these diseases. However, a high specificity of such self-reported CVD and hypertension status can be expected [39-41]. Third, there may be some selection bias, because our baseline survey did not include people who were unable to attend the assessment clinics (e.g. due to severe health conditions caused by CVD). Fourth, although we attempted to deal with all potential confounders in our analyses, our results may still be subject to residual confounding from unknown and unmeasured factors.
In summary, our findings concur with previous data, mainly from general populations, regarding the potential benefits of fresh fruit consumption and physical activity in preventing overall and cardiovascular death [42]. As the population ages, the prevalence of vascular disease will greatly increase in China and elsewhere. Although these high-risk individuals may have received health education messages encouraging lifestyle changes, the prevalence of unhealthy behaviours such as smoking, alcohol consumption, overweight, and uncontrolled hypertension remains high, as seen in the current study and in previous results [43]. This poses a major challenge to public health professionals as well as to clinicians and health-care systems. In addition to pharmacological therapy, guidelines for this high-risk population should also integrate advice on diet and physical activity, while paying attention to other key CVD risk factors, such as smoking, diabetes and uncontrolled hypertension.
Supporting information

S1 Fig. Adjusted HRs of 1 daily portion of fresh fruit consumption associated with all-cause and CVD mortality by subgroups of participants. Analyses were stratified by age-at-risk, sex, region, and baseline CVD status, and adjusted for education, income, smoking, consumption of alcohol, dairy products, meat and preserved vegetables, survey season, diabetes status, family history of CVD, CVD medication, poor health status, and physical activity, where appropriate. The black boxes represent HRs and the horizontal bars represent their confidence intervals. The open diamonds represent the overall estimates of HRs and their confidence intervals. 1: HR after correcting for regression dilution bias; 2: HR before correcting for regression dilution bias. (DOCX)

S2 Fig. Adjusted HRs of 10 MET-hr/day physical activity associated with all-cause and CVD mortality by subgroups of participants. Analyses were stratified by age-at-risk, sex, region, and baseline CVD status, and adjusted for education, income, smoking, consumption of alcohol, dairy products, meat and preserved vegetables, survey season, diabetes status, family history of CVD, CVD medication, poor health status, and fruit consumption, where appropriate. The boxes represent hazard ratios and the horizontal bars represent their confidence intervals. The open diamonds represent the overall estimates of HRs and their confidence intervals. 1: HR after correcting for regression dilution bias; 2: HR before correcting for regression dilution bias. (DOCX)

S1 Table. Calculation of usual fruit consumption using data from the 1st and the 2nd resurvey (n = 2690). * The mean daily portion number came from the 2nd resurvey data, used as a proxy for the baseline mean daily portion. † The usual intake amount for each group was estimated by taking into account changes in consumption frequency between baseline and the 1st resurvey, using the formula Un = … (DOCX)

S2 Table. Baseline characteristics of participants by baseline prevalent disease*. Values are either percentages or means (SD) and were adjusted for age, sex, and study area where appropriate. * The stroke group included all participants with self-reported physician-diagnosed stroke, of whom 1142 also had IHD; the IHD group included those with self-reported IHD but not stroke; the hypertension group included participants with self-reported hypertension but without stroke or IHD. † In men, the proportion of current smokers was 47.9% and the proportion of current drinkers was 27.0%; the corresponding proportions in women were 2.5% and 1.5%, respectively. ‡ Regular consumption means consuming the food product on at least 4 days per week. ¶ Overweight was defined as BMI ≥ 24 kg/m², and uncontrolled hypertension as SBP ≥ 140 mmHg or DBP ≥ 90 mmHg or both. § Includes aspirin, statins, calcium antagonists, beta-receptor blockers, ACE-inhibitors, diuretics or other unspecified drugs. ¥ Either self-rated poor health or a reported low walking capacity. (DOCX)

S4 Table. Separate associations of fresh fruit consumption and physical activity with all-cause and CVD mortality in people with prevalent stroke and those with IHD at baseline. IHD: ischaemic heart disease; TIA: transient ischaemic attack. * 1142 participants also had IHD. Analyses were stratified by age-at-risk, sex, region, and baseline CVD status, and adjusted for education, income, smoking, consumption of alcohol, dairy products, meat and preserved vegetables, survey season, diabetes status, family history of CVD, CVD medication, poor health status, and fruit consumption or physical activity, where appropriate. (DOCX)

S5 Table. Results from sensitivity analyses. Analyses were stratified by age-at-risk, sex, region, and baseline CVD status, and adjusted for education, income, smoking, consumption of alcohol, dairy products, meat and preserved vegetables, survey season, diabetes status, family history of CVD, CVD medication, poor health status, and fruit consumption or physical activity, where appropriate. (DOCX)

S6 Table. Fresh fruit consumption and physical activity in relation to non-fatal CVD events. Analyses were stratified by age-at-risk, sex, region, and baseline CVD status, and adjusted for education, income, smoking, consumption of alcohol, dairy products, meat and preserved vegetables, survey season, diabetes status, family history of CVD, CVD medication, poor health status, and fruit consumption or physical activity, where appropriate. (DOCX)
Comparative analysis of differentially expressed miRNAs in leaves of three sugarcane (Saccharum officinarum L.) cultivars during salinity stress
Sugarcane is an important industrial crop cultivated mostly in arid and semi-arid regions. Due to climate change and anthropogenic activities, sugarcane fields are prone to damage as a result of salt deposition, and the consequences of this phenomenon are becoming a major threat to sugarcane cultivation. To address this issue, the identification of salinity-tolerant cultivars would be a suitable strategy to minimize yield loss in the area. It is well known that the expression of abiotic stress-responsive genes, including noncoding microRNAs (miRNAs) and their coding targets, can lead to enhanced stress tolerance in crops. Therefore, studying the expression of these noncoding and coding genes under stress conditions is an appropriate approach for screening tolerant cultivars. In addition, examining the expression of miRNA target genes can provide deeper insight into the molecular stress mechanism and facilitate the identification of tolerant cultivars. We aimed to assess the expression of nine candidate miRNAs and their corresponding target genes among the studied sugarcane cultivars under high-salinity conditions, leading to the identification of the salt-tolerant cultivar. To achieve this goal, a two-factorial experiment with three sugarcane cultivars (CP-48, CP-57, CP-69) and two salinity levels (0 and 8 dS/m) was conducted. The results indicated significant differences in expression among the miRNAs and also among their target genes. The largest reduction in miRNA expression occurred for miR160, while the smallest appeared for miR1432. The data also indicated that the highest and lowest expression of target genes occurred for the targets of miR160 and miR393, respectively. Among the studied cultivars, CP-57 showed poor performance, while CP-69 expressed superior tolerance to salt stress. Taken together, these results suggest that monitoring miRNA expression can provide a new approach for screening well-adapted cultivars under saline conditions. Such an approach would be an appropriate solution to combat plant stress in high-salinity regions and soils. Our results indicate that miR160, associated with sugarcane tolerance to salt stress, can potentially be used as a biomarker of salt stress.
untreated sewage, which adds a huge quantity of salt ions. Nowadays, because of all the above-mentioned problems, salinity is an ever-increasing issue in sugarcane fields, resulting in reduced sugarcane production in the region in terms of both dry matter and sucrose content. The desalination of farming soils requires significant time, labor and energy inputs, which may cause serious economic and social damage in the region [3]. It is known from extensive studies that there is ample variation in salinity tolerance among cultivars of the same species [4]. Therefore, the development of highly salt-tolerant cultivars is an efficient way to tackle salinity problems in such regions. Such salt-tolerant plants are capable of adjusting morphological, physiological, biochemical and anatomical mechanisms in order to adapt to a high-salinity environment [5].
Recent studies have indicated that many genes are involved in the expression and synthesis of proteins related to abiotic stresses [6]. A large body of work in recent years has shown that plants can trigger a regulatory gene network, consisting of the expression of certain genes involved in transcriptional and translational regulation, to activate protection mechanisms that defend the plant in harsh environments [7]. Within these protection mechanisms, post-transcriptional regulation is a vital process for recovering and maintaining plant cell homeostasis during and after stress [6]. Recent research has shown that plants employ miRNAs as gene expression regulators at the post-transcriptional level to modulate growth and development under stress conditions [8].
MicroRNAs (miRNAs) are small RNAs of 18-25 nucleotides in length that function in RNA silencing and post-transcriptional regulation of gene expression [6]. In plants, miRNAs are involved in multiple processes, including organ development and plant responses to environmental stresses. It has been reported that sugarcane can activate complex network mechanisms that enable the plant to respond to environmental changes [9]. Notably, several miRNAs have been reported to show higher expression in salt-treated samples. In sugarcane, miR166III, 168II, 396II, 398II, 528I, 156V, 167V, 169III, 397II, 398I, and 159XVI have been found to be differentially expressed in response to moderate salt stress [10]. Most of the identified mRNAs targeted by these miRNAs are transcription factors involved mainly in plant development. Previous work has described the GAMyB, HAP12 and GRF transcription factors as validated targets of miR159XVI, miR169III and miR396II, respectively [11].
The main aim of this research is to identify a salt-tolerant cultivar for the region by monitoring the expression profiles of candidate miRNAs in the studied sugarcane cultivars under high-salinity conditions, and also to identify the best miRNAs for screening salt-tolerant sugarcane cultivars.
Plant materials, growth conditions, and salt stress treatments
Three Iranian commercial cultivars of Saccharum officinarum L. (CP-48, CP-57 and CP-69) were used in this study. Seeds of CP-48 (salt-sensitive), CP-57 (semi-salt-tolerant) and CP-69 (salt-tolerant) were purchased from the Sugarcane and By-products Development Company (Khuzestan, Ahvaz, Iran). The seeds were cultivated at the Experimental Research Station of the College of Agriculture, Shahid Chamran University of Ahvaz, Iran, in 50 × 50 m² pots filled with 2/3 low-EC soil and 1/3 sand in the glasshouse. A two-factorial experiment with three sugarcane cultivars (CP-48, CP-57, CP-69) and two salinity levels (0 and 8 dS/m), based on a complete block design with three replications, was conducted. The relation Total Dissolved Solids (mg/L) ≈ Electrical Conductivity (dS/m) × 800 was applied to calculate the amount of NaCl required to reach the desired EC. An amount of 6.4 g NaCl (Sigma-Aldrich, USA) was dissolved in a liter of distilled water. Seedlings at the six-true-leaf stage were subjected to salt stress by adding the NaCl solution gradually to achieve a final EC value of 8 dS/m. After 24 h of treatment, leaves with or without NaCl treatment were collected from CP-48, CP-57 and CP-69 seedlings and stored at −80 °C until RNA extraction.
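The arithmetic behind the 6.4 g/L figure follows directly from the TDS ≈ EC × 800 rule of thumb quoted above; a minimal sketch (the function name is ours):

def nacl_grams_per_liter(target_ec_ds_m, tds_factor=800):
    # TDS (mg/L) ~ EC (dS/m) * 800; convert mg to g.
    return target_ec_ds_m * tds_factor / 1000.0

print(nacl_grams_per_liter(8))  # -> 6.4 g NaCl per liter, as used here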
RNA extraction
Total RNA was extracted from the leaves of the three sugarcane cultivars (two independent biological replicates) using the RNeasy Plus Mini kit (Qiagen, Germany), followed by DNase treatment (Pars Tous Company, Iran) to remove genomic DNA. RNA concentration was quantified using a NanoDrop instrument (NanoDrop Technologies Inc., Wilmington, DE) and quality was examined by 1% agarose gel electrophoresis.
Analysis of miRNA expression by stem-loop qRT-PCR
For the miRNAs, cDNA was synthesized according to the protocol developed by Varkonyi-Gasic et al. (2007) [14] using RT primers (long stem-loop extension primers) (Table S1), following the manufacturers' instructions. The expression levels of nine miRNAs (miR160, miR164, miR172, miR390, miR393, miR408, miR529, miR827, miR1432) were determined using SYBR Green PCR Master Mix. The qRT-PCR was performed with three biological and two technical replicates. The thermal conditions for the miRNAs included an initial denaturation step at 95 °C for 10 min, followed by 32 cycles of 95 °C for 15 s, 58-60 °C for 30 s, and 72 °C for 30 s. To confirm the absence of primer dimers, a melting curve analysis over the range 60-95 °C was added after the amplification step. The designed primers are listed in Table S2.
Analysis of target gene expression by quantitative RT-PCR
For the potential target genes, cDNA synthesis was performed with a cDNA synthesis kit (Pars Tous Company, Iran) according to the manufacturer's instructions. The expression of nine target genes was assayed by qRT-PCR, following the protocols mentioned above, with gene-specific primers. The primers were designed using Primer Premier 6 software (http://www.premierbiosoft.com/primerdesign/) and are listed in Table S3. Reactions were performed at 95 °C for 10 min, followed by 32 cycles of 95 °C for 15 s, 55-60 °C for 30 s, and 72 °C for 30 s. GAPDH was used as a reference gene for normalizing target gene expression. The qRT-PCR data were analyzed using the 2^(−ΔΔCt) method. qPCR was carried out on an AB StepOnePlus real-time PCR thermal cycler, and the data were analyzed with the associated StepOnePlus Software v2.3 (Applied Biosystems, Carlsbad, California, USA).
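For reference, the 2^(−ΔΔCt) calculation can be written in a few lines; the Ct values below are invented for illustration and are not data from this study (GAPDH plays the role of the reference gene, as above):

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    # dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)
    dd_ct = ((ct_target_treated - ct_ref_treated)
             - (ct_target_control - ct_ref_control))
    return 2 ** (-dd_ct)

# A miRNA whose Ct rises under salt relative to GAPDH gives a fold
# change below 1, i.e. down-regulation (values are illustrative).
print(fold_change_ddct(27.0, 18.0, 24.0, 18.0))  # -> 0.125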
Experimental designs and statistical analysis
The experiment was conducted as a 3 × 2 factorial using three cultivars of S. officinarum (factor A) and two levels of stress (factor B) in a completely randomized design (CRD) with three replicates. The qRT-PCR results were compared by one-way analysis of variance (ANOVA).
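As a sketch of how the 3 × 2 factorial described above could be analysed (with a two-way rather than one-way layout); the data file and column names here are hypothetical:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per replicate: cultivar in {CP-48, CP-57, CP-69},
# salinity in {0, 8} dS/m, and the measured relative expression.
df = pd.read_csv("expression.csv")

model = ols("fold_change ~ C(cultivar) * C(salinity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction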
Results
The results showed that the transcript levels of the nine miRNAs and their target genes under salt stress depended significantly on genotype (P < 0.01) (Table S4A and S4B).
A comparison of miRNA expression showed that all nine miRNAs were differentially expressed during salinity stress in the three cultivars (P < 0.01) (Fig. 1). According to previous reports, miR160 is involved in the auxin response by targeting auxin response factor (ARF) genes [15]. In this study, the greatest degree of down-regulation in response to salt stress (9-fold change) was shown by miR160 in CP-57. In addition, miR164 and miR1432, whose targets encode transcription factors and transporters, showed the lowest degree of down-regulation in response to salt stress (Figs. 1 and 2). Furthermore, miR390, miR393 and miR408 showed a similar down-regulation response in CP-48, CP-57 and CP-69. qRT-PCR revealed that the expression levels of the miRNA target genes varied in sugarcane during the salt stress treatments, with profiles differing significantly depending on the cultivar [18]. Among the targets, the lowest expression was shown by the miR393 target gene EB-F box; miR393 targets transport inhibitor response (TIR) and EIN3-binding F-box protein (EBF) genes [16]. The expression of the miR160 target gene ARF18 showed a significant up-regulation (7-fold change) in all three sugarcane cultivars, although the increase in CP-48 was lower than the expression level in CP-69 (Fig. 3). It should be mentioned that, in addition to the above-mentioned target genes, the evaluated miRNAs can regulate several other target genes and are involved in various biological processes. In this study, the most acknowledged target genes of the studied miRNAs, including ARF17, NAC080, AP2, EBF1, LAC3 and SPL9, were selected for co-expression network analysis (Fig. 4). In addition, a co-expression network of NAC080 with the LAC3 and SPL9 genes was generated (Figure S1 and Table S5).

miR390, which targets ARF transcription factors [19], has been found with different expression in sugarcane. ARF is one of the targeted TFs involved in rooting, responses to drought and salinity stress, plant development, and auxin response and signaling [20]. This microRNA prepares the ground for Aux/IAA protein degradation by regulating the activity of the SCF E3 ubiquitin ligase. In the current study, the expression of miR160 was strongly decreased while the expression of the ARF gene increased. A possible explanation for this expression pattern is that it provides appropriate conditions for maintaining plant growth under salinity stress: it promotes elongation of the lateral roots, which facilitates water absorption under these limiting conditions.
In sugarcane, NAC TFs and ARF TFs have been validated as targets of miR164 and miR160, respectively [21]. It is now clear that the NAC TF family is involved in responses to abiotic stresses, including salinity and drought [22].
Discussion
High salinity is an increasingly important agricultural problem. Plant metabolism is affected by salt stress, and in recent years many studies have been devoted to understanding the molecular mechanisms of plant salt tolerance. Sugarcane cultivars differ in their responses to salt stress. Several miRNAs have been identified in different species, but only a few studies have analyzed their expression in response to salt stress in sugarcane.
In the present study, the results indicated that miR160, miR164, miR172, miR390, miR393, miR408, miR529, miR827, and miR1432 are implicated in stress caused by salt [15,17]. Analysis of the relative expression of all these miRNAs (Fig. 1) showed that miR160 exhibited the greatest expression change under NaCl treatment, whereas the others showed only small differences in expression compared with the control. The results indicated significant differences in the expression of the miRNAs and their targets; however, some miRNAs had significantly low expression.

Fig. 4 Co-expression network for the ARF17, NAC080, EBF1, AP2, LAC3 and SPL9 gene targets in response to salinity treatment. The 21 genes with the highest weight are shown in dark purple.
The miR408 targets the mRNAs of the BBP gene and laccase (LAC). The results showed that the expression of miR408 decreases under salinity stress while the expression of the BBP gene increases. Squamosa-binding protein (SBP) has roles in plant leaf development, the vegetative-to-reproductive phase transition, fruit development and gibberellin signal transduction. When plants encounter a harsh environment, such as abiotic stress, during their life cycle, they accelerate leaf development and the vegetative-to-reproductive transition. This event is controlled by molecular mechanisms and changes in gene expression, such as that of the SBP gene, whose expression is regulated by miR529 [29]. We also found opposite expression patterns for these three miRNAs (miR529, miR827 and miR1432) and their target genes (SPL9, SDP and CBP).
Under salinity stress, plant nutrition is disrupted, causing severe damage to plant growth. Phosphorus is very important for plant growth, and under salinity stress its absorption is disrupted. Plants have developed many physiological and molecular mechanisms to absorb P ions under salinity stress. One such mechanism is the activation of SPX-domain-containing proteins (SDP) under these conditions. These proteins are involved in the adjustment of P homeostasis, and by activating them under salinity stress plants cope with P starvation [30]. It has been shown that this gene is regulated by miR827 in plants. The results of this study showed decreased miR827 expression and increased SDP expression under salinity stress, consistent with previous studies in other plants.
Plants are sessile, and when they encounter stresses they adapt through changes within their cells and organelles. The perception of stresses and the responses to them are regulated by many signaling pathways in which numerous proteins are involved. One known protein that plays a role in signal transduction under stress is the calcium-binding protein (CBP), which is targeted by miR1432 in plants. Under stress, the cytosol fills with Ca²⁺; CBP then binds Ca²⁺ and provides the conditions to trigger many signaling pathways involved in regulating and responding to stresses such as salinity [31]. By decreasing its own expression, and thereby increasing CBP expression, miR1432 plays a role in the salinity response. In this study, the expression of miR1432 decreased while CBP expression increased.
The NAC TF family contributes to tolerance towards abiotic stresses such as salinity and drought. Studies have shown that NAC TFs are part of plant signal transduction and can induce many physiological mechanisms under stress. These TFs can directly bind to the promoters of genes involved in salinity and drought stress and induce their expression [23]. Recent studies have demonstrated the interaction between NAC TFs and miR164: under salinity stress the expression of miR164 decreases while the expression of NAC increases. This expression pattern reflects the effect of the plant signaling network and the ionic adjustment of homeostasis, leading to stabilization of plant growth under salinity stress [24].

The miR172, targeting AP2-like ethylene-responsive transcription factors [25], has been found with different expression in sugarcane. TFAP is involved in many cellular processes, such as the control of growth factors, development and apoptosis [26]. Key strategies for responding to abiotic stress, such as salinity, are to decrease growth and development and to activate the apoptosis mechanism. In this study, the expression of miR172 decreased and the expression of TFAP increased. This expression pattern occurs in order to limit growth and development and to activate the apoptosis mechanism.

Protein kinase genes (PLPRKs) are potential targets of miR390. Previous studies have shown that the expression of miR390 decreases under salinity stress while that of its target increases [27]. In this study, as in previous work, the expression of miR390 decreased while the expression of PLPRK increased. This causes the activation of many protein kinases and proteins involved in the salinity stress response.

In plants, there are many proteins that cause sensitivity under certain conditions, such as abiotic stress. Tolerant plants have mechanisms that induce the degradation of these kinds of proteins. The EBF gene is one of the genes involved in protein degradation and is a potential target of miR393 [28]. In this study, the expression of miR393 was strongly repressed by NaCl treatment, while the EB-F box gene increased in expression under this stress. One can argue that this increase in EB-F box serves to degrade the proteins that cause sensitivity in sugarcane cultivars under salinity stress.

Many studies have shown that under salinity stress the absorption of microelements, such as copper ions, is increased in plants. Since an excess of copper ions is toxic to plant cells, this ion is tightly controlled by molecular mechanisms. The BBP gene (Basic Blue Protein) is part of the molecular machinery involved in copper ion control; it has an important enzymatic role that specifically controls the level of copper ions under salinity stress. On the other hand, when plants are under abiotic stress such as high salinity, oxidative stress is induced. Under these conditions, enzymes like superoxide dismutase (SOD) are active and also control oxidative stress. The activation of the SOD enzyme is regulated by the BBP gene [28].

Conclusions

The results have shown significant differences in the expression of the nine miRNAs and their target genes under salinity stress compared with control conditions. The results also indicated significant differences between the three cultivars under salinity stress compared with control conditions. All miRNAs were down-regulated under salinity stress, while all target genes were up-regulated. However, owing to the distinguishable expression level of miR160 across the studied cultivars, miR160 can potentially be used for generating sugarcane tolerant to salt stress. The results obtained from this study may be used for manipulating the pathways related to salinity tolerance and for generating tolerant plants for environments with saline soils.
Classical solutions and higher regularity for nonlinear fractional diffusion equations
We study the regularity properties of the solutions to the nonlinear equation with fractional diffusion $$ \partial_tu+(-\Delta)^{\sigma/2}\varphi(u)=0, $$ posed for $x\in \mathbb{R}^N$, $t>0$, with $0<\sigma<2$, $N\ge1$. If the nonlinearity satisfies some not very restrictive conditions: $\varphi\in C^{1,\gamma}(\mathbb{R})$, $1+\gamma>\sigma$, and $\varphi'(u)>0$ for every $u\in\mathbb{R}$, we prove that bounded weak solutions are classical solutions for all positive times. We also explore sufficient conditions on the non-linearity to obtain higher regularity for the solutions, even $C^\infty$ regularity. Degenerate and singular cases, including the power nonlinearity $\varphi(u)=|u|^{m-1}u$, $m>0$, are also considered, and the existence of classical solutions in the power case is proved.
Here (−Δ)^{σ/2} = F^{−1}(|·|^σ F), where F denotes the Fourier transform, is the usual fractional Laplacian with 0 < σ < 2 and N ≥ 1. The constitutive function ϕ is assumed to be at least continuous and nondecreasing. Further conditions will be introduced as needed.
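For 0 < σ < 2 the Fourier definition above is equivalent to the classical singular-integral form, which we record here since it is this pointwise expression that makes the nonlocal character of the operator explicit (c_{N,σ} denotes the usual normalizing constant):
$$(-\Delta)^{\sigma/2}u(x) = c_{N,\sigma}\,\mathrm{P.V.}\!\int_{\mathbb{R}^N}\frac{u(x)-u(y)}{|x-y|^{N+\sigma}}\,dy.$$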
The existence of a unique weak solution to the Cauchy problem for equation (1.1) has been fully investigated in [16,17] for the case where ϕ is a positive power. The solution in that case is in fact bounded for positive times even if the initial data are not, provided they lie in a suitable integrability space. Such a theory can be easily extended to the case of more general functions ϕ; see Section 8 at the end of the paper for some details.
When ϕ(u) = u, the equation is the so-called fractional heat equation, which has been studied in a number of papers, mainly coming from probability. An explicit representation with a kernel allows one to show in this case that solutions are C^∞ smooth and bounded for every t > 0, x ∈ R^N, under the assumption that the initial data are integrable. In the nonlinear case such a representation is not available. Nevertheless, we will still be able to obtain that bounded weak solutions are smooth if the equation is "uniformly parabolic", 0 < c ≤ ϕ′(u) ≤ C < ∞.
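For later reference, the explicit representation in the linear case is the standard one: in Fourier variables the equation is solved by
$$\widehat{u}(\xi,t)=e^{-t|\xi|^{\sigma}}\,\widehat{u_0}(\xi), \qquad\text{that is,}\qquad u(\cdot,t)=P_\sigma(\cdot,t)*u_0,\quad \widehat{P_\sigma}(\xi,t)=e^{-t|\xi|^{\sigma}}.$$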
The precise regularity of the solution is determined by the regularity of the nonlinearity ϕ; see Section 5 for the details. Notice that the condition ϕ ′ > 0 together with the boundedness of u implies that the equation is uniformly parabolic.
The idea of the proof is as follows: thanks to the results of Athanasopoulos and Caffarelli [2], we already know that bounded weak solutions are C^α regular for some α ∈ (0, 1). In order to improve this regularity we write equation (1.1) as a fractional linear heat equation with a source term. This term is in principle not very smooth, but it has some good properties. To be precise, given (x_0, t_0) ∈ Q, we have
$$\partial_t u + (-\Delta)^{\sigma/2} u = (-\Delta)^{\sigma/2} f, \qquad f(x,t) := u(x,t) - \frac{\varphi(u(x,t))}{\varphi'(u(x_0,t_0))}, \tag{1.2}$$
after the time rescaling t → t/ϕ′(u(x_0, t_0)). It turns out, as we will prove in Sections 4 and 5, that solutions to the linear equation (1.2) have the same regularity as f.
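The algebra behind (1.2) is elementary but worth recording. Writing a_0 = ϕ′(u(x_0, t_0)) and rescaling time, the nonlinearity splits into a leading linear term plus the correction f:
$$\partial_t u + (-\Delta)^{\sigma/2}\Big(\frac{\varphi(u)}{a_0}\Big)=0, \qquad \frac{\varphi(u)}{a_0}=u-\Big(u-\frac{\varphi(u)}{a_0}\Big)=u-f.$$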
Next, using the nonlinearity, we observe that f in the actual right-hand side is more regular than u near (x_0, t_0); see formula (4.1). We are thus in a situation somewhat similar to the one considered by Caffarelli and Vasseur in [7], where they deal with an equation, motivated by the study of geostrophic flows, of the form
$$\partial_t u + (-\Delta)^{\sigma/2} u = v \cdot \nabla u,$$
where v is a divergence-free vector field. Comparing with (1.2), we see two differences: in their case σ = 1, and the source term is local. These two differences will significantly complicate our analysis.
In order to obtain the above-mentioned regularity for the solutions u to (1.2), we will use the fact that they are given by the representation formula
$$u(x,t) = \int_{\mathbb{R}^N} P_\sigma(x-z,t)\,u_0(z)\,dz + \int_0^t\!\!\int_{\mathbb{R}^N} \big((-\Delta)^{\sigma/2}P_\sigma\big)(x-z,t-s)\,f(z,s)\,dz\,ds,$$
where P_σ is the kernel of the σ-fractional linear heat equation; see Section 3 for a proof of this fact, which falls within the linear theory. Therefore, we are led to study the singular kernel A_σ(x, t) := (−Δ)^{σ/2} P_σ(x, t). Unfortunately, P_σ, and hence A_σ, is explicit only when σ = 1. However, using the self-similar structure of P_σ, we will be able to obtain the required estimates and cancelation properties for A_σ; see Section 2.
Singular and degenerate equations. The hypotheses made in Theorem 1.1 exclude all the powers ϕ(u) = |u|^{m−1}u for m > 0, m ≠ 1, since they are degenerate (m > 1) or singular (m < 1) at the level u = 0. Nevertheless, a close look at our proof shows that we may in fact obtain a "local" result, Theorem 7.1. Therefore, for these nonlinearities (and also for more general ones) we get a regularity result in the positivity (negativity) set of the solution, which implies that bounded weak solutions with a sign are classical; see Section 7.
Higher regularity. If ϕ is C^∞ we prove that solutions are C^∞. The result will be a consequence of the regularity already provided by Theorem 1.1 plus a result for linear equations with variable coefficients, Theorem 6.1, which has independent interest. The case σ < 1 is a little more involved, since we first have to raise the spatial regularity exponent from σ to 1. See more in Section 6, where higher regularity results depending on the smoothness of ϕ are given.
As a direct precedent of the present work, let us mention the paper [18], where we consider the nonlinearity ϕ(u) = log(1 + u) in the case σ = 1 = N, and prove that solutions with initial data in some L log L space become instantaneously bounded and C ∞ . Notice that in this case ϕ ′ (u) = 1/(1 + u), and hence the equation is uniformly parabolic.
We expect some of these ideas to have a wider applicability. We point out several possible extensions, together with some comments and applications of equation (1.1) in Section 9.
Let us remark that Kiselev et al. give a proof of C ∞ regularity of a class of periodic solutions of geostrophic equations in 2D with C ∞ data [13]. Their methods are completely different to the ones used in the present paper.
Kernel properties
In this section we consider two issues for the kernel A_σ = (−Δ)^{σ/2} P_σ which play an important role in the study of regularity, namely some estimates and a cancelation property. Before doing this, it will be convenient to introduce a Hölder space adapted to equation (1.1), together with appropriate notation. For simplicity, we will omit the subscript σ in what follows when no confusion arises. It will also be convenient to use the notation Y = (x, t) ∈ R^{N+1}.
The σ-distance and the associated Hölder space. The self-similar structure of P motivates the use of the σ-parabolic "distance"
$$|Y|_\sigma := |x| + |t|^{1/\sigma}, \qquad Y=(x,t)\in\mathbb{R}^{N+1},$$
the distance between two points being measured as |Y_1 − Y_2|_σ. This is not really a distance unless σ ≥ 1, since the triangle inequality does not hold if σ < 1. However, it is a quasimetric, with the relaxed triangle inequality
$$|Y_1 + Y_2|_\sigma \le c_\sigma\big(|Y_1|_\sigma + |Y_2|_\sigma\big).$$
This will be enough for our purposes. Accordingly, C^α_σ(Q) denotes the space of functions f satisfying |f(Y_1) − f(Y_2)| ≤ C|Y_1 − Y_2|_σ^α, that is, functions which are C^α in space and C^{α/σ} in time; throughout we write ν := min{1, σ}.
The σ-parabolic ball is defined as B_R := {Y ∈ R^{N+1} : |Y|_σ < R}. Performing the change of variables x → Rx, t → R^σ t, which maps B_1 onto B_R, we see in particular that the volume of the ball B_R is proportional to R^{N+σ}. In the same way, integrals over B_R can be rescaled to integrals over B_1.
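With the quasidistance written as above, the volume claim is a one-line computation (a sketch, under the assumption |Y|_σ = |x| + |t|^{1/σ}):
$$|\mathcal{B}_R|=\int_{|x|<R}\int_{|t|<(R-|x|)^{\sigma}}dt\,dx = 2\int_{|x|<R}(R-|x|)^{\sigma}\,dx = C_{N,\sigma}\,R^{N+\sigma}.$$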
The estimates. Using formula (2.1), that is, the self-similar form P(x,t) = t^{−N/σ} Φ(x t^{−1/σ}), we deduce that A = (−Δ)^{σ/2} P has the self-similar expression
$$A(x,t) = t^{-(N+\sigma)/\sigma}\,\Psi\big(x\,t^{-1/\sigma}\big), \qquad \Psi := (-\Delta)^{\sigma/2}\Phi.$$
This is the basis for the estimates.
Proposition 2.1 There exists a constant C > 0 such that
$$|A(Y)| \le \frac{C}{|Y|_\sigma^{\,N+\sigma}}, \qquad |\partial_t A(Y)| \le \frac{C}{|Y|_\sigma^{\,N+2\sigma}}, \qquad |\nabla_x A(Y)| \le \frac{C}{|Y|_\sigma^{\,N+\sigma+1}}.$$
Proof. We observe that the Fourier transform of Φ is e^{−|ξ|^σ}, hence that of Ψ is |ξ|^σ e^{−|ξ|^σ}. Using the expression of the inverse Fourier transform of a radial function, and writing Ψ(z) = Ψ(|z|), one obtains that Ψ is bounded and decays at infinity like |z|^{−(N+σ)}, which gives the first estimate. The estimate for the time derivative is a consequence of the identity ∂_t A = −(−Δ)^σ P, which follows immediately from the equation satisfied by P, and of (2.6). In order to estimate the spatial derivative ∇_x A(Y), we consider the equation relating the profiles Φ and Ψ, which follows from (2.7). Since ∇Ψ is bounded and has the appropriate decay, we deduce the estimate. Let us point out that further derivatives may be estimated in a similar way.
Cancelation. We now show that the function A has zero integral in the sense of a principal value adapted to the self-similar variables: we remove a small σ-ball, integrate, and then pass to the limit.

Proposition 2.2 For every R > ε > 0,
$$\int_{B_R\setminus B_\varepsilon} A(Y)\,dY = 0. \tag{2.8}$$
Proof. From the equation for the profile Φ we get an alternative expression for the profile of A. Hence, using the change of variables (2.4) and the behaviour of Φ at infinity, we get the result.
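A quick consistency check, using only the Fourier description of the kernel: for each fixed t > 0 the spatial integral of A already vanishes,
$$\int_{\mathbb{R}^N}A(x,t)\,dx=\widehat{A}(0,t)=\Big[|\xi|^{\sigma}e^{-t|\xi|^{\sigma}}\Big]_{\xi=0}=0,$$
so Proposition 2.2 can be seen as the finer, σ-parabolic version of this cancelation.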
The linear problem
As we said in the Introduction, the solution u to equation (1.1) will be analyzed by writing it as the solution of a linear problem with a particular right-hand side,
$$\partial_t u + (-\Delta)^{\sigma/2} u = (-\Delta)^{\sigma/2} f \quad\text{in } Q, \qquad u(\cdot,0)=u_0 \quad\text{in } \mathbb{R}^N. \tag{3.1}$$
This leads to the representation of u by means of a variation-of-constants formula. We give a proof of this independent fact and then proceed to establish the regularity of this linear problem.
Theorem 3.1 Given f ∈ C^α_σ(Q) ∩ L^∞(Q) for some 0 < α < min{1, σ}, and u_0 ∈ L^p(R^N) for some 1 ≤ p ≤ ∞, there is a unique very weak solution of problem (3.1), which is given by the representation using Duhamel's formula:
$$u(x,t) = \int_{\mathbb{R}^N} P(x-z,t)\,u_0(z)\,dz + \int_0^t\!\!\int_{\mathbb{R}^N} \big((-\Delta)^{\sigma/2}P\big)(x-z,t-s)\,f(z,s)\,dz\,ds. \tag{3.2}$$
Proof.
Step 1. Uniqueness. We may assume u_0 = f = 0 and then apply the results of [3], where a wider class of data and solutions is treated.
Step 2. u is well defined. We only have to take care of the last term in (3.2). In order to prove that this integral is well defined, we decompose the domain of integration into a region near the singularity of A and its complement. The cancellation property (2.8) allows us to estimate the inner integral, and the outer integral is bounded by using estimate (2.5).
Step 3. u is a very weak solution. In order to justify the representation formula we proceed by approximation. Let t̄ > 0 and take f ∈ C^∞_c(Q) with f(x, t) = 0 for t ≥ t̄ − r, r small, thus avoiding the singularity. In that case the integral near the singularity vanishes identically. Since, moreover, u given by (3.2) is a bounded classical solution, hence a very weak solution, the assertion holds.
Next, for any f ∈ C^∞(Q) compactly supported in space (in a uniform way), we approximate with functions f_n as before by modifying f in the time interval t̄ − r_n ≤ t ≤ t̄. Using the fact that the fractional Laplacian can be applied to f_n instead of P, it is easy to see that we can pass to the limit u of the solutions u_n, which is still a bounded classical solution. Moreover, the formula as written holds for functions f of this class, by integrating by parts and using the integrability estimate from Step 2.
Finally, for general f as in the statement, we use approximation of f in a compact set by functions f n ∈ C ∞ (Q) compactly supported in space. Passing to the limit in the very weak formulation, which is again justified thanks to Step 2, we obtain that u = lim u n is a very weak solution.
Regularity of the linear problem
The first term on the right-hand side of the representation formula (3.2) is regular. Hence, u has the same regularity as
$$g(x,t) := \int_0^t\!\!\int_{\mathbb{R}^N} A(x-z,t-s)\,f(z,s)\,dz\,ds. \tag{3.3}$$
We start by proving that g has the same σ-Hölder regularity as f.
Lemma 3.1 Let f ∈ C^α_σ(Q) ∩ L^∞(Q), 0 < α < ν. Then g ∈ C^α_σ(Q).

Proof. Given two points Y_1 = (x_1, t_1), Y_2 = (x_2, t_2) ∈ Q with |Y_1 − Y_2|_σ = h small, we estimate the difference g(Y_1) − g(Y_2) and see if we get that it is O(h^α). We decompose Q into four regions, depending on the sizes of |x − x_1| and |t − t_1|; see Figure 1.
The first region is a σ-parabolic ball B_{̺h}, where ̺ is a constant to be fixed later. We take h small enough (̺h < min{t_1, 1}) so that, on the one hand, B_{̺h} ⊂ Q and, on the other hand, we can use the relation (2.3). The difficulty in this region is the non-integrable singularity of A(Y) at Y = 0; integrability will be gained thanks to the regularity of f. We first bound the singular contribution by repeating the computations in Step 2 of the proof of Theorem 3.1. To estimate the second term in (3.4) we consider the ball B_h(Y_2), with h small enough that this ball stays inside the region and, using again the cancelation property (2.8), we split the integral into two pieces I_1 and I_2. The first integral I_1 satisfies, as before, |I_1| ≤ c h^α. As to I_2, since we are far from the singularity of A, estimate (2.5) applies; notice that α < σ, so that the last integral is convergent.
The required estimate is obtained here by using the fact that we are integrating a difference of A's, so there will be some cancelation. Indeed, by the Mean Value Theorem, the difference is controlled by the derivative estimates of Proposition 2.1, assuming α < ν so that the last integral is convergent.
Remark.
Notice that if some derivative (even a fractional one) of f belongs to C^α_σ(Q) ∩ L^∞(Q), then a computation analogous to that in the above lemma shows that the convolution of this derivative with the kernel A also belongs to C^α_σ(Q) ∩ L^∞(Q). We conclude that g has the same regularity as f.
As a corollary of Lemma 3.1, we obtain a maximal regularity result for the linear equation with a standard right-hand side, ∂_t w + (−Δ)^{σ/2} w = f, which has independent interest. The result follows by noticing that ∂_t w = f − u, where u = (−Δ)^{σ/2} w solves a problem of the form (3.1).
Improving σ-Hölder regularity
We now return to the nonlinear equation (1.1). For bounded weak energy solutions the equation is neither degenerate nor singular. Hence, the results from [2] guarantee that they are C^α_σ for some α ∈ (0, ν). The aim of this section is to improve this regularity by showing that the solutions belong to C^α_σ for every α < ν. Further regularity, showing that the solution is classical, will be obtained in Section 5.
The idea is to use the fact that the solution u to the nonlinear equation (1.1) is a solution to the linear equation (1.2). Since u ∈ C^α_σ(Q), ϕ is uniformly parabolic, and ϕ′ ∈ C^γ(R), applying the Mean Value Theorem we get
$$f(Y_1)-f(Y_2)=\big(u(Y_1)-u(Y_2)\big)\,\frac{\varphi'(u(Y_0))-\varphi'(\theta)}{\varphi'(u(Y_0))},$$
where θ is some value between u(Y_1) and u(Y_2). This gives not only that f has the same regularity as u, namely f ∈ C^α_σ(Q), but a bit more, which will be enough to improve the σ-Hölder regularity of u by a constant factor.

Lemma 4.1 Let f ∈ L^∞(Q) and let g be the function defined in (3.3). Assume that there exist c > 0, δ_0 > 0 and ǫ > 0, α + ǫ < ν, such that f satisfies the global C^α_σ estimate (4.2) in Q and the improved local estimate (4.3), of order α + ǫ, at the point Y_0. Then g is C^{α+ǫ}_σ at Y_0.

Proof. The fact that f is C^{α+ǫ}_σ at Y_0 (with α + ǫ < ν) implies that all the estimates used to prove Lemma 3.1 work and yield terms which are O(h^{α+ǫ}), except the one for the integral I_1 in (3.5). To estimate this term, take ̺h < δ_0 and observe that (4.3) gives the same order. Applying this lemma a finite number of times we obtain the desired regularity.
We must remark that the restriction α + ǫ < ν in Lemma 4.1 is only needed to make the outer integrals convergent; the estimate in the ball B_{̺h}(Y_0) is true for any α ∈ (0, ν), ǫ ∈ (0, 1]. This observation turns out to be of great importance in obtaining further regularity in the next section.
Classical solutions
Our next aim is to go beyond the C^ν_σ threshold of regularity. We encounter here an additional difficulty, stemming from the nonlocal character of the fractional Laplacian operator, which is not present in the work [7], namely that second-order increments in space and in time cannot be handled simultaneously. For that reason we must treat the second-order estimates in the time and space variables separately. We begin by improving the regularity in space, to obtain u(·, t) ∈ C^α(R^N) uniformly in t for some α > ν depending on the regularity of the nonlinearity ϕ. We then use equation (1.1) to get Lipschitz regularity in time, which is later improved to u(x, ·) ∈ C^{ν(1+γ)/σ}(R_+) uniformly in x. The last step is to reach the desired smoothness in space, u(·, t) ∈ C^{ν(1+γ)}(R^N) uniformly in t.
Lemma 5.1 Let f ∈ L^∞(Q) satisfy (4.2) and (4.3) with 0 < α < ν, 0 < ǫ < 1, and let g be the function defined in (3.3). Then, for every e ∈ R^N, |e| = 1,
$$\big|g(Y_0+(he,0)) + g(Y_0-(he,0)) - 2g(Y_0)\big| \le C\,h^{\alpha+\epsilon}.$$
Proof. Put Y = Y_0 + (he, 0) and let Y* = 2Y_0 − Y be its symmetric point with respect to Y_0. We have to estimate the second difference g(Y) + g(Y*) − 2g(Y_0). As in the proof of Lemma 3.1, we consider separately the contributions to the integral in several regions, though here we only need to consider the ball B_{̺h}(Y_0) and its complement. To estimate the contribution in the complement of the ball we use Taylor's formula together with Proposition 2.1, where θ is as before some intermediate point. We have used that α + ǫ < 2, and so the integral converges. This completes the desired estimate.
Lemma 5.2 Under the hypotheses of Theorem 1.1, (−Δ)^{δ/2}u(·, t) is bounded in R^N, uniformly for t ≥ τ > 0, for every δ ∈ (0, ν(1 + γ)).

Proof.
For each given Y_0 ∈ Q we define a function g as above and, as in the proof of Lemma 4.1, we deduce estimate (5.1) with ǫ = αγ at that point, which translates into the same estimate for the solution u. Since the constants do not depend on the particular point chosen, we get that u satisfies the second-difference estimate with a constant uniform in Q. We can thus prove that (−Δ)^{δ/2}u is bounded in R^N for every t > τ > 0 and every δ ∈ (0, α(1 + γ)). The result now follows from [20, Proposition 2.9].
Lemma 5.3 Under the hypotheses of Theorem 1.1, u(x, ·) ∈ C^{ν(1+γ)/σ}(R_+) uniformly in x ∈ R^N.

Proof. We first show that |(−Δ)^{σ/2}ϕ(u)| is bounded in Q. For that purpose we estimate the second differences in x of ϕ(u) in terms of second differences in x of u and use the previous result: if Z = (he, 0), e ∈ R^N, |e| = 1, the second difference of ϕ(u) at step Z is of order h^{α(1+γ)} for every α < ν. Since ν(1 + γ) > σ we get, analogously to how we obtained (5.2), that |(−Δ)^{σ/2}ϕ(u)| ≤ c in Q. Now, using the equation, we get that |∂_t u| ≤ c in Q; that is, u is Lipschitz continuous in time, uniformly in space. This means u ∈ C^σ_σ(Q). With this information we now repeat the above calculations of Lemma 5.2 with x_0 fixed and varying t. To this end we consider the point Y = Y_0 + (0, h), h > 0 (for simplicity), and we replace h by h^{1/σ} in the regions of integration; see the proof of Theorem 3.1.
First, the integral in the ball B_{̺h^{1/σ}}(Y_0) is estimated as in Lemma 3.1, taking into account (4.1), which holds with α = ν.

Now consider the region far from Y_0, where the characteristic functions all take the value one. Thus, by using Taylor's expansion, since only t varies, the contribution there is of the desired order.

We now turn our attention to the difficult part, the small slice S_{h^{1/σ}} = {Y ∈ B^c_{̺h^{1/σ}}(Y_0) : |t − t_0| < h}, where we have to look more carefully at the possible cancelations. The contribution splits into two integrals, J_1 and J_2. First, by the Mean Value Theorem applied to A in the time variable, together with the C^ν_σ regularity of u and Lemma 3.1, we estimate J_1. As to the second integral J_2, performing the change of variables Y → Z_1 = Y* ≡ (x, 2t_0 − t), symmetric in time, in the second term (and writing again Y instead of Z_1), we conclude using the cancelation of the kernel.
Using the worst case, we can write the joint regularity in a form in which both variables play the same role. We also obtain that the solution is classical, since it has continuous derivatives in the sense required by the equation.

Proof. We point out that both sides of the equation are bounded functions and are equal almost everywhere. We also know that ∂_t u is Hölder continuous as a function of t for a.e. x, and the Hölder continuity is locally uniform. On the other hand, we easily conclude that (−Δ)^{σ/2}ϕ(u) is Hölder continuous as a function of x for a.e. t, and this happens again locally uniformly. Hölder continuity everywhere in both variables follows.
Let us recall that under our assumptions σ < ν(1 + γ), so that we obtain Hölder regularity for ∂_t u in all cases.
Higher regularity
We have already proved that solutions of (1.1) are differentiable in time. However, in view of Lemma 5.4, at this stage they are only known to be differentiable in space if σ(1 + γ) > 1, where γ is the Hölder exponent of ϕ′. This assumption can be weakened.

Proposition 6.1 Under the assumptions of Theorem 1.1, if σ < 1 and γ + σ > 1, then u ∈ C^{1,α}(Q) for some α ∈ (0, 1).
We consider the function z = ∂_t u, which belongs to C^α_σ(Q) for all α < σ. Let Y_0 = (x_0, t_0) ∈ Q be fixed and denote a(Y) = ϕ′(u(Y)), z_0 = z(Y_0), a_0 = a(Y_0). Then z is a distributional solution to an inhomogeneous fractional heat equation with frozen coefficient a_0 and right-hand side F_1 + F_2. We decompose z as z_1 + z_2, where z_i is the solution corresponding to the datum F_i. The term z_2 inherits the regularity of F_2, that is, the regularity of a(Y). As to z_1, we use the fact that the function F_1 = (a − a_0)(v − v_0) satisfies conditions (4.2) and (4.3), which implies, thanks to Lemma 4.1, that z_1 is smoother than a, hence smoother than z_2. Therefore, we concentrate on the 'bad' term, z_2.
The regularity of F 2 , that is, the regularity of ϕ ′ (u), coincides with the minimum between the regularities of ϕ ′ and u. Therefore, F 2 (x, ·) belongs to C γ (R) uniformly in x. As for spatial regularity, at this stage we know that F 2 (·, t) is C α (R N ) uniformly in t for all α < min{σ(1 + γ), γ}.
Using the equation we conclude that u(·, t) ∈ C γ+σ (R N ) uniformly in time. Since we have assumed that γ + σ > 1, this means that u is differentiable also in x.
Proof.
The proof of C^{1,α} regularity is done by considering the linear equations satisfied by the derivatives; boundedness of the derivatives then immediately follows. This linear result is now used to obtain further regularity for the nonlinear problem, which covers in particular Theorem 1.2.
Assume that the result is true for derivatives of order
It is easily checked that v_β satisfies an equation of the form required by Theorem 6.1 for some α ∈ (0, 1). Since u is bounded, a = ϕ′(u) ≥ δ > 0. Hence we may apply Theorem 6.1 to conclude the result.
Nonlinear degenerate and singular equations
A careful inspection of the proof of Theorem 1.1 shows that the result has a local nature, and this will be exploited here to treat more general equations. Hence u is a classical solution of (1.1) in that set.
Proof. We have to revise the proofs of all the results in Subsection 3.2 and Sections 4 and 5. For instance, in the proof of Lemma 3.1, we have to replace the assumption f ∈ C^α_σ(Q) by f ∈ C^α_σ(Ω) to conclude that g belongs to the same space, and this is true since f is also bounded. The same holds for Lemma 4.1.
In order to apply this result we need to make sure that the solution is C^α_σ in some set Ω ⊂ Q. This has been proved under certain conditions in [2]: for some A, B ∈ R, A < B, there exists a constant C = C(A, B) > 0 such that hypothesis (7.1) holds. Indeed, in this case every bounded weak solution u to the equation in (1.1) satisfying A ≤ u ≤ B belongs to C^ε(Ω) for some ε = ε(C), and thus u ∈ C^α_σ(Ω), α = νε.

Application to the fractional porous medium equation.
When ϕ(u) = |u|^{m−1}u, m ≥ 1, hypothesis (7.1) is satisfied with a constant C = m which does not depend on A, B. Therefore, bounded weak solutions are uniformly Hölder continuous in R^N × (τ, ∞), τ > 0. However, the equation degenerates when u = 0. Hence the application of Theorem 7.1 only yields the regularity stated there in the set {u ≠ 0}.
In the fast diffusion case m < 1, hypothesis (7.1) only holds if A > 0 or B < 0. Thus, C^α regularity is only guaranteed in the positivity set (or negativity set) of a solution. Nevertheless, the application of our result leads to the same conclusion as in the case m > 1 in the set {u ≠ 0}.
On the other hand, in our paper [17] we prove, for all m > 0, that when the initial value is nonnegative the solution is strictly positive everywhere for positive times. We obtain that the solution belongs to C^α in this case, and the application of the results of the present paper then implies that the solution is classical. The positivity property holds for all m > 0, which is in sharp contrast with the nonlinear theory for the standard Laplacian and m > 1, where the existence of free boundaries is well known [22].
Theory of existence and basic properties
As a complement to the previous regularity theory, we devote this section to a survey of the main facts of the existence and uniqueness theory for the Cauchy problem for equation (1.1). Such a theory has been developed in great detail in the paper [17] for the case where ϕ is a power function. As in the case of the standard (local) porous medium equation, many of the basic features of the theory can be extended to more general nonlinearities ϕ, as long as they are continuous and nondecreasing, cf. [9]. Therefore, we will outline here how such an extension can be done in the fractional case σ ∈ (0, 2), with special attention to the points where the arguments differ.
Let us recall the concept of weak solution to the Cauchy problem (CP): a function u in the appropriate energy class such that the weak formulation holds for every ζ ∈ C_c^∞(Q), and u(·, 0) = u_0 almost everywhere. The (homogeneous) fractional Sobolev space Ḣ^{σ/2}(R^N) is the space of locally integrable functions ζ such that (−∆)^{σ/4}ζ ∈ L²(R^N). We point out that this is a convenient choice among other possible notions of weak solution; to be specific, it can be described as a weak L¹-energy solution.
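To fix ideas, the following is a sketch of the weak formulation we have in mind, in the spirit of the weak L¹-energy solutions of [17]; the precise functional class stated here is an assumption:

```latex
% u \in C([0,\infty):L^1(\mathbb{R}^N)) with
% \varphi(u) \in L^2_{loc}((0,\infty):\dot H^{\sigma/2}(\mathbb{R}^N)),
% and, for every test function \zeta \in C_c^\infty(Q),
\int_0^\infty\!\!\int_{\mathbb{R}^N} u\,\partial_t\zeta \,dx\,dt
  \;=\; \int_0^\infty\!\!\int_{\mathbb{R}^N}
        (-\Delta)^{\sigma/4}\varphi(u)\,(-\Delta)^{\sigma/4}\zeta \,dx\,dt .
```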
Solutions with bounded initial data
We will start by considering the theory for initial data u_0 ∈ L¹(R^N) ∩ L^∞(R^N). Existence and uniqueness are proved by using the definition of the fractional Laplace operator based on the extension technique developed by Caffarelli and Silvestre [6], which is a generalization of the well-known Dirichlet to Neumann operator corresponding to σ = 1. Thus, if g = g(x) is a smooth bounded function defined in R^N, its σ-harmonic extension to the upper half-space, v = E(g), is the unique smooth bounded solution v = v(x, y) to ∇·(y^{1−σ}∇v) = 0 for x ∈ R^N, y > 0, with v(x, 0) = g(x) (8.1). Then it turns out, see [6], that (−∆)^{σ/2}g(x) = −μ_σ lim_{y→0⁺} y^{1−σ} ∂_y v(x, y) (8.2), where μ_σ = 2^{σ−1}Γ(σ/2)/Γ(1 − σ/2). In (8.1) the operator ∇ acts in all (x, y) variables, while in (8.2) (−∆)^{σ/2} acts only on the x = (x_1, ..., x_N) variables.
Using this approach, problem (CP) can be written in an equivalent local form. If u is a solution, then w = E(ϕ(u)) solves the corresponding extension problem (8.3), where β = ϕ^{−1}. Conversely, if we obtain a solution w to (8.3), then u = β(w)|_{y=0} is a solution to (CP).
We use the concept of weak solution for problem (8.3) obtained by multiplying by a test function ζ and integrating by parts. We then introduce the energy space X^σ(R^{N+1}_+). In order to solve the evolution problem, which is our concern, we use the Nonlinear Semigroup Generation Theorem due to Crandall-Liggett [8]. We are thus reduced to dealing with the related elliptic problem: ∇·(y^{1−σ}∇w) = 0 for x ∈ R^N, y > 0, with −∂w/∂y^σ + β(w) = g for x ∈ R^N, y = 0, and g ∈ L¹₊(R^N) ∩ L^∞(R^N). As in the case treated in [17], in order to get a solution by variational techniques it is convenient to replace the half-space R^{N+1}_+ by a half-ball B⁺_R = {|x|² + y² < R², x ∈ R^N, y > 0}. We impose zero Dirichlet data on the "new part" of the boundary. Therefore we are led to study the corresponding problem on B⁺_R with g ∈ L^∞(D_R) given, D_R being the flat part of the boundary. Minimizing the associated functional we obtain a unique solution w = w_R to problem (8.5). Moreover, if g_1 and g_2 are two admissible data, then the corresponding weak solutions satisfy the L¹-contraction property ∫_{D_R} (β(w_1(x, 0)) − β(w_2(x, 0)))₊ dx ≤ ∫_{D_R} (g_1(x) − g_2(x))₊ dx.
The passage to the limit R → ∞ uses the monotonicity in R of the approximate solutions w_R. We obtain a function w_∞ = lim_{R→∞} w_R which is a weak solution to problem (8.4). The above contractivity property also holds in the limit. Moreover, ‖β(w_∞(·, 0))‖_{L^∞(R^N)} ≤ ‖g‖_{L^∞(R^N)}, and w_∞ ≥ 0, since g ≥ 0.
Now, using the Crandall-Liggett Theorem we obtain the existence of a unique mild solution w to the evolution problem (8.3). To prove that w is moreover a weak solution to problem (8.3), one needs to show that it lies in the right energy space. This is done using the same technique as in [16], which yields the corresponding energy estimate. Hence the function u = β(w(·, 0)) is a weak solution to problem (CP). In addition, ‖β(w(·, 0))‖_{L^∞(R^N×(0,∞))} ≤ ‖u_0‖_{L^∞(R^N)}, and w ≥ 0. We also recall the isometry between Ḣ^{σ/2}(R^N) and X^σ(R^{N+1}_+). The Semigroup Theory also guarantees that the constructed solutions satisfy the L¹-contraction property. Uniqueness follows by the standard argument due to Oleinik et al. [15], using a suitable test function in the weak formulation for the difference of two solutions u and ũ.
Summarizing, we have proved the following existence and uniqueness result.
Theorem 8.1 Let ϕ ∈ C(R) be nondecreasing. Given u 0 ∈ L 1 (R N ) ∩ L ∞ (R N ) there exists a unique bounded weak L 1 -energy solution to problem (CP).
Solutions with unbounded data. Boundedness and decay
If the (nondecreasing) nonlinearity ϕ satisfies ϕ ′ (u) ≥ C|u| m−1 for some m ∈ R and |u| ≥ C, then weak solutions with initial data in L 1 (R N ) ∩ L p (R N ), where p ≥ 1 satisfies p > p(m) = (1 − m)N/σ, become immediately bounded, hence, thanks to our results, classical.
The idea is to take as test function in the weak formulation ζ = (|u| − 1)₊^{p−1} sign(u). Though u is not differentiable in time a.e. for a general ϕ, this is not needed for the proof, since a regularization procedure using Steklov averages allows us to bypass this difficulty; see for example the classical paper [1] for the case of local operators. Hence, we only have to check that ζ ∈ L²_loc((0, ∞) : Ḣ^{σ/2}(R^N)) for every p ≥ 2. This will follow from the following result applied to v = ϕ(u).
Proof. The identity follows from the extension technique.

Remark. If moreover f is convex, a similar computation yields an analogous estimate.

If we use the above test function and apply the generalized Stroock-Varopoulos inequality, proved in [17, Lemma 5.2], together with the Hardy-Littlewood-Sobolev inequality [12], [21] for N > σ, we get an iterative estimate for every p ≥ 2. If in addition p > (1 − m)N/σ, this inequality is enough to apply a standard Moser iteration technique and obtain an L^p-L^∞ smoothing effect. We can then weaken the restriction p ≥ 2 to p ≥ 1 by interpolation; see [17]. Note that in the case N = 1 ≤ σ < 2 we must replace the Hardy-Littlewood-Sobolev inequality by a Nash-Gagliardo-Nirenberg inequality; see [17, Lemma 5.3]. We omit further details.
Let us state precisely the smoothing result thus obtained for future reference; this is Theorem 8.2, which contains the smoothing estimate (8.6).

Remark. If the function ϕ satisfies the condition ϕ′(u) ≥ C|u|^{m−1} for every u ∈ R and some fixed m > 0, a classical scaling argument allows one to obtain a decay estimate for every t > 0; see for instance [23]. Indeed, the function v(x, t) = λ^{γ_p} u(λ^{pγ_p/N} x, λt) is a solution to the equation ∂_t v + (−∆)^{σ/2} ϕ_λ(v) = 0 with ϕ_λ(s) = λ^{mγ_p} ϕ(λ^{−γ_p} s), which satisfies the same condition on the derivative. Thus, applying (8.6) at t = 1 and putting λ = t we get the decay estimate.

Existence for data which are unbounded is proved by approximation; see [17] for the details in the case where the nonlinearity is a pure power. As for uniqueness, continuity in L¹ guarantees that two solutions with the same initial data do not differ by more than ε in L¹ norm for some small enough time. Since for positive times solutions are known to be bounded, we may use the L¹-contraction property to prove that the distance between the two solutions stays smaller than ε for any later time. Since ε is arbitrary, uniqueness follows.
Extensions and comments
Some applications. Equation (1.1) appears in the study of hydrodynamic limits of interacting particle systems with long range dynamics. Thus, in [11], Jara and coauthors study the non-equilibrium functional central limit theorem for the position of a tagged particle in a mean-zero one-dimensional zero-range process. The asymptotic behavior of the particle is described by a stochastic differential equation governed by the solution of (1.1).
In several space dimensions, equations like (1.1) occur in boundary heat control, as already mentioned by Athanasopoulos and Caffarelli [2], where they refer to the model formulated in the book by Duvaut and Lions [10], and use the extension technique of Caffarelli and Silvestre.
For a more thorough discussion on applications see [5].
Regularity for unbounded solutions. In our proofs we require the solutions to be bounded in order to make the integrals over unbounded sets convergent. However, this requirement may not be needed for that purpose. It may be enough that the solutions belong to C([0, T] : L¹(R^N, ρ dx)). It would be interesting to explore this possibility, since it may be helpful in the study of higher regularity.
Higher regularity for the fractional porous medium equation. The main difficulty in obtaining further regularity in this case is that, since the equation is not uniformly parabolic at infinity (it is not true that 0 < c ≤ ϕ′(u) ≤ C < ∞), we do not know the derivatives to be bounded. Hence, we cannot apply Theorem 6.1 directly. However, as mentioned in the previous paragraph, this might be circumvented by replacing the boundedness requirement with some less restrictive condition. The precise quantitative statements of the positivity property obtained in [5] might be helpful for this purpose.
The fractional porous medium equation with sign changes. Our results only give that the equation is satisfied in a classical sense where the solution is different from 0. It remains to determine what is the optimal regularity for changing sign solutions. A first step would be to study whether solutions are strong, i.e., whether ∂ t u (and hence (−∆) σ/2 u) are functions, and not only distributions.
The very fast fractional porous medium equation. The nonlinearities ϕ(u) = ((1 + u)^m − 1)/m, m ≠ 0, are uniformly parabolic if we restrict ourselves to nonnegative solutions. Moreover, they fall within the hypotheses of Theorem 8.2 if we modify the nonlinearity suitably for u < 0, which does not matter if we only consider nonnegative solutions. Therefore, we obtain existence of C^∞ solutions for all nonnegative initial data in L¹(R^N) ∩ L^p(R^N) with p large enough. If σ > 1 − m and N = 1 we can even take p = 1.
The nonlinearity ϕ(u) = log(1 + u) is also uniformly parabolic if we restrict to nonnegative solutions. In addition, after a suitable modification for u < 0, it satisfies the hypotheses of Theorem 8.2 with m = 0. Thus, if N = 1 and σ = 1 we are in the critical case where we need a bit more than integrability to have existence. In [18] we proved that it is enough for u 0 to belong to some L log L space. The solution is then guaranteed to be C ∞ .
The singular nonlinearities ϕ(u) = u m /m, m < 0, and ϕ(u) = log u (with u > 0) cannot be treated in the same way, and require new ideas.
The fractional Stefan problem. For the Stefan nonlinearity ϕ(u) = (u − 1) + , hypothesis (7.1) holds if A > 1. Hence bounded weak solutions are C α in the set where u > 1 and our main result proves that they are C 1,γ , hence classical, in that set for all γ ∈ (0, 1). Let us mention that u is known to be continuous everywhere, though not C α . It would be interesting to determine what is the optimal regularity for this problem.
We recall that, acting on smooth enough functions, the half-Laplacian (−∆)^{1/2} can be written in terms of the Hilbert transform, Hf(x) = (1/π) P.V. ∫_R f(y)/(x − y) dy. Rewriting the equation in this way, and after a suitable change of variables, one is led to a relation of the form H̃(v(y, τ)) = H̃(ϕ(u(x, t))) for a modified transform H̃. If instead of H̃ we had the standard Hilbert transform H, and we take m = −δ, we would have an equation in Morlet's family (9.1). The connection also works for the case m = 0, if we take ϕ(u) = log(1 + u); see [18].
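For reference, a sketch of the standard one-dimensional identities behind this remark, valid for smooth, decaying f:

```latex
Hf(x) \;=\; \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{\mathbb{R}} \frac{f(y)}{x-y}\,dy,
\qquad
(-\Delta)^{1/2} f \;=\; H(f') \;=\; (Hf)' .
```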
Anomaly Detection and Localization for Cyber-Physical Production Systems with Self-Organizing Maps
Modern Cyber-Physical Production Systems provide large amounts of data such as sensor and control signals or configuration parameters. The available data enables unsupervised, data-driven solutions for model-based anomaly detection and anomaly localization: models which represent the normal behavior of the system are learned from data. Then, live data from the system can be compared to the predictions of the model to detect anomalies and perform anomaly localization. In this paper we use self-organizing maps for the aforementioned tasks and evaluate the presented methods on real-world systems.
Introduction
Modern Cyber-Physical Production Systems (CPPS) evolve rapidly: due to increasing product variety, product complexity and the pressure for efficiency in a distributed and globalized production chain, they become modular, can be parameterized and contain a growing set of sensors [1]. This also means it becomes more and more difficult to monitor the systems. Human operators often struggle to diagnose faults or anomalous behavior in the system in time, leading to system breakdown, unexpected downtime or degradation in product quality.
A common approach to detect the aforementioned scenarios is to construct models for a given system and compare the predictions of the model to the real system. Anomalous behavior is detected when the real system's behavior deviates from the model's predictions. Manual construction of system models by experts is usually time-consuming, expensive and also difficult in today's evolving complex systems. Experts with the necessary knowledge are usually scarce, and oftentimes some of the necessary knowledge is not available at all. Modern CPPS often provide large amounts of data such as control signals, sensor signals and configuration parameters [10]. This allows the use of data-driven methods: models are learned from data and then used for various tasks such as anomaly detection and anomaly localization.
Live data from the system is compared to the predictions of the learned model. Deviations from the normal behavior are classified as anomalous. Once anomalies are found, the anomalous samples are presented to a reverse model to localize the anomalies. This provides a starting point for plant operators and experts to restore the system to normal working order, ideally before production losses occur. In this paper we use self-organizing maps (SOM) to learn a system's normal behavior in an unsupervised manner. The learned SOMs are then used for both anomaly detection and anomaly localization. The contents of this paper are structured as follows: First, section 2 explains the general concept of self-organizing maps. Second, section 2.1 presents an approach to detect and localize anomalies within the signal domain of a system. Third, section 2.3 introduces an approach where timed automata are used to track the working point on top of the self-organizing map in order to detect anomalies in the time domain. Furthermore, the aforementioned approaches to anomaly detection and localization are applied and explained on the Institute Industrial IT's OPAK demonstrator [11] in section 3. Section 4 presents the conclusion and future points of research.
Self-Organizing Map
The self-organizing map (SOM) [5], also referred to as self-organizing feature map or Kohonen network, is a neural network that can be associated with vector quantization, visualization and clustering but can also be used as an approach for non-linear, implicit dimensionality reduction [17]. A SOM consists of a collection of neurons which are connected in a topological arrangement, usually a two-dimensional rectangular or hexagonal grid. The input data is mapped to the neurons forming the SOM. Each neuron is essentially a weight vector of original dimensionality but provides additional information such as its coordinates within the grid. All experiments in this paper use a two-dimensional, non-toroidal rectangular lattice and the Euclidean distance measure, as shown in Definition 1.

Definition 1. A SOM consists of a set of neurons M, where:
- each neuron n ∈ M has a weight vector w_n ∈ R^m, m ∈ N.
- G is a two-dimensional rectangular lattice in which the neurons n ∈ M are arranged.
- d(x, y) is the distance measure used to calculate the distance between two vectors x and y, which can for example be weight vectors and/or vectors in the input space. The Euclidean distance is used for all models in this paper.
- an input sample o_i ∈ R^m, i ∈ N, m ∈ N is mapped to the SOM through its best matching unit (BMU). The BMU is given by bmu(o_i) = argmin_{n∈M} d(o_i, w_n).

One way to learn a SOM from data is a random batch training approach: the initial values of the neurons' weight vectors can be randomly initialized or sampled from the training data to provide a diverse starting point for the training process. Training takes place over a chosen number of epochs. All samples from the training data are presented to the algorithm within one epoch. A best matching unit (BMU) is calculated for each input sample from the training data by finding the neuron which has the smallest distance to the sample. The BMU and all of its neighboring neurons, assigned through the topology and neighborhood radius, are shifted towards the input sample (Figure 1). Both the size of the neighborhood and the strength of the shift decrease over time to help with convergence. In the end, each neuron of the SOM represents a part of the training data. Areas in the input space with few examples are represented by few neurons of the SOM, while dense areas are represented by a larger number of neurons. Usually, the number of neurons is chosen much smaller than the number of samples in the training data, effectively discretizing and reducing the training data to the most important samples.
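To make the training loop concrete, here is a minimal sketch of such a random batch training (NumPy-based; the grid size, epoch count and decay schedules are illustrative assumptions, not the settings used in the experiments):

```python
import numpy as np

def train_som(data, rows=10, cols=10, epochs=50, lr0=0.5, radius0=None, seed=0):
    """Minimal random-batch SOM training on a rectangular, non-toroidal grid."""
    rng = np.random.default_rng(seed)
    n_neurons = rows * cols
    # Initialize weights by sampling from the training data (one of the options above).
    weights = data[rng.choice(len(data), n_neurons)].astype(float)
    # Grid coordinates of each neuron, used by the neighborhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    radius0 = radius0 or max(rows, cols) / 2.0
    for epoch in range(epochs):
        # Both learning rate and neighborhood radius decrease over the epochs.
        frac = epoch / max(epochs - 1, 1)
        lr = lr0 * (1.0 - frac)
        radius = radius0 * np.exp(-3.0 * frac)
        for o in data[rng.permutation(len(data))]:
            # Best matching unit: neuron with the smallest Euclidean distance.
            bmu = np.argmin(np.linalg.norm(weights - o, axis=1))
            # Gaussian neighborhood on the grid: BMU and its neighbors are shifted.
            g_dist = np.linalg.norm(grid - grid[bmu], axis=1)
            h = np.exp(-(g_dist ** 2) / (2.0 * radius ** 2))
            weights += lr * h[:, None] * (o - weights)
    return weights, grid
```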
The compact representation of the training data provided by the SOM is used in section 2.1 to detect anomalies by calculating the quantization error. The unified distance matrix (u-matrix) [14] of the SOM is well suited for the visual identification of clusters in high-dimensional data: the distance to neighboring neurons according to the SOM's topology is computed and plotted as an image. The X and Y coordinates of the neurons represent the first two dimensions; the third dimension is given by the sum of distances to neighboring neurons, as in Definition 2. Since the neurons located on the borders of the non-toroidal SOM have fewer neighbors than the remaining neurons, the summed distance is divided by the number of neighbors of the corresponding neuron. A color gradient can be used instead of a third dimension, as shown in Figure 3: valleys are represented by the color yellow, indicating a low distance between neighboring neurons, while ridges are represented by the color red, indicating a high distance between neighboring neurons.

Definition 2. For each neuron n ∈ M and its associated weight vector w_n, the u-matrix height is given by U(n) = Σ_{k∈NN(n,G)} d(w_n, w_k), where NN(n, G) is the set of neighboring neurons of n defined by grid G and d(x, y) is the distance used in the SOM algorithm.
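A direct implementation of Definition 2, including the border normalization described above, might look as follows (a sketch assuming a rectangular grid and a 4-neighborhood):

```python
import numpy as np

def u_matrix(weights, rows, cols):
    """U-matrix per Definition 2: distance of each neuron to its grid neighbors."""
    w = weights.reshape(rows, cols, -1)
    u = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            # 4-neighborhood on the non-toroidal rectangular lattice.
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dists.append(np.linalg.norm(w[r, c] - w[nr, nc]))
            # Border neurons have fewer neighbors, hence the division by the count.
            u[r, c] = np.sum(dists) / len(dists)
    return u
```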
The u-matrix representation illustrates why SOMs were chosen: SOMs tend to keep neurons with similar signal weights close together, which results in a topographic landscape with valleys, where the weights of neighbors are similar, and ridges, where they are not. Valleys represent regions where the contained neurons' weight vectors are very similar. These valleys are separated by ridges which mark transitions between the different feature spaces. In section 2.3 we further explore this matter to detect anomalies within the time domain.
Anomaly detection with quantization error
The SOM can be used to detect anomalies by calculating the quantization error: small errors below a threshold are considered normal, while errors above it are considered anomalous. Quantization-error-based approaches have already been used for tasks such as network monitoring [6] and anomaly detection in industrial processes [12][3][13]. These works, however, did not perform anomaly localization, and only [13] used the quantization error as a measure for system degradation.
The quantization error (Definition 3) of each sample is calculated by mapping it to the SOM to get its BMU. The distance of the sample to the BMU's weight vector is the quantization error.

Definition 3. Using the notation from Definition 1, the quantization error qe of an input sample o_i ∈ R^m, i ∈ N is given by the distance of the input sample to its BMU on the SOM: qe(o_i) = d(o_i, w_{bmu(o_i)}).

The quantization errors for data that is not anomalous are usually greater than 0 due to the discretization of the SOM. A threshold for the quantization error above which an input sample is classified as anomalous is therefore required. Manual selection of the threshold works but is usually infeasible for practical applications. It is far more convenient to estimate the threshold from data: the quantization errors of the training data can be seen as a probability distribution and quantiles can be used to retrieve the threshold for the anomaly detection. The quantile can be adjusted; we will use the parameter τ with τ ∈ R and 0.0 ≤ τ ≤ 1.0 within this paper. It can be tuned to optimize the outcome of the anomaly detection: when labels are present, τ can be used to fine-tune the anomaly detection score. When the training data is perfect, meaning it contains only normal behavior, no sampling errors, no glitches in the sensors and no noise, then a τ of 1.0 is fine, as this results in the maximum error as the threshold. However, training data is never perfect when working with real systems and might contain a small portion of samples affected by noise and/or other effects. The maximum error might then be too large to effectively find anomalies. Setting τ to a value slightly smaller than 1.0 can increase the true positive rate of the anomaly detection at the cost of some false positives, depending on the use case and desired outcome.
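In code, the quantile-based threshold estimation and detection described above might look like this (a sketch reusing the `weights` from the training sketch; the function names are illustrative):

```python
import numpy as np

def quantization_errors(data, weights):
    """Definition 3: distance of each sample to the weight vector of its BMU."""
    # Pairwise distances between all samples and all neuron weight vectors.
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return d.min(axis=1)

def fit_threshold(train_qe, tau=1.0):
    """Estimate the anomaly threshold as the tau-quantile of the training errors."""
    return np.quantile(train_qe, tau)

def detect_anomalies(live_data, weights, threshold):
    """A sample is flagged anomalous if its quantization error exceeds the threshold."""
    return quantization_errors(live_data, weights) > threshold
```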
Localization of anomalies
An additional step after the anomaly detection is to determine the signal or sensor which most likely caused the anomaly. Anomaly localization is performed after an anomaly is detected: the observation found to be anomalous is fed through a reverse model to obtain the expected values for the signals. The deviations from the expected values can then be used to identify the signals related to the anomaly. The weight vector of each neuron of the SOM has the same dimensions as the input data and each element of the weight vector contains the value of its corresponding signal. Once an anomaly is detected, the input sample is again mapped to the SOM to retrieve the BMU. Then the distance of each signal to the corresponding element of the weight vector is calculated, and the signals are sorted in descending order according to their distance. Since real-world systems usually provide a large number of signals, it is necessary to reduce the number of displayed signals. Therefore only the first n signals are displayed, giving plant experts and operators a starting point to locate the anomaly and possible fault in the system and ultimately restore the system's normal working order. For the experiments in this paper we only consider the signal with the largest deviation (n = 1).
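A sketch of this reverse-model localization step: map the anomalous sample to its BMU and rank the signals by their deviation from the BMU's weight vector (the signal names in the comment are those of the demonstrator in section 3):

```python
import numpy as np

def localize_anomaly(sample, weights, signal_names, top_n=1):
    """Rank signals by |observed - expected|, using the BMU as reverse model."""
    bmu = np.argmin(np.linalg.norm(weights - sample, axis=1))
    deviation = np.abs(sample - weights[bmu])  # per-signal distance to expectation
    order = np.argsort(deviation)[::-1]        # largest deviation first
    return [(signal_names[i], deviation[i]) for i in order[:top_n]]

# Example with the demonstrator's five real-valued signals:
# localize_anomaly(x, weights, ["current", "position", "speed", "acceleration", "force"])
```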
SOM trajectory tracking with timed automata
Another way to utilize self-organizing maps for anomaly detection in industrial production processes is to track the trajectory of the working point on top of the SOM. Other works such as [7], [12] and [2] already performed a visual anomaly detection on different processes by tracking the trajectory of the working point on the SOM: the observations are mapped to their corresponding best matching units as soon as they are recorded. Over time, the path or trajectory of the BMU can be observed and deviations from the known path indicate anomalous behavior.
However, these works do not attempt to track the trajectory through the use of a mathematical model and only pursue a visual anomaly detection by plotting the trajectory on top of the SOM's u-matrix. In this section we use discrete timed automata to learn the trajectory during normal production and detect deviations from it afterwards. This provides explicit modeling of time which the SOM is unable to do alone.
Timed automata have proven to be a great tool to learn the normal behavior of a system and detect deviations from it. Discrete events are required to learn an automaton. They often cause mode changes within the system and the timing of these events is an important indicator for the health of the system. Timed automata are used to separate the system's modes and model the transitions and timing between the identified modes.
Discrete events can be directly extracted from changes in the binary control and sensor signals of the system. It is also possible to obtain discrete events through thresholds for real-valued signals, such as temperature < 19 °C [4]. However, setting the thresholds and combinations of conditions for the continuous signals requires expert knowledge which is usually not available for real-world automation systems. For unsupervised learning of these automata only binary control signals are used to obtain the discrete events, such as HeaterOn = true. Algorithms such as the bottom-up timed learning algorithm (BUTLA) [8] work in an unsupervised manner and do not require additional expert knowledge.
A timed automaton generated by the aforementioned algorithm can be defined as described in Definition 4. The learned automaton can then be used to detect a variety of different classes of anomalies, for example using the anomaly detection algorithm (ANODA) [8], which can detect the following types of anomalies:
- Unknown event / wrong event sequence: an event occurred which was not observed in the current state.
- Timing error: a transition occurred outside of the learned time bounds.
- State remaining error: more time has passed than observed for the latest possible event while the state is not a final state.
- Probability error: the transition probabilities for the new data are calculated and compared to the previously learned probabilities, and an error is generated when the deviations are too large.
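To illustrate how a learned timed automaton can flag the first two anomaly classes, here is a minimal sketch; the data structures are illustrative assumptions, and BUTLA and ANODA themselves are specified in [8]:

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    target: str
    t_min: float  # learned lower time bound of the transition
    t_max: float  # learned upper time bound of the transition

@dataclass
class TimedAutomaton:
    # state -> event -> Transition, as learned from the normal training traces
    transitions: dict = field(default_factory=dict)

def check_trace(automaton: TimedAutomaton, state: str, trace):
    """Flag unknown events and timing errors in a trace of (event, dt) pairs."""
    errors = []
    for event, dt in trace:
        outgoing = automaton.transitions.get(state, {})
        if event not in outgoing:
            # Unknown event / wrong event sequence anomaly.
            errors.append(f"unknown event {event!r} in state {state!r}")
            break
        tr = outgoing[event]
        if not (tr.t_min <= dt <= tr.t_max):
            # Timing error: transition outside of the learned time bounds.
            errors.append(f"timing error: {event!r} after {dt}s, "
                          f"expected [{tr.t_min}, {tr.t_max}]s")
        state = tr.target
    return errors
```

A state remaining error would correspond to the elapsed time exceeding the largest t_max of all outgoing transitions of a non-final state before any event occurs.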
As mentioned before, discrete events are needed to learn an automaton. One way to obtain these from the SOM is to interpret each neuron of the SOM as a binary signal: a neuron is active (or true) when the observation is mapped to it and false otherwise. This mapping can result in a large number of signals, depending on the size of the SOM. This usually leads to a large number of states in the automaton. Also, some neurons might never be activated by the training data leading to an unknown state detection in the automaton. Again, this can lead to a large number of false positives when new data is mapped to neurons which were previously not active but are direct neighbors to neurons previously activated by the training data.
To counteract these effects and ultimately reduce the number of states we group the neurons into a smaller number of clusters. The transitions between the clusters are then learned using a timed automaton.
As mentioned in section 2, SOMs tend to keep neurons with similar signal weights close together, resulting in a topographic landscape of valleys, where the weights of neighbors are similar, and ridges, where they are not. This landscape can be visualized through the aforementioned u-matrix representation. The valleys, whose neurons have very similar weight vectors, can be interpreted as stationary process phases, while the ridges separating them mark transitions between the different feature spaces and represent transient process phases [3].
Clustering algorithms from the image processing domain, such as the watershed transformation [9], can be used on the u-matrix representation of a SOM to identify the clusters in a mathematical way. This works analogously to rain falling on top of the u-matrix: the water runs from the higher regions to the lower regions, flooding the basins. When the water level gets high enough that two basins would merge, a ridge forms which separates them. The watershed transformation dissects the u-matrix into different clusters, separated by so-called watershed lines. Watershed lines separate the different basins and do not belong to any of the clusters. The implementation used here is the Vincent-Soille watershed algorithm, which performs the watershed transformation in a non-recursive manner [15].
Subsequently, the samples of the training data are mapped to the SOM to get the corresponding cluster. The clusters are encoded using a one-hot encoding resulting in a binary vector with one element for each cluster. The value of the active cluster is set to true, while all other values are set to false. The time-stamps of the original samples and the binary vectors are then used to learn an automaton with the aforementioned BUTLA.
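A sketch of these clustering and encoding steps, assuming scikit-image's watershed implementation and the SOM weights from the sketches above:

```python
import numpy as np
from skimage.segmentation import watershed

def cluster_u_matrix(u):
    """Dissect the u-matrix into clusters; label 0 marks the watershed lines."""
    # 'Rain' flows into the basins (valleys) of the u-matrix landscape.
    return watershed(u, watershed_line=True)

def one_hot_clusters(samples, weights, labels_grid, n_clusters):
    """Map samples to their BMU's cluster and one-hot encode the active cluster."""
    labels = labels_grid.ravel()
    encoded = np.zeros((len(samples), n_clusters), dtype=bool)
    for i, o in enumerate(samples):
        bmu = np.argmin(np.linalg.norm(weights - o, axis=1))
        cluster = labels[bmu]
        if cluster > 0:  # skip watershed lines, which belong to no cluster
            encoded[i, cluster - 1] = True
    return encoded
```

The resulting binary vectors, together with the time stamps of the original samples, form the event stream passed to BUTLA.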
Experiments
In this section we apply the aforementioned approaches for anomaly detection and anomaly localization to one of our demonstrators. The Genesis demonstrator of the Institute Industrial IT sorts two different materials (conductive and non-conductive) from a magazine into their corresponding target locations (Figure 2). It is portable and uses an air tank to supply all the gripping and storage units. The 4 different modules can switch places and the program for the programmable logic controller (PLC) automatically adjusts for the change in location. A linear drive with a pneumatic gripper transports the materials between the different stations. Five real-valued signals are available from the demonstrator: current, position, speed, acceleration and force. Data samples were taken through an OPC connection with a resolution of 50 milliseconds for a total of 42 production cycles. The first 38 production cycles contain only normal behavior and were used to train the self-organizing map for both experiments shown in this section. Two of the 4 remaining cycles contain anomalous behavior and are used for the anomaly detection.
Quantization error anomaly detection and anomaly localization
A self-organizing map is trained over 100 epochs on the training data, using a 60x60 square, non-toroidal topology and the Euclidean distance measure. Its unified distance matrix representation can be seen in Figure 3. The training data contains only normal behavior; the anomaly detection is performed in an unsupervised way.
To estimate the threshold for the anomaly detection, τ was set to 1.0, which gives a value of ∼0.274 as the threshold, as shown in Figure 4. The observations of the evaluation data are then mapped to the SOM and the distance is calculated, as seen in Figure 5. An observation is marked as an anomaly when the distance is larger than the previously estimated threshold.
The final result of the anomaly detection is shown in Figure 6. Anomalies in this data set are labeled, which allows us to calculate the quality of the anomaly detection: in this example, anomalies were detected with an accuracy of 99.63% and an F1 score of 94.34%. Sensitivity was 100%, which means all anomalies were correctly identified as true positives. Detailed results are shown in Table 1. Figure 7 shows an excerpt of the localization of the anomalies detected above. Only the two most likely signals estimated to be the cause of the anomaly are shown. The localization results perfectly match the predictions made by the experts regarding the related signals of the system. The two anomalous cycles shown in Figure 6 contain the same anomalies: the first part of each anomaly, labeled '1' in the data set, is a jam in the linear drive resulting in a standstill and a higher than usual motor current. The second phase of the anomaly, labeled '2' in the data set, is the linear drive overcoming the jam, trying to correct its lag error. This results in a much higher than usual speed of the linear drive, which is also correctly identified by the anomaly localization.
Trajectory tracking with automata
Again, a SOM is trained on 38 production cycles containing only normal behavior. The SOM has a size of 60x60 neurons. The resulting u-matrix shown in Figure 8 is then dissected into 6 clusters by the watershed transformation in Figure 9. Subsequently, the samples of the training data are mapped to the SOM to get the corresponding cluster. The automaton which represents the trajectory across the different clusters during normal operation of the system is learned and shown in Figure 10. With the one-hot encoding, the cluster transitions are easy to read from the automaton's transitions: C0 = 1; C5 = 0; (5.25−10.36)(7.11s) describes a transition from cluster 5 to cluster 0 with a timing of 5.25-10.36 seconds after entering the state. The mean time for the transition was 7.11 seconds.
Encodings other than one-hot can be used so that fewer binary signals are needed to describe the clusters, but they might be harder to read and follow when looking at the automaton.
An example mapping for a single production cycle from the evaluation data is shown in Table 2. Not all states from the state machine can be found in the SOM, as the SOM uses only real-valued signals. Binary signals, such as the storage ejecting material and the gripper closing, are not known. The linear drive which provides the real-valued signals does not move during these operations, and therefore these internal states appear in the same cluster of the SOM. Figure 11 shows examples from the output of ANODA: first, at observation 482 a state remaining error occurs. During the production cycle it took more time to move the gripper to the storage position. This triggers a timing error when the transition finally occurs, and normal operation continues. Second, at the end of the data set it was detected that the demonstrator remained in its idle state longer than in the training cycles. Neither of these errors is detectable using the quantization error method presented in the previous section, because the SOM itself does not model time in an explicit manner. The automaton adds explicit modeling of time by modeling and tracking the trajectory of the SOM's working point.
Conclusion
This paper presented approaches to data-driven anomaly detection and localization in Cyber-Physical Production Systems. Data provided by the system is used to train a self-organizing map to represent the system's normal behavior.
The first option shown in this paper uses the quantization error to detect anomalies in the system's signal domain. Manual adjustment of the threshold above which an anomaly is detected is not easy, so we estimate the threshold from the data. When an anomaly is found, the SOM is used as a reverse model to compute the differences between the expected and actual value of each signal. The signals are sorted from largest to smallest deviation and the first n signals are provided as a starting point for experts to restore the system to normal working order.
This anomaly detection and localization can be applied to a wide variety of systems and produces good results across the board. However, time is not modeled and deviations in the timing cannot be found.
The second option shown in this paper adds modeling of time: discrete timed automata are used to learn the trajectory of the SOM's working point. The automaton keeps track of the timing between the different process phases. This approach detects anomalies in the system's timing and event sequences, neither of which can be detected by the SOM alone.
The second approach can be extended to use hybrid timed automata instead of discrete timed automata: a separate model is learned for each state of the automaton to model the different stationary process phases and detect anomalies within them [16]. Yet another approach could replace the discrete timed automata with variable order Markov models. The automaton only uses its current state and long term deviations from the trajectory might not be detectable, especially when the automaton contains cycles. In general, higher order Markov models also use a number of previous states to predict the following state and might be able to better deal with cycles and deviations which happen over a long period of time.
Personal resources in coping with stress among paramedics part 2
Published: 15 March 2018

The purpose of this project was to determine the relationships between the sense of coherence and paramedics' styles of coping with stress. Possessing such resources as a high sense of coherence or a task-oriented coping style does not imply triggering them when encountering a critical situation. However, if triggered, they become an important variable acting as an intermediary between stressful events and coping. Two concepts serve as a theoretical basis: R. Lazarus' transactional theory of stress and Antonovsky's salutogenic theory.
Introduction
The continuous increase in the pace of societal development brings with it an increase in sudden threats to health and life. This requires a modern health care system based on current knowledge in the field of emergency medicine, rising public awareness and the ability to take advantage of this system. Nowadays the priority is safety, important not only for individuals but also for the whole country. The effectiveness of health structures depends on a working prehospital emergency system, a hospital base of emergency medicine, hospital emergency departments and the employment of qualified medical personnel [16].
New threats change the world, and with it, medical rescue. Civilizational development is accompanied by dangers of natural origin, creating new kinds of threats. The literature distinguishes the main threats of the 21st century which can have a huge impact on safety and human health. These include population growth, environmental degradation, new diseases, hazardous materials, chemical weapons, nuclear threats, cultural conflicts and economic changes.
The nature of work of a medical lifeguard in medical rescue teams
The paramedic profession is relatively young, hence it does not always enjoy the prestige and respect it deserves. It appeared on the Polish labour market in the 1990s as a response to the need for professionally trained medical personnel prepared to provide assistance to patients in states of health emergency. The concept of prehospital care adopted in Poland assumed a departure from the profession of orderly, passing its role to the new profession of paramedic [1]. The first and only legal act that constitutes the paramedic profession in Poland is the Act of September 8, 2006 on the State Emergency Medical Services.
A rescue operation is teamwork; discipline and the ability to cooperate are required. The rescuer must be independent and able to make quick decisions, and to control the emotions that prevent quick and effective action. The work often involves rush, a race against time, physical and mental effort, driving with the siren on, or carrying patients down a spiral staircase. The nature of Medical Rescue Teams' work differs greatly between cities and rural areas, where access to the scene involves longer travel times. In such cases first aid provided by the witnesses of an incident is priceless. Sometimes another difficulty is finding the address: many properties are unmarked or their numbers are outdated, and sometimes additional pressure is exerted by the dispatcher's voice calling over the radio that the family is urging because of the deteriorating condition of the patient, or that the patient has stopped breathing. Another problem may be that buildings are scattered over a big area and the crew must ask locals for guidance, which can be difficult at night. There are often unfavourable terrain conditions, e.g. mud, deep snow or tree branches [15]. If the team needs the support of police or firefighters, the time is also extended. If the specialist (S) team needs to be present at the place of the accident, quick access can also be impossible, because there is only one specialist team in the region and it may currently be performing another job. In such conditions the rescuers have to manage on their own and deal with, e.g., several injured people in need of immediate help, or with an aggressive patient. The legislator allows two-person basic (P) Medical Rescue Teams, which seems to be a big misunderstanding in the case of rural areas. It is impossible for two rescuers to secure patient positioning on an orthopedic board, to conduct effective resuscitation, or to provide transport while concurrently performing medical activities and monitoring vital signs. In such situations it is a tough piece of work: working in twos means distributing the equipment and the patient's weight among a smaller number of people. These are only a few examples of the hard work of two-person teams in rural areas. This may result in more frequent injuries, staff overload or sick leaves [8].
The responsibility imposed on members of Medical Rescue Teams for the life and safety of other people may accumulate tensions and traumas that have a destructive impact on the professional and personal life of a rescuer [9]. Working in various weather conditions and in shifts, they are often at risk of aggression (psychological and physical hazards) as well as chemical and biological hazards. This makes their work stressful and damaging to health, while the earnings are low and do not compensate for the negative sides of the profession [4].
The psychology of work stress among members of medical rescue teams
The work of rescuers is inevitably associated with exposure to severe stressors, which can sometimes exceed their adaptive capacity. Such stressors are often found in occupations related to helping and saving people during traumatic events involving threat to life, gruesome scenes, death and extensive damage. Chronic stressors of medium intensity but permanent duration may also appear; examples are shift work, conflicts at work, the unpredictability of events during on-call duty, etc. [3].
To minimize the negative consequences of stress, one should possess the necessary knowledge about stress and how to cope with it. As noted previously, stress induces emotional symptoms (anxiety, fear, tension, nervousness, worry), somatic symptoms (muscle tension, increased heart rate, dry mouth) and psychological ones (seeking information or help to solve the problem). An acute stress reaction is associated with a significant decrease in efficiency, and the symptoms usually have a variable nature. The initial state involves bewilderment, narrowed awareness and attention, failure to comprehend incoming stimuli, and disorientation. The consequence may be withdrawal from the situation, or agitation and hyperactivity. These symptoms usually occur in the first minutes after the impact of the stress stimulus and disappear after a few hours or days [10].
A typical reaction to a stressful situation is a coping mechanism in the form of avoiding conversations about the event. Denial is an unconscious defense mechanism in the form of feeling as if nothing had happened. All this leads to anxiety relief, and through a process of 'working through' there is a gradual withdrawal of symptoms. It also happens that anxiety is alleviated using harmful methods, such as alcohol, drugs, transferring anger onto others, or regression, which manifests itself as childish helplessness towards stress [5].
Awareness and understanding of the mechanisms underlying the behavior and reactions of victims is essential for the effective performance of rescue operations. At the same time it should be remembered that the same mechanisms and reactions also apply to members of the Medical Rescue Teams.
Supporting a person under stress is based on relating to them, which helps them regain internal balance. One should remember to avoid rushing and impulsive actions. Patience and calm inspire trust and give a sense of security. Active listening and understanding of the emotional state help to resolve the issue.
The main mistakes which make it difficult to establish contact with a victim are: a sloping plane of contact, i.e. blocking the conscious experience of unpleasant emotions, called 'looking down' (despite everything, difficult emotions unconsciously evoke reactions in the form of tension, irritability, headache, nausea, diarrhoea, etc.); artificiality, a mask in the form of an artificial smile, meaningless consolation, or posing as an authority who knows everything best; and the attitude of a judge, which intensifies the feeling of guilt and escalates the victim's stress [11].
The correct treatment of a victim affected by stress is based on an individual assessment of the case. In order to do this, one should: ensure conditions as comfortable as possible; not impose oneself, but be available; enable the victim to express feelings; make it possible to contact the family; offer help in solving current problems; and be prepared for expressions of strong emotions [14].
Rescuers involved in difficult situations are exposed to stress reactions similar to those of victims and witnesses of the incident. In order to function effectively in such an environment, a rescuer should be properly trained in stress reactions, get to know their own emotions and relaxation methods, and participate in trainings to mitigate the consequences of job stress.
Stress as a natural phenomenon can be constructive and useful and can improve efficiency; such 'eustress' is 'good stress'. Excessive 'distress', however, interferes with the functioning of an individual. The symptoms of an excessive reaction tend to be difficulty concentrating, irritability, sleep disturbance, an unjustified feeling of guilt, loss of interests and appetite, or isolation. One must then take actions to mitigate the effects of stress in the form of more rest, physical exercise, changes in the organization of time, reducing drug use, relaxation, or asking for help from a psychotherapist or psychologist [17].
Unfortunately, sometimes post-traumatic stress disorder appears: a reaction to a traumatic situation that is delayed (by a few weeks) or prolonged over a month. It manifests itself by re-experiencing the trauma, which recurs in memories and nightmares, emotional dejection, rejection of interpersonal contacts, loss of the ability to experience pleasure, and avoidance of situations that may recall the injury. In addition, there is excessive vigilance and sensitivity to stimuli, sleep disorders, anxiety and depressive mood.
To prevent the occurrence of an acute stress response, and thus a reduction in the efficiency of the rescuer's activities, it is necessary to discuss his or her reactions to severe stress with an experienced person. Mastering ways of relaxing improves efficiency at work. Common methods are yoga, meditation, breathing trainings and visualization techniques [14].
Psychological trainings help release the negative consequences. The sessions are conducted in two forms: defusing, during which, for 10 to 30 minutes, the rescuer's thoughts and feelings related to the action are expressed; and debriefing, held 24 to 48 hours after extremely difficult rescue actions. The latter is a longer group discussion of experiences, with the purpose of regaining inner balance. It is carried out in several phases: the leading person collects the most important information from the rescuers; the rescuers introduce themselves and the principles of confidentiality and freedom are discussed; the various facts presented by participants are compared; time is given to express one's own feelings, thoughts and experiences; emotional reactions during the action are described; stressful situations are described; information about stress, reactions and ways of coping with stress is reviewed; and a summary and conclusions follow [16].
To sum up the above process, it should be noted that debriefing follows the concept of learning by experience. On the basis of experience and the analysis of events, theories of possible behavior are formed, which generate new experiences. These are given new meaning in the context of existing knowledge, and thus new solutions and skills are introduced [13].
A rescue action is a difficult situation, connected with a high emotional burden for both the victims and the aid workers. Teams work mostly in conditions of severe stress, experiencing many difficult situations in a variety of configurations. The human reaction to strong, negative stimuli is diverse and complex, biologically and socially conditioned [14].
Of key importance in difficult situations involving intense distress is the sense of coherence. People with a high sense of coherence are able to cope in the most difficult circumstances: they can deal with the stressor and with their own reaction. They believe in survival and create a positive outlook for the future [9]. They constantly improve their ability to use existing resistance resources, thus shaping mental toughness. The impact of traumatic events allows them to see positive changes in themselves. This is expressed in greater emotional maturity, richer life experience, an increased sense of strength, better coping with difficulties, confidence and competence. At the same time family relationships deepen, together with sensitivity and openness to others, as well as appreciation of life and a reevaluation of priorities [8].
The research methodology
The purpose of the undertaken research was to identify and describe the character of the relationships between the sense of coherence displayed by the employees of Medical Rescue Teams and the styles they prefer for coping with stress. The realization of the research purpose started from establishing the research problem: What is the sense of coherence and what are the styles of coping with stress among employees of Medical Rescue Teams?
Research methods
As indicators of the sense of coherence and its components, the results of Antonovsky's Life Orientation Questionnaire (SOC-29) were adopted, consisting of 29 items. The tool includes three scales: comprehensibility (11 items), resourcefulness (10 items) and meaningfulness (8 items) (Antonovsky, 2004). Studies of the original and Polish versions indicate that it is characterized by a satisfactory level of reliability and validity.
The indicators of the styles of coping with stress were the results of the CISS Questionnaire (Endler and Parker, 1999). The authors included in the questionnaire three scales corresponding to the distinguished categories of coping styles: the first style focuses on the task (16 items), the second is focused on emotions (16 items), and the third focuses on avoidance (16 items). The last one includes two subscales: engaging in substitute activities and seeking social contacts. The questionnaire consists of 48 items [7].
To describe the variables, descriptive statistics and distributions of variables were used. Differences were estimated using one-way analysis of variance (Fisher's F test). Relationships were tested using the Pearson correlation coefficient. Results with p < 0.05 were considered statistically significant. The analysis was performed using the statistical package STATISTICA 12.0.
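For illustration, the described tests can be reproduced with standard open tools; the following sketch uses scipy on placeholder data, since the study's raw data is not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data standing in for the study's scores (purely illustrative).
soc_scores = rng.normal(144, 12, 65)                          # global SOC scores
task_style = 0.3 * soc_scores + rng.normal(0, 5, 65)          # SFT scores

# Pearson correlation between the sense of coherence and the task-focused style.
r, p = stats.pearsonr(soc_scores, task_style)
print(f"r = {r:.2f}, p = {p:.4f}")  # significant if p < 0.05

# One-way analysis of variance (Fisher's F test) across the occupational groups.
doctors, paramedics, nurses = soc_scores[:15], soc_scores[15:42], soc_scores[42:]
f_stat, p_anova = stats.f_oneway(doctors, paramedics, nurses)
```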
The study was conducted in the period from November 2014 to February 2015 on a group of 65 members of Medical Rescue Teams at one of the Lower Silesian emergency stations. The survey was conducted among physicians, nurses and paramedics.
Research analysis
The analysis of the data shows that the age of the respondents ranged from 34 to 67, with an average of just over 38 (38.36). Secondary education dominated in the study group (52%); the remaining 48% are people with higher education. Most of the respondents are residents of small towns (48%), 36% come from cities, and the remaining 16% declare residency in rural areas. The study was conducted in three occupational groups: system doctors, paramedics and system nurses. The group of doctors comprised 15 people, representing 23% of all respondents; there were 27 rescuers (42%); and the group of nurses consisted of 23 people, that is 35%.
Assessment of the sense of coherence level and its components
The results of the applied SOC life orientation questionnaire yielded an average of 144.16, with scores ranging from a low of 116 to a high of 168. Based on the calculations for the particular scales, 11 people showed the highest score on the sense of resourcefulness (SOR) scale, whose average was 50.36. The sense of comprehensibility (SOCM) was slightly higher than the sense of purpose (SOP), with an average result of 47.2. The lowest average, 46.6, belonged to the sense of meaningfulness. I also noticed that for 4 respondents the results on the comprehensibility and meaningfulness scales were equal.
The average norms presented by Antonovsky fall within 130-160 points. Most of the studied employees (68%) fell within this average range. The two remaining groups departed from it: one had a low sense of coherence (12% of respondents), and the other was characterized by a high one (20%).
Results of the CISS questionnaire on coping with stress
Most people pointed to the style focused on the task (SFT), whose average was 63.64, with scores ranging from 51 to 74. The style focused on emotions (SFE) had an average score of 35.15 (range 19-54), and the style focused on avoiding (SFA) 40.08 (range 27-77). Consequently, most people chose the style focused on the task, and then the style focused on avoiding. 24 people placed themselves in SFT, and one person in SFA. It should also be noted that the style focused on avoiding is divided into two subscales, for which the following averages were obtained: engaging in substitute activities (ESA), 16.44, and seeking social contacts (SSC), 16.24. As can be seen, these two subscales are roughly balanced. The coping-with-stress questionnaire has an additional scale, called "others", whose average was the lowest and amounted to 7.32.
The second most frequently chosen style was the style focused on emotions, selected by 15 people. For the remaining 10 people the dominant style was the style focused on avoiding.
It was also noted that, regardless of education, this style was chosen most often.
Analyzing the obtained results, it can be concluded that the subjects do not differ in the case of the style focused on the task, but they do differ in the case of the style focused on emotions and in the case of the subscales of the style focused on avoiding, that is, substitute activities and social contacts.
The conducted analysis of correlations between the sense of coherence and the styles of coping with stress revealed statistically significant (p < 0.05) positive correlations between the style focused on the task and the sense of meaningfulness and the global sense of coherence. The results are presented in Table 3. The correlations ranged from 0.25 to 0.35, which means that the higher the sense of coherence of the examined person, the more frequently he or she used the task-focused style of coping with stress. According to Antonovsky's theory, people with a high sense of coherence tend to make efforts aimed at solving the problem, changing the situation, or cognitive transformation [2].
In the case of the style focused on emotions, however, strong negative correlations with the level of the sense of coherence were obtained, ranging from -0.51 to -0.64. The lower the sense of coherence, the more often the studied person applied the style focused on emotions. People with a low sense of coherence tend to focus on themselves and on their own emotional experiences, and often feel anger, guilt, or tension, or think wishfully [11].
The conducted analysis of styles of coping with stress shows that for almost all the respondents the dominant style is the one focused on the task, which is an optimal outcome, because it is the most desirable style (concentrated less on avoiding and even less on emotions). Occupation, age, and education had no bearing on this. Problem-oriented people can handle or remove the effects of stress. They undertake direct action, focusing on solving the problem. Such an attitude can be described as coherent and logical, taking into consideration the stressors referred to before. The tension resulting from a given situation does not turn into permanent and strong stress.
Interpretation of the overall results of the Life Orientation Questionnaire indicates that the staff of Medical Rescue Teams achieved the average level of the norms. Broken down into the particular scales, the sense of resourcefulness turned out to be the strongest, followed by the sense of comprehensibility and the sense of meaningfulness. Such a profile may suggest that individuals with a lower sense of meaningfulness do not have enough motivation to see an event as a challenge worth the commitment. However, no significant differences were noticed among these three scales; one could say that they lie at an approximately equal level. According to the author of the questionnaire, Aaron Antonovsky, people characterized by an equal level of the three scales show a stable type of experience [2]. Their pattern of perceiving the world is stable or, as Antonovsky calls it, coherent. In other words, the relationship between generalized resistance resources and coherence is consistent: the higher the resources, the higher the sense of coherence. Repeatability of life experiences characterized by consistency creates a balance between underload and overload, and thus develops the sense of coherence. The respondents showed confidence in having enough resources to cope with difficulties.
The choice of such a strategy has a positive effect on physiological reactions, helping to keep health in good condition.
People with such a sense of coherence treat stressors not as a threat but as a challenge that can be faced. Those with a low sense of coherence, by contrast, are focused on negative emotions and take actions harmful to their health. Focusing on oneself and on negative experiences causes tension to persist and become entrenched, which leads to stress.
The results gathered in the survey made it possible to examine the relation between coherence and the styles of coping. The project did not cover the other elements of the stressful transaction, but it can serve as a framework for further research among the employees of Medical Rescue Teams.
Conclusions:
- among almost all members of Medical Rescue Teams the dominant style of coping was the one focused on the task; this result is optimal and desirable given the nature of the work;
- education, place of residence, profession, and age in the study group have no connection with the styles of coping;
- members of Medical Rescue Teams fall within the medium norms of the sense of coherence and are characterized by a stable type of life experience oriented toward the task-focused style.
Conclusions
The work of members of Medical Rescue Teams is, like that of hardly any other professional group, prone to stress. An inherent aspect of this work is contact with traumatic events affecting people. Rescuers witness pain, despair, mutilation, and death. Social expectations towards them are clear: they have to be brave, strong, and resilient. During a rescue action, the Medical Rescue Team focuses on the most important tasks. Responsibility for the life and safety of other people can accumulate resentment and tensions that have a destructive impact on their professional and personal life. It should also be realized that a rescuer's work involves the risk of identifying with the victim waiting for help; the rescuer then feels an enormous commitment. If the outcome of a rescue operation is not successful, feelings of failure and guilt appear, regardless of the effort and commitment involved in the rescue. Then the only thing left is to cope with one's own emotions. Rescuers characterized by this emotional attitude experience more ethical dilemmas in their work.
There are many books, publications, and studies on stress and ways to cope with it, but there is no universally effective method that could reduce the resulting tensions. This is due to biology and the individual psyche, which is a subjective matter. Therefore, every person must develop their own methods of stress reduction: methods that are feasible and satisfying, and that give mental and physical relief. First of all, one must realize that we need to reconcile ourselves to the things we have no influence on, while doing what is within our capabilities. What I mean here is at least caring for one's psychophysical condition in the form of physical activity, diet, avoiding stimulants, etc. However, to be able to help oneself effectively, at least basic knowledge of the biology and psychology of stress is needed. Sometimes even the best methods cannot replace the greatest resource, which is support from family, relatives, or friends.
One's view of stress depends on personal and social resources. Coping with stress is combined with the involvement of positive emotions. Each of us is looking for satisfaction and pleasure in life. However, how a person perceives difficult situations largely depends on how efficiently they cope with stress. Everyone has the psychological capacity to perceive and analyze difficult situations positively.
Modifying negative beliefs, reaching for the resources that lie within us, changing one's lifestyle, and practicing self-care and self-respect can facilitate effective coping with stress.
STAT3 deficiency prevents hepatocarcinogenesis and promotes biliary proliferation in thioacetamide-induced liver injury
AIM To elucidate the role of STAT3 in hepatocarcinogenesis and biliary ductular proliferation following chronic liver injury. METHODS We investigated thioacetamide (TAA)-induced liver injury, compensatory hepatocyte proliferation, and hepatocellular carcinoma (HCC) development in hepatic STAT3-deficient mice. In addition, we evaluated TAA-induced biliary ductular proliferation and analyzed the activation of sex determining region Y-box9 (SOX9) and Yes-associated protein (YAP), which regulate the transdifferentiation of hepatocytes to cholangiocytes. RESULTS Both compensatory hepatocyte proliferation and HCC formation were significantly decreased in hepatic STAT3-deficient mice as compared with control mice. STAT3 deficiency resulted in augmentation of hepatic necrosis and fibrosis. On the other hand, biliary ductular proliferation increased in hepatic STAT3-deficient livers as compared with control livers. SOX9 and YAP were upregulated in hepatic STAT3-deficient hepatocytes. CONCLUSION STAT3 may regulate hepatocyte proliferation as well as transdifferentiation into cholangiocytes and serve as a therapeutic target for HCC inhibition and biliary regeneration.
INTRODUCTION
Interleukin (IL)-6 family cytokines are indispensable to liver regeneration. The Janus kinase/signal transducer and activator of transcription (JAK/STAT3) pathway is thought to play the central role in signal transduction mediated by the IL-6 family. Several target genes of STAT3 have been identified; these include cyclin D, c-myc, bcl-xL, and mcl-1, which are essential for cell proliferation and survival. In murine models of liver injury, ablation of STAT3 in the liver results in enhanced liver injury and reduced liver regeneration [1]. These findings suggest protective and proliferative roles for STAT3 in liver regeneration. In addition, STAT3 has been implicated in cellular differentiation, and its activation is associated with differentiation of T cells and macrophages [2]. Studies have demonstrated that STAT3 is essential for the maintenance of pluripotency of embryonic stem (ES) cells [3,4] and self-renewal of tumor-initiating cells in the liver [5]. However, the differentiative role of STAT3 in liver regeneration remains unclear.
Both cholangiocytes and hepatocytes are derived from hepatoblasts generated in the foregut endoderm during liver development. Bipotential hepatobiliary stem/progenitor cells, such as small hepatocytes and oval cells, have been shown to exist in the adult liver and are thought to contribute to liver regeneration [6]. On the other hand, previous reports have revealed direct transdifferentiation between hepatocytes and cholangiocytes in chronic liver damage [7,8]. Cholangiocytes transdifferentiate into hepatocytes and replenish massive hepatocyte loss [9]. Conversely, cholangiocytes were shown to be generated from transdifferentiating hepatocytes during biliary injury [10]. These findings suggest that mature cholangiocytes and hepatocytes play complementary roles in promoting liver regeneration through transdifferentiation. Sex determining region Y-box9 (SOX9), required for the normal differentiation of the biliary tract, is primarily expressed in mature cholangiocytes. However, weak expression of SOX9 was also observed in a small population of hepatocytes, which may have the potential to transdifferentiate into cholangiocytes [11]. Moreover, cellular fate in the liver is regulated by Yes-associated protein (YAP), an important effector of the Hippo pathway [12,13]. YAP depletion in the liver is characterized by cholangiocyte proliferation after biliary obstruction [14]. In contrast, YAP activation in hepatocytes results in their transdifferentiation into cholangiocytes, leading to overgrowth of atypical ductal cells, a process termed atypical ductular reaction [12]. By interacting with the TEAD transcription factor, YAP induces the expression of target genes, including Notch2 [15]. NOTCH2 signaling, in turn, induces the expression of its target gene SOX9 [12]. Therefore, the YAP/SOX9 axis promotes transdifferentiation of hepatocytes into cholangiocytes. However, the mechanism underlying the regulation of YAP/SOX9 axis activation in hepatocytes needs to be elucidated.
To investigate whether STAT3 is implicated in hepatocyte proliferation and differentiation in liver injury, we evaluated thioacetamide (TAA)-induced hepatocarcinogenesis and hepatocyte transdifferentiation in mice with STAT3-deficient livers. STAT3-deficient liver showed profound ductular proliferation with upregulated YAP/SOX9 expression in hepatocytes. Our study indicates that STAT3 regulates transdifferentiation of hepatocytes into biliary cells.
Mice
Mice were housed in cages measuring 218 mm × 320 mm × 133 mm, containing recycled paper bedding and enrichment that stimulates natural behaviors, and had free access to water and a standard rodent diet during the experimental period. All mice were maintained under specific pathogen-free conditions and were monitored every other day for signs of abnormality (e.g., marked weight loss or behavioral change); no mice became ill or died prior to the experimental end point. All mouse experiments were conducted in strict accordance with the NIH Guidelines for the Care and Use of Laboratory Animals and were approved by the Kurume University Institutional Animal Care and Use Committee. Albumin-Cre (Alb-Cre) mice on a C57BL/6 background expressing Cre recombinase under the control of the mouse albumin gene regulatory region and STAT3 flox/flox mice on a C57BL/6 background were prepared as previously described [16].
Quantitative real-time PCR
Total RNA was isolated using the TRIzol Reagent (Invitrogen, Carlsbad, CA, United States), and reverse transcription was performed using SuperScript III (Invitrogen). Real-time PCR was carried out using SYBR Green (Life Technologies) and TaqMan Assays (Life Technologies).
Hepatic STAT3 deficiency accelerated TAA-induced biliary duct and ductular structure formation
Hepatic necroinflammatory change induces an atypical ductular reaction, which is characterized by proliferation of ductular cells. TAA treatment caused biliary duct/ductular structure formation, especially around the periportal area (Figure 3A). To confirm the characteristics of bile duct/ductular cells, we performed immunohistochemical staining for KRT19, a marker of mature cholangiocytes. Most duct/ductular cells were immunoreactive for KRT19 (Figure 3B). The number of bile duct/ductular structures around the periportal area was higher in STAT3Δhep mice than in control mice (Figure 3C, left). Bile duct/ductular formations in the centrilobular area were rarely observed in control mice but were significantly increased in STAT3Δhep mice (Figure 3C, right). KRT19-positive bile ducts/ductular structures were significantly expanded in both the periportal and centrilobular areas in STAT3Δhep mice (Figure 3C and D).
STAT3-deficient hepatocytes showed upregulated SOX9 expression following TAA-induced liver injury
The formation of ducts/ductular structures has been thought to be attributable both to proliferation of mature cholangiocytes and to biliary transdifferentiation of hepatocytes. To examine the origin of the bile ducts/ductular structures, we next performed immunohistochemical staining for SOX9, because SOX9 is expressed in cholangiocytes and bipotential hepatobiliary cells [11]. As shown in Figure 4A, the bile ducts/ductular structures were frequently composed of SOX9-positive cells. The number of SOX9-positive bile ducts/ductules was higher in STAT3Δhep mice than in control mice (Figure 4B). Furthermore, the immunoreactive intensity for SOX9 was increased in the bile duct/ductular cells of STAT3Δhep mice compared to control mice. TAA treatment upregulated SOX9 expression in periportal hepatocytes. Interestingly, SOX9 expression was also augmented in STAT3-deficient hepatocytes (Figure 4C).
Statistical analysis
Statistical significance was assessed using the Mann-Whitney U-test. P < 0.05 was considered statistically significant. The data were presented as mean ± SE.
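As an illustration only, the sketch below shows how the test described above can be run in Python with scipy; the group values are hypothetical placeholders, not data from this study.

```python
# Illustrative sketch: a two-sided Mann-Whitney U test, as described above,
# with results reported as mean ± SE. Values are hypothetical placeholders.
import numpy as np
from scipy import stats

control = np.array([12.1, 9.8, 14.3, 11.0, 10.5])  # e.g., PCNA+ cells per field
stat3_ko = np.array([5.2, 6.9, 4.8, 7.4, 6.1])     # STAT3-deficient group

u_stat, p_value = stats.mannwhitneyu(control, stat3_ko, alternative="two-sided")

print(f"U = {u_stat}, p = {p_value:.3f}  (significant if p < 0.05)")
print(f"control: {control.mean():.1f} ± {stats.sem(control):.1f} (mean ± SE)")
```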
STAT3Δhep and control mice were given water with TAA.
Metabolites of TAA, which covalently bind to proteins and lipids, caused hepatocyte necrosis and periportal infiltration of inflammatory cells. As shown in Figure 1B, TAA-induced inflammatory infiltrates were augmented in STAT3Δhep mice compared to control mice. Moreover, TAA-induced necrotic changes of hepatocytes were also enhanced in STAT3Δhep mice (Figure 1B and C), indicating that hepatic STAT3 deficiency promoted TAA-induced hepatocyte injury. As a consequence of the enhanced inflammatory change, STAT3Δhep mice exhibited significantly more severe liver fibrosis than control mice (Figure 1D and E).
Hepatic STAT3 deficiency suppressed hepatocyte proliferation and HCC development
To elucidate the proliferative role of STAT3, we next examined whether TAA-induced hepatocyte regeneration was attenuated in STAT3Δhep mice. Immunohistochemical staining for proliferating cell nuclear antigen (PCNA) showed compensatory proliferation of hepatocytes in TAA-induced liver injury (Figure 2A). PCNA-positive hepatocytes were significantly decreased in STAT3Δhep mice (Figure 2A and B), indicative of the proliferative role of hepatic STAT3 in TAA-induced liver injury. Compensatory proliferation triggered by hepatocyte loss leads not only to liver regeneration but also to the development of HCC. Therefore, we next examined whether TAA-induced HCC development was inhibited in STAT3Δhep mice. Thirty weeks of TAA treatment produced liver tumors that were histologically consistent with well-differentiated HCC, with no evident cholangiocellular carcinoma (Figure 2C). Liver tumor formation in STAT3Δhep mice was significantly lower than in control mice (Figure 2D). These findings suggested that STAT3 was required, at least in part, for the compensatory hepatocyte proliferation leading to HCC development.
YAP was activated in STAT3-deficient hepatocytes following TAA-induced liver injury
We next performed quantitative RT-PCR analysis to determine whether SOX9 expression was regulated at the transcriptional level. TAA treatment upregulated SOX9 mRNA expression, which was significantly increased in the liver of STAT3Δhep mice compared to control (TAA 16 wk) (Figure 5A). Immunoblot analysis confirmed higher levels of SOX9 expression in the liver of STAT3Δhep mice (Figure 5B). In accordance with the marked bile duct/ductular cell proliferation, the liver of STAT3Δhep mice showed robust upregulation of KRT19 expression at the transcriptional and protein level (Figure 5A). On the other hand, AFP expression in the liver of STAT3Δhep mice was decreased compared to control mice (Figure 5A and B). Whereas YAP expression was restricted in normal liver, TAA treatment resulted in upregulation of YAP expression.
YAP mRNA expression in the liver of STAT3Δhep mice was slightly higher than in control mice. However, YAP protein expression was markedly increased in the liver of STAT3Δhep mice. It was previously shown that Src-family kinase-induced YAP tyrosine (Y357) phosphorylation leads to its stabilization and nuclear localization, activating its transcriptional property [17].
DISCUSSION
In this study, we showed that STAT3-deficient liver displayed reduced HCC development and promoted biliary ductular proliferation during TAA-induced liver injury. STAT3-deficient hepatocytes exhibited YAP activation and upregulation of SOX9 expression, indicating transdifferentiation of hepatocytes into cholangiocytes. This study is the first report that STAT3 is implicated in both proliferation and differentiation of hepatocytes after liver injury. IL-6 family cytokines are clearly essential for liver repair. IL-6 deficiency impairs liver regeneration and causes liver failure characterized by blunted DNA synthesis in hepatocytes but not in nonparenchymal cells [18]. The role of the JAK/STAT3 pathway has been extensively evaluated and is thought to be essential for IL-6-mediated liver repair, because STAT3 regulates many genes associated with cell survival and proliferation. During liver regeneration after partial hepatectomy, hepatocyte STAT3 activation was observed in the periportal area [19], which might harbor a putative stem cell niche [20]. Recently, periportal hepatocytes were shown to have an extensive proliferative capacity for liver repair [11]. These reports suggested that STAT3 activation in the periportal area is crucial for liver regeneration. Indeed, TAA-induced liver damage was augmented in liver-specific STAT3-deficient mice compared to control mice. However, hepatic STAT3 deficiency did not cause lethal liver injury after TAA treatment. Consistently, previous reports showed that hepatic STAT3 deficiency, unlike IL-6 deficiency, caused only a modest reduction in liver regeneration with no liver failure after partial hepatectomy [1]. These findings suggest that hepatic STAT3 accounts for only part of the IL-6-mediated liver repair capacity. Because IL-6 family cytokines activate the ERK/MAPK and PI3K/Akt pathways as well as the JAK/STAT3 pathway through the receptor gp130, these pathways may compensate for the loss of proliferation in STAT3-deficient hepatocytes. YAP has recently been shown to regulate cell proliferation in several organs, including the liver [21]. By binding to the TEAD transcription factor in the nucleus, YAP activates the expression of multiple genes, including CTGF, CCND1, and BCL2L1, responsible for cell proliferation, anti-apoptosis, and survival. In the Hippo/LATS pathway, YAP serine (S127) phosphorylation results in cytoplasmic sequestration and inhibition of its transcriptional coactivator activity [22,23]. In contrast, Src-family kinase-induced YAP tyrosine (Y357) phosphorylation leads to its stabilization and nuclear localization, activating its transcriptional property [17]. A recent report revealed that IL-6-mediated YAP tyrosine (Y357) phosphorylation via a gp130-Src family kinase module promotes intestinal epithelial proliferation in vivo [24]. Therefore, YAP and STAT3 might cooperatively promote hepatocyte proliferation and survival in IL-6-mediated liver regeneration. Compensatory YAP activation, probably due to IL-6 upregulation, might restore the proliferation and survival of STAT3-deficient hepatocytes. Recently, IL-22 has also been shown to be critical for liver regeneration [25]. Although IL-22 does not interact with gp130, the IL-22 receptor binds SHP2, leading to activation of both Src family kinases and STAT3 [26]. YAP and STAT3 might thus be involved not only in IL-6-mediated but also in IL-22-mediated liver regeneration.
In addition to its role in cell proliferation and survival, YAP sustains the undifferentiated state and pluripotency by regulating the expression of stemness-associated genes such as Oct4, Sox2, and Nanog [13]. YAP is overexpressed in cultured ES cells and may be required for self-renewal and suppression of differentiation [27]. YAP activation was shown to dedifferentiate mature hepatocytes into hepatobiliary progenitor cells [12]. SOX9-positive periportal hepatocytes display a high regenerative capacity and were shown to self-renew and transdifferentiate into cholangiocytes [11]. It is interesting that SOX9 expression is upregulated by YAP activation through the NOTCH pathway [12]. Therefore, activation of the YAP/SOX9 axis is critical for hepatocyte dedifferentiation to accomplish liver repair. However, the regulation of YAP/SOX9 axis activation in hepatocytes is still unknown. In this study, we found that STAT3 deficiency enhanced YAP tyrosine (Y357) phosphorylation and SOX9 expression through Src activation during TAA-induced liver injury. These findings suggest that STAT3 may inhibit dedifferentiation and transdifferentiation of hepatocytes by preventing activation of the YAP/SOX9 axis.
Demetris et al [28] previously reported that KRT19-positive/AFP-negative hepatocytes, called ductular hepatocytes, appeared to exhibit a ductular reaction in submassive liver necrosis. The ductular hepatocytes were thought to contribute to biliary repair through transdifferentiation into cholangiocytes. As AFP synthesis and secretion derive from differentiated hepatocytes in liver injury, TAA treatment upregulated AFP expression, reflecting hepatocyte repopulation. The TAA-induced AFP upregulation was significantly inhibited in hepatic STAT3-deficient mice, whereas the expression of KRT19 was upregulated compared to control mice. Interestingly, AFP expression is suppressed by YAP in undifferentiated ES cells, and YAP inhibition results in AFP expression with differentiation of ES cells [27]. These findings suggest that YAP activation in STAT3-deficient hepatocytes might direct their transdifferentiation into cholangiocytes instead of their dedifferentiation into progenitor cells. An important question is whether YAP activation in STAT3-deficient hepatocytes contributes to the development of liver cancer, as YAP has been found to be involved in cholangiocarcinoma [29]. However, hepatic STAT3-deficient mice did not develop cholangiocarcinoma, suggesting that STAT3 deficiency-induced YAP activation is insufficient for oncogenic transformation, at least in the model of TAA-induced liver injury. In addition, hepatic STAT3 depletion promoted fibrotic regeneration in the TAA-induced liver injury model. A recent report showed that YAP expression was positively correlated with liver fibrosis in non-alcoholic steatohepatitis [30]. Because YAP has also been found to promote epithelial-mesenchymal transition [31], the possibility is not excluded that YAP activation might promote transdifferentiation of hepatocytes into myofibroblasts, leading to liver fibrosis.
In conclusion, STAT3 is not only involved in HCC development but also prevents biliary ductular formation in TAA-induced liver injury. Our study highlights hepatic STAT3 as a plausible target for biliary repair during liver injury.
ACKNOWLEDGMENTS
We thank Yasuko Imamura and Masako Hayakawa for excellent technical help, and Taeko Narisawa for secretarial assistance.
Background
The JAK/STAT3 pathway plays the central role in signal transduction mediated by IL-6 family cytokines.
Incentive Effects in Tournaments with Heterogeneous Competitors – an Analysis of the Olympic Rowing Regatta in Sydney 2000 **
A large part of the theoretical tournament literature argues that rank-order tournaments only unfold their incentive effects if the contestants all have similar prospects of winning. In heterogeneous fields, the outcome of the tournament is relatively clear and the contestants reduce their effort. However, empirical evidence for this so-called contamination hypothesis is sparse. An analysis of 442 performances at the Olympic Rowing Regatta in Sydney 2000 gives evidence that oarsmen spare effort in heterogeneous heats. This implies that competition among personnel with heterogeneous skill levels does not bring about the intended effort levels. However, a separate subgroup analysis shows that only the tournament favourites hold back effort, whereas underdogs bring out their best when competing against dominant rivals. A heterogeneous tournament could then be enriched by absolute performance standards to increase the incentives of the favourites.
Introduction
As a heterogeneity measure we use the ordinal variable tournament stage, i.e. heat, repechage, semi-final, final. The analysis shows that with progression in the tournament, i.e. decreasing competitor heterogeneity, the oarsmen row significantly faster times. This clearly confirms that heterogeneous line-ups have smaller incentive effects than close competition. Therefore, principals in internal labour markets should strive for homogeneous competitor fields when setting up internal rank-order tournaments.
Furthermore, we present the first field-data analysis of differences between the efforts shown by favourites and by underdogs. So far, this has only been studied in experiments with students (e.g. Schotter/Weigelt 1992). The analysis of the single sculling events in Sydney 2000 shows that only favourites hold back effort, whereas underdogs predominantly row sports-physiologically optimal race strategies. As a result, firms organising heterogeneous tournaments have to find ways to restore incentives for the favourites. One alternative would be to handicap favourites to make competition more even (e.g. Meyer 1991). Handicaps, however, entail serious problems. They may, for instance, be at odds with labour-law regulations forbidding worker discrimination. Presumably, a better way to keep favourites' incentives high is to enrich the tournament with absolute performance standards (Clark/Riis 2001). To be more concrete, the size of the winner prize may depend on the winner's absolute performance (i.e. on whether the winner's performance is above some standard). Then, favourites have an incentive to put forth effort even if they are far ahead of their competitors, since slacking off comes at the risk of not meeting the performance standard.
In our interpretation, for underdogs the mere participation in a tournament of prime importance (here the Olympic Games) already has a very high incentive effect. This implies that in internal labour markets the organiser has to point out the relevance of the tournament; not surprisingly, management attention is expected to be a key motivational factor. Furthermore, considering the specific incentive of participating in Olympic Games, rank-order tournaments will only unfold their positive incentive effect if they are not carried out too often. If a homogeneous competitor field cannot be achieved, other incentive schemes are more likely to affect employees. In a heterogeneous internal labour market, rank-order tournaments should only be held if a minimum incentive through selection for participation in the tournament is guaranteed.
The remainder of the paper is organised as follows. In the next section we introduce our empirical setting, the operationalisation of the contamination hypothesis, and the available data. Section 3 presents the estimation models and empirical results. In section 4 we analyse separately the subgroups of favourites and underdogs. The paper ends with a discussion of the results and an outlook on future work.
Hypotheses and empirical setting
The following analysis focuses on the effort competing rowing teams show depending on the heterogeneity of the field. As a heterogeneity measure we use the achieved tournament stage. Because of the regional qualification mode of the Olympic Games, the fitness and skill levels among the contenders vary between multiple world champions and starters who would not qualify for a national final in a strong rowing nation such as Great Britain. Similar to other sporting contests, in the first round (heats) medal contenders compete against underdogs in heterogeneous fields. In each following tournament stage the line-ups are selected by the results of the preceding stage. The aim of this regulation is to form homogeneous line-ups for the final tournament stage (Olympic rowing is a full-rank tournament with finals A, B, C, and D), with the final A consisting of the best six teams.
The effort of the rowing teams is measured by the end time over the Olympic 2,000 m distance. Rowing times are strongly affected by weather conditions and, to a lesser extent, by water temperature and water depth. Therefore, the FISA does not recognise world records, but only world best times rowed on courses that fulfil the FISA requirements. Most of these best times have been achieved with a strong tailwind, warm water, and deep water. Therefore, rowing experts only discuss absolute times in the context of the local conditions. However, for the Olympic Rowing Regatta in Sydney 2000 the weather conditions have been documented by the Australian Institute for Sports as favourable and stable over all days of the competition (Kleshnev 2001). This allows us to specify the contamination hypothesis for Olympic rowing as follows: Hypothesis 1: Rowing teams row faster times with every progression in the tournament.
For a team that has qualified for the final, this hypothesis implies that the time in the final is faster than the time in the semi-final, and the time in the semi-final is faster than the time in the heat. The data used in this study has been compiled from the results and athletes databases hosted by the FISA (www.fisa.org). In order to avoid a distortion by inferior contenders from non-rowing nations, we focus on crews that finished in the top 12 ranks, having rowed in the final A or final B. The information analysed here comprises biographical data on 317 male and 183 female athletes from 44 nations. Race information covers the results of 173 teams (103 male, 70 female) competing in 14 different events, rowing in 6 different boat types (single sculls, double sculls, quadruple sculls, pair, four, and eight). Performance was measured by the finishing times over the 2,000 m rowing course, and split times for each of the four 500 m quarters.
Because of the specific tournament structure (full rank-order tournament), data is available for each team at different tournament stages (heat, repechage or semi-final, final). Hence, the data can be ordered in the form of a "balanced panel", where the unit of analysis is the progression level of the different teams. Note in this respect that in some events heat winners qualified directly for the final, i.e. not all teams had to row semi-finals. All in all, this results in a total number of cases of N = 442 (173 teams, each rowing 2 or 3 tournament stages). For each race we know the respective end time, split times, and tournament stage. This allows a direct test of the hypothesis derived above: teams row faster finishing times with their advancement in the tournament. Furthermore, the panel character of the data allows the use of estimation methods accounting for unknown team-specific variables (e.g. boat quality, team coordination, physical fitness, etc.) that may affect the dependent variable (Kahn 1993).
In addition to the tournament stage (HET) we control for further covariates that may explain the variance of our endogenous variable (finishing times). Known to be of importance in endurance sports are variables that describe the physical strengths of the athletes. As a first approximation, we use a team's average age (AGE) and its average race experience (EXP) as indicators. Race experience is measured by the number of years between an athlete's first participation in a world championship, a world-cup regatta, or Olympic Games, and the Sydney 2000 regatta. Therefore, an athlete who never competed at one of these international regattas before the Sydney Games is coded as inexperienced (=0). The positive effect of experience may be reduced by an aging component (Fair 1994; Maxcy 1997; Hübl/Swieter 2002). Therefore, we additionally include a squared experience term (EXP_2) in the estimation model. Probably the best estimate of team quality is the rank achieved at the preceding 1999 world championship (WM99); this term is also included in the estimation model. Since a better rank at the 1999 world championship indicates a stronger team, we expect some "path dependence" and hence a positive effect on the finishing times.
One drawback of our database might be that it comprises pooled information on boat categories and athlete sex. Both variables are, however, expected to have an effect on the dependent variable. For example, given the coordination necessary in crew boats, we expect quicker and more easily observable changes in racing strategy in the single sculls events. Similarly, because of the comparable physiological capacity of the athletes, we expect smaller differences in speed in the lightweight categories. Therefore, we include categorical variables (SEX, BOAT, LW) in the estimation model to control for these important effects.
Last but not least, end time is primarily determined by the number of oarsmen in the boat; eights are faster than singles. Originating from calculations in the former German Democratic Republic, the sport of rowing has a long tradition of accounting for these speed differences when comparing relative times across events. The absolute end time is set in relation to a reference time, the so-called "gold standard". Rowing coaches calculate these "gold standards" as extrapolations from preceding world championships and world-cup regattas (Kleshnev 2001; Teti/Nolte 2005). It is called the "gold standard" because it is the end time expected to win the gold medal at the next Olympic Games.[1] Gold standards allow boats of different categories to be compared. If funding is not available for all categories, this is important for selecting boats for international competitions like the Olympic Games. Therefore, the subsequently used variable relative end time (REL_ENDTIME) is defined as follows (A_ENDTIME denoting absolute end time):

REL_ENDTIME = A_ENDTIME / gold standard in the respective event

This standardisation allows a direct comparison of the dependent variable across events. Table 1 shows descriptive statistics for all variables introduced above. Comparing the mean values already indicates that end times improve with progression in the tournament.[2] Although the fastest mean times were rowed in the semi-finals, the difference between heats and finals (8.06 sec.) is statistically significant at the 5%-level (p = 0.047).

[1] On inquiry, the German Rowing Association stated that the teams for the Beijing 2008 Games were selected using the times published in Kleshnev (2001). Only for two events was the "gold standard" adjusted to account for speed developments since Sydney 2000.
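As a purely illustrative sketch (not taken from the paper), the normalisation can be written in a few lines of Python; the gold-standard values below are hypothetical placeholders, not the FISA figures.

```python
# Illustrative sketch: REL_ENDTIME = absolute end time / event gold standard.
# The gold-standard seconds below are assumed values for demonstration only.
GOLD_STANDARD = {"M1x": 395.0, "W1x": 432.0, "M8+": 325.0}  # seconds per event

def relative_end_time(event: str, end_time_s: float) -> float:
    """Relative end time: values above 1.0 are slower than the gold standard."""
    return end_time_s / GOLD_STANDARD[event]

print(round(relative_end_time("M1x", 403.2), 3))  # e.g. 1.021
```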
Estimation methods and empirical results
As discussed before, the data can be arranged as a balanced panel based on the tournament level. Therefore, the estimation is carried out using a random-effects linear regression model that controls for unobservable team-specific effects. The decision whether to use a random-effects or a conventional OLS model was taken based on the Breusch-Pagan Lagrange Multiplier test (Breusch/Pagan 1979). Given that almost all of the independent variables are time-invariant (no within variance), it is not possible to apply an alternative fixed-effects specification, since all of them would be automatically dropped during the estimation process (Frick et al. 2009). This decision is confirmed by a significant (χ² = 64.08***) Lagrange Multiplier test (OLS vs. random effects). Furthermore, a random-effects model also accounts for potential individual effects resulting from a variety of other non-observable and random variables (Matyas/Sevestre 1996: 94).
Hence, the estimation model takes the following form:

REL_ENDTIME_ij = β0 + β1 EXP + β2 EXP_2 + β3 AGE + β4 AGE_2 + β5 WM99 + β6 LW + β7 HET + β8 HET_2 + β9 BOOT + ε_ij

[2] We only consider teams that have qualified for the final A or final B. Therefore, the quicker times in the finals cannot be the result of slower teams being excluded from the tournament.

[3] BOOT is a vector of six different boat types. Boat category 1 is 1x = single sculls; category 2 is 2x = double sculls; category 3 is 4x = quadruple sculls; category 4 is 2- = pair; category 5 is 4- = coxless four; and category 6 is 8+ = eight with coxswain.

Table 2 shows the estimation results of four different specifications; they vary in the estimation model used and in whether the dependent variable is absolute or standardised. Model 2, presenting the random-effects (RE) estimation for relative end time, is the preferred version; these results are the basis of the subsequent discussion. The other three model specifications give evidence of the robustness of our findings. All independent variables have the expected effect on end times; all coefficients possess the expected sign and lie within the statistical confidence intervals. The explained variance in absolute end time is higher than 95%; this is in accordance with findings from other endurance sports analysing end times (Frick/Klaeren 1997). On the other hand, this result should be interpreted with caution: 58% of the end-time variance is explained solely by the number of rowers in the boat (variables BOAT). Controlling for the categorical variables sex and lightweight, the eight is faster than all other boat types; singles and pairs are the slowest boats. In other words: more rowers make the boat faster. This dominant effect may bias or cover up the hypothesised effect of a heterogeneous competitor field. Therefore, standardising for boat types by gold standards, as is common in rowing, is a useful measure for our investigation. This is confirmed by the results of Model 2; the adjusted R² decreases by more than 50% (adj. R² = 0.34), but all coefficients keep the expected algebraic sign. Our analysis focuses on the hypothesised effect of heterogeneity (HET). Model 2 shows a significant negative coefficient; this indicates faster times in later, more homogeneous stages of the regatta. Statistically, the positive sign of the squared heterogeneity term (HET_2) counteracts this effect. However, this can be explained by the exhaustion of all athletes at the end of the tournament (Prinz 2008). Overall, our results confirm the contamination hypothesis prominent in the tournament literature: in heterogeneous competition, the available prize mechanisms do not have the same incentive effect on participants as in homogeneous competition. On average, contestants hold back effort in tournaments with heterogeneous line-ups.
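To make the specification concrete, here is a minimal sketch, assuming a hypothetical data file and column names, of how such a random-effects (random-intercept) model can be estimated in Python with statsmodels; the paper itself does not publish code, so this is an illustration, not the authors' procedure.

```python
# Illustrative sketch: random-intercept estimation of the model above.
# The file name, "team_id", and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sydney2000_races.csv")  # one row per team x tournament stage

model = smf.mixedlm(
    "REL_ENDTIME ~ EXP + EXP_2 + AGE + AGE_2 + WM99 + LW + HET + HET_2 + C(BOAT)",
    data=df,
    groups=df["team_id"],  # random intercept absorbs unobserved team quality
)
result = model.fit()
print(result.summary())
```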
The observed effects of our control variables are intuitively explained. In Model 1 (random-effects specification) the variable SEX indicates on average 40 sec faster races for the men's events compared to the respective women's events. Similarly, because of their lighter physique, lightweight rowers (LW) are significantly slower than their heavyweight counterparts. Initially, more experienced crews (EXP) row faster times. With increasing age this effect diminishes; hence, at later career stages additional experience does not outweigh the deterioration in fitness. The positive coefficient of the squared experience term (EXP_2) yields a convex experience-performance profile with its minimum (i.e. maximum strength) at 11 years of experience; after that, athletes slow down again. The rank at the 1999 world championship (WM99) has the expected positive coefficient. Each better rank yields, ceteris paribus, a 0.1% faster performance at the Olympic tournament. Hence, the rank achieved at the 1999 world championship is a good indicator of the skills and fitness of rowing teams at the Sydney 2000 Games.
In an attempt to offer further evidence on our heterogeneity variable, we re-estimate the random-effects version of Model 2 (Table 2) and substitute our linear experience parameter (EXP) with a simple binary variable (EXP_Dummy; EXP_D) taking the value 0 for inexperienced and 1 for experienced athletes (random-effects alternative model). This is advisable since too many inexperienced rowers (0-values) might bias our findings. Moreover, we use a "de-pooling" strategy by presenting the influence of our heterogeneity variable on the rowers' finishing times separately for the six boat-type categories.
Taking the results together, we contend that the hypothesised effect is clearly confirmed in our data. Although the findings regarding the six different boat types in Table 3 should be interpreted cautiously due to some dropped variables (multicollinearity, less variance, and smaller numbers of cases), we find that, on average, heterogeneous competitions are rowed with less intensity.
Incentive effects of heterogeneous line-ups on favourites and underdogs
Rowing is an aerobic endurance sport that has been studied by scientists and coaches for a long time. Optimal racing strategies to complete the course in the fastest possible time have been developed and are well known among the athletes (Garland 2005; Teti/Nolte 2005). Hence, by comparing the split times for each of the 500 m quarters, rowing experts can determine whether a team rowed the physiologically and psychologically optimal racing strategy or whether they held back effort during the course of the race. Accelerating the boat from standstill to racing speed requires the highest effort level. However, because of the glycogen stored in the muscles, athletes can exceed the aerobic threshold at the beginning of a race without suffering an oxygen debt. Furthermore, rowers sit in their boats facing the stern; they see neither the finish line nor rivals ahead of them. Vice versa, the leader can observe his competitors without having to turn around; even for experienced rowers turning around would slow down the boat. Hence, despite the high effort required to accelerate the boat, rowers have good reason to start the race with the fastest split time. In the second quarter of the race, athletes must slow down to cruising speed in order to guarantee a sufficient oxygen supply; otherwise lactic acid production would set in and the rowers would "die a slow death" on the course. Crews are advised to continue the same rhythm in the third quarter of the race. Neglecting the specifics of the human body's energy supply system, even splits are the fastest race strategy from a purely hydrodynamic point of view. For the final sprint in the last quarter of the race, crews make use of anaerobic lactic energy supply and row at higher speeds again. Hence, the ranking of splits in the "optimal racing strategy" is: first quarter, fourth quarter, second quarter, third quarter. Whenever the split for the fourth quarter is the slowest, the athletes have either deliberately slowed down or misjudged their capacity. The latter is very unlikely to happen for experienced crews competing at Olympic Games. If it does happen (as could be observed for New Zealand contender Mahe Drysdale in the Beijing 2008 men's single sculls final), it is accompanied by extraordinarily fast splits in the second or third quarter. However, no such case was observed in the 2000 Sydney competition. Therefore, the subsequent analysis is based on the assumption that rowers deliberately hold back effort if the last quarter of the race is rowed in the slowest split time. At Olympic level, not rowing a final sprint is taken as a clear indicator of economising on physical strength.
Progression in the tournament is determined by the rank achieved in heats and repechages. Hence, if the ranks are decided by large margins at the 1,500 m mark, there is no incentive for a favourite to increase his effort in the final quarter of the race. On the other hand, crews trailing behind are advised to show full effort in order to take advantage of any potential mishap in the boats of the leading crews. Athletes will economise on their strength only if the prize (progression in the tournament) is secure. This picture changes in the finals. First, there is no incentive to conserve energy for any further rounds of competition. Second, the strongest crews in the finals B, C, and D want to show by their end time that they would have been able to compete in the respective final one step further up. In the final A, even the gold medal favourite will only refrain from a final sprint if his position is absolutely unchallenged. The above considerations yield hypotheses 2 and 3, respectively: Hypothesis 2: Favourites hold back effort considerably more often than underdogs.
Hypothesis 3:
Athletes hold back effort significantly more often in the preliminary stages of the tournament than in the final round.
The effects discussed above are much more difficult to observe in crew boats than in the single sculls events. First, speed differences are smaller in crew boats; this makes it more difficult to interpret race strategies from the split times. Second, the effort shown by the individual athlete is primarily determined by the race strategy given by the coxswain or the crew member chosen to call for changes in boat speed. In general, tactically rowed races with deliberate slow-downs are less often observed in crew boats (Teti/Nolte 2005). For a first analysis, we therefore focus on the single sculls events at the Sydney 2000 Olympics. In order to account for underdog-specific effects, this time we include all entries in the analysis. Favourites and underdogs were coded by their final rank in the tournament, using a median split to divide the field.
Since not all contenders finished all their races, the total sample consists of N = 142 cases. To include deliberate holding back of effort in the analysis, we introduce the categorical variable shirking. As indicated before, by shirking we mean that a boat has slowed down in the final quarter of the race.
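A minimal sketch of how such a shirking indicator can be coded from the 500 m splits, assuming a hypothetical data file and column names (the paper does not publish its coding script), is the following:

```python
# Illustrative sketch: shirking = the fourth 500 m quarter is the slowest split.
# The file name and columns q1..q4 (split times in seconds) are hypothetical.
import pandas as pd

races = pd.read_csv("single_sculls_splits.csv")

quarters = races[["q1", "q2", "q3", "q4"]]
races["shirking"] = (quarters.idxmax(axis=1) == "q4").astype(int)

print(races["shirking"].value_counts())
```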
As predicted, Table 4 shows correlations of the variable shirking with both variables favourite and final. Furthermore, and not surprisingly, the variable favourite correlates significantly with experience. Table 5 presents a cross-tabulation of the dependent variable shirking with both hypothesised variables (favourite and final). In 75 of 141 cases rowers deliberately held back effort, but only 6 of these cases were final-round races. Only 27 of the races by underdogs were identified as deliberate holding back. The table also shows that favourites had to row more races than underdogs; in order to qualify for the finals A and B, contestants have to row semi-finals, which are not needed to compete in the finals C and D. In addition, χ²-tests show that the differences in shirking between favourites and underdogs, as well as between preliminary and final stages of the tournament, are statistically significant. This aligns with the contamination hypothesis (hypothesis 1) derived above. Additionally, Table 6 shows the results of a random-effects logistic regression for the dependent variable shirking. Final and favourite are taken as independent variables; we controlled for sex, age, and experience of the athletes.[4] The results clearly imply that neither hypothesis 2 nor hypothesis 3 can be rejected. Surprisingly, the variable SEX also has a statistically significant effect at the 5%-level; male rowers are more likely to hold back effort than female athletes, a result opposite to the findings presented by Frick/Klaeren (1997). However, this effect may be due to the skewness of the end-time distribution among the competitors in the women's single sculls event.
The field was dominated by the three medallists, namely Ekaterina Karsten-Khodotovitch (BLR), Rumyana Neykova (BUL), and Katrin Rutschow (GER). These three outstanding oarswomen passed the finish line within 9/10 of a second of each other, but more than 8 seconds ahead of the rest of the field, whereas ranks 4 to 10 all achieved end times within a span of 4 seconds. Hence, it is very likely that apart from the three medallists none of the other athletes coded as favourites by the median split was ever in the comfortable position to deliberately slow down.

[4] Correlations between age, experience, and favourite indicate potential multicollinearity. However, the variance-inflation factors all do not exceed 2, the highest value being VIF_Age = 1.76. Hence, there is no multicollinearity between our independent variables.
Discussion
In our study of the Olympic Rowing Regatta in Sydney 2000 we found empirical evidence for the contamination hypothesis. On average, races in heterogeneous fields are rowed more slowly than races in close competition. In an additional study of the single sculls events, we deliver the first field evidence that favourites and underdogs react differently to heterogeneous fields. Whereas favourites take advantage of their strength and hold back effort in the preliminary stages of the tournament, underdogs deliberately slow down their boats significantly less often.
The results also provide evidence for the importance of the prize structure of rank-order tournaments. Whereas in preliminary stages athletes economise on their strength and foremost secure progression, rowers show their best possible performance only in the finals. This has important implications for the use of rank-order tournaments in internal labour markets. Everyday job performance cannot be modelled as a once-only chance (at the Olympics, it may be once in a lifetime). Tournaments must remain a special event in order to unfold their incentive effects. Therefore, tournaments should only be used to a limited degree. In everyday work life, principals must consider supplementing tournaments with other incentive schemes that have less strict requirements for being effective than rank-order tournaments.
A general limitation of our study is the operationalisation of the variables, in our case effort levels and heterogeneity. In the first part (section 3), taking the end time as an indicator of effort levels is an approach well known in the analysis of endurance sports. As expected, the results align with evidence from studies on other sporting events. However, the results could be distorted by the variable tournament stage as a measure of heterogeneity. Despite a potentially heterogeneous line-up, each competitor starting in one of the six lanes of a rowing course may have one rival of similar strength. Taken to an extreme, a heat may consist of three close matches set apart by large margins between rival pairs. Hence, the likelihood of winning a rank by only marginally increasing effort depends on the existence of one close competitor and not on the heterogeneity of the field of six; this condition may hold in a heat as well as in a final. However, our results are stable across all four model specifications. This implies that the variable tournament stage can be interpreted as an indicator of heterogeneity.
In the second part, focusing on the single sculls events (section 4), we use a different measure of effort, namely whether a rower takes a final sprint for the line or not. Hence, of the two options for the favourite to take advantage of his superior strength discussed in the introduction, our coding only covers one, namely slowing down once progression in the tournament is secured. The other option, adjusting effort levels to the speed of slower competitors from the very start, is not captured. However, even losing out on additional cases of holding back, our results are statistically significant.
Aside from the above limitations, our results imply that firms organising heterogeneous tournaments have to find ways to restore the favourites' incentives. One alternative mentioned in the literature would be to handicap favourites; this, however, may be problematic due to labour-law regulations. A better way to keep favourites' incentives high is probably to enrich the tournament with absolute performance standards. If the size of the winner prize depends on the winner's absolute performance (i.e., on whether the winner's performance is above some standard), favourites have an incentive to put forth effort even if they are far ahead of their competitors, since slacking off comes at the risk of not meeting the performance standard.
Finally, the evidence that rowers do not hold back effort in the finals implies that rewarding absolute achievement (i.e., end-time) has a more profound incentive effect than rewarding rank order. This implication is fundamental for firm-internal incentive schemes such as goal attainment for sales forces.
Summarising the above discussion, the results show that analysing the prize structure of rank-order tournaments is a promising field for further empirical research. Furthermore, the sport of rowing proved to provide suitable data for testing theoretically derived hypotheses. Especially for research questions regarding different prize structures, rowing, with national associations that differ in their regatta regulations, provides ample opportunity for future empirical work.
Ethical Mimesis and Emergence Aesthetics
: In nature the transformation of dead matter (objects) into living matter endowed with green energy or subjectivity is called emergence. Art itself, I argue, is an emergence phenomenon, enacting and replicating in theme and form emergence in nature. Literature thus conceived is about the emergence of spirit. It depicts forces that suppress spirit and enables the spiritual in nature to find expression. It gives voice to spirit rising. Mimesis is thus reconceived as a replication of the natural phenomenon of emergence, which brings to life what has hitherto been seen as object, dead matter. This article outlines the concept of emergence in current philosophical and scientific theories; examines the aesthetic precursors of emergence theory in certain Frankfurt School theorists, notably Theodor Adorno; and applies emergence aesthetic theory to a contemporary novel, Richard Powers' The Overstory (2018).
[or, indeed, into reality] what is not otherwise seen or recognized [or realized]. Art enacts mimesis-not in the traditional sense of an imitation of a surface reality-but . . . as a recognition of and a representation of [a realization of] the spiritual power that inheres in physical reality and which comes to life in the phenomenon of aesthetic emergence. (Donovan 2016, p. 204, parenthetical material added) I adhere in this regard to an emergence theory closely akin to that proposed by the panpsychists, who maintain that life and mind inhere in the most minute particles of matter and are therefore latent and ready to emerge when a constellation of materials and energies comes together enabling their expression or emergence. Through this process of realization-of making real or enabling the real to emerge-otherwise dead matter comes to life. There is, Virginia Woolf once observed, "some real thing behind the appearances, and I make it real by putting it into words" (Woolf 1985, p. 72).
The matter of art-whether it be paint on a canvas or sound waves in a Bach fugue or words printed on a page-becomes transfigured through the minds of the creator and the receiver into a spiritual universe-an unreifiable, subjective, qualitative realm-an "other" dimension that is ultimately only accessible through human semiotics, signifying symbolic forms.
French poet Paul Valéry explained the transformative aesthetic process of emergence by analogy to physical processes of emergence. "The sound" one hears in a symphony, for example-inanimate physical acoustical waves-dissolves as it is transfigured into the nonphysical world of the "musical universe", just as, Valéry notes, "in a saturated salt solution a crystalline universe awaits the molecular shock of a minute crystal in order to declare itself" (Valéry 1971, p. 915). "Art", therefore, as Theodor Adorno maintained, "is an imitation . . . of the act of creation itself" (Adorno 1962, p. 171). In this essay, I am adapting Adorno's statement to stipulate that art is an imitation and a replication of the act of emergence itself-an imitation via both its form (replication) and its content (imitation).
Such a conception-of an emergence aesthetics-entails a mimesis that is inherently ethical in that the emergence of spirit necessarily obviates objectification and reification of alterity in all its forms. An emergence aesthetics-one identified by its character as an emergent process-is therefore an aesthetics of care, which involves a participatory form of empathic mimesis-what Adorno called "mimetic comportment" (Adorno 1997, p. 110), which dissolves subject-object duality into conversation, a dialogue that occurs in a realm beyond the composite physical words, notes, paint. "The thingly structure" of artworks, Adorno posited, "makes them into what is not a thing; their reity is the medium of their own transcendence" (Adorno 1997, p. 92).
In order to further explicate these ideas, it may be useful to review the root concept of emergence as seen in current scientific theory. Many theorists believe that in the early twenty-first century we are in the process of a paradigm shift away from classical reductionism, as articulated in Cartesian/Newtonian theory, toward emergence theory, in which "consciousness" is established "as a fundamental property of the universe" (Davies 2006, p. xiii). For, as philosopher Thomas Nagel recently asserted, "The great advances in the physical and biological sciences were made possible by excluding the mind from the physical world. . . . But at some point it will be necessary to make a new start on a more comprehensive understanding that includes the mind" (Nagel 2012, p. 8). That "new start" appears to be at hand in the emerging theories of emergence. One theorist goes so far as to propose that the twenty-first century may ultimately be called "the age of emergence" (Pearce 2015, p. 14). Emergence at its most elementary level occurs when two chemically different molecules (having a different atomic make-up) combine to form a qualitatively new substance. When hydrogen molecules combine with oxygen molecules they form a new substance, water. Neither oxygen nor hydrogen has the qualities of liquidness or wetness, so the resulting substance-water-is qualitatively new. "Emergent properties are irreducible to, and unpredictable from, the lower-level phenomena from which they emerge", philosopher Philip Clayton explains (Clayton 2006, p. 2).
Some maintain that qualities such as wetness only emerge or become realized when there is an experiencing subject. A sodium chloride molecule is not in and of itself salty. Saltiness only emerges when experienced by a tasting subject. "For there to be something having the property of saltiness one needs something that experiences this property", Patrick Spät asserts. The saltiness "is in the sodium chloride as an unrealized disposition-and with the intervention of an experiencing subject this disposition becomes realized" (Spät 2009, pp. 162-63).
This theory connects to certain aspects of quantum physics theory where wave phenomena are seen to "collapse" into particles in the presence of an observing subject or a measuring/monitoring instrument. I will not further explore this connection here (see Donovan 2014), but one of its most intriguing aspects is the position held by some theorists that it is the environment itself that makes the unseen, nonphysical "wave"-universe "collapse" into physical form. That collapse is referred to in physics as "decoherence". It occurs when "quantum objects acquire classical [ordinary, everyday] properties only through the interactions with their natural environment" (Joos 2006, p. 53). "The properties of the 'ordinary' objects of our experience . . . emerge from, or are created by irreversible interactions with the environment" (Joos 2006, p. 71). For such objects "the environment [therefore] acts in a manner similar to a measuring device" (Joos 2006, p. 59). Since the measuring device is effectively a subjective observer, the implication here is that it is subjectivity that causes reality-the real world of "ordinary" objects-to emerge.
Beyond physiochemistry, emergence also occurs on the biochemical level where the appearance of self-replicating cells and clusters of cells-living forms-is held to be unpredictable from their physiochemical components. The DNA molecule, for example, is an emergent phenomenon: its "structure represents a high level of chemical improbability, since the nucleotide sequence is not determined by the underlying chemical structure" (Clayton 2006, p. 17).
And, finally, the phenomenon of consciousness and subjectivity is an emergent property, not causally explicable by the physical components that appear to be its base. "It is not enough to say that mind is the brain", Clayton maintains; "a mental event is . . . composed out of individual neural events and states, and something more" (Clayton 2006, p. 26).
It is that "something more" that remains the mystery at the heart of the creative process (Diotima's poiein) and thus of emergence aesthetics. What that "something more" is and where it comes from remains a question. Many scientists and philosophers of science believe that increasing physiochemical complexity triggers emergence in the natural world. Others-especially those inclined toward panpsychism, such as Patrick Spät cited above-hold that there is something latent in the component materials that is activated in the emergent process; that is, brought to life or realized therein. Still others maintain that-especially in the most mystifying ontological emergences such as the arising of life and mind-there is a divinity at work. The latter, especially those espousing process theology, note that life forms are guided by a telos, a formal purposive design that is not reducible to the laws of physics. Philosophers from Aristotle to Kant to certain twenty-first-century biologists take this view (see Donovan 2018 for further discussion).
Art replicates the process whereby life and mind emerge from inert matter. The material of art (the physical world) emerges as-is transfigured into-spirit through the subjective consciousness-the mind-of the artist. In this way art may be seen as a replication of the emergence character of quantum decoherence. Art turns a virtual subject into a representative object (mimesis), which is transformed into a subject again (existing in the mental universe of the artist and receiver) through the aesthetic process. As philosopher Martin Buber explained in his aesthetic theory, a work of art, though an object in concrete form, becomes alive as a thou in the encounter with a subject. Artworks thus may be termed geistige Wesenheiten, translated (by Buber) as "spirit in phenomenal forms" (Kepnes 1992, p. 23). "A geistige Wesenheit, a work of art, or form of spirit, although an It, can 'blaze up into presentness,' into the status of a Thou, again" (Kepnes 1992, p. 24).
Karl Steel, writing in a New Materialist vein, posits that "subjects are objects that are cared about" (Steel 2012, p. 33)-that is, paid attention to. The caring attentiveness of the artist brings out the latent subjectivity-expressed as the "aura", to reprise Walter Benjamin's celebrated term (which Adorno defined as "whatever goes beyond . . . factual givenness" (Adorno 1997, p. 45))-of the material she is processing, causing it thus to emerge, transforming non-being into being. Michael Pearce, himself a visual artist, proposed that art objects are materials isolated and shaped by the artist for the sole purpose of providing recipients (viewers, listeners, readers) with an emergent experience. In art, Pearce claims, "mind is expressed in material" (Pearce 2015, p. xv). The aesthetic "emergent experience [is] the moment when mind reaches out to mind through an object" (p. 41). "Creativity is emergence in action, when . . . an answer flowers from [the imagination] . . . as an emergence product of its nutritional home" (p. 146). This "spiritual experience is ontologically irreducible, like consciousness itself-we can't eliminate the spirituality of mind by reducing it to the action of neurons" (p. 44).
Pearce espouses a version of process theology in which the evolution of the universe in its living and mental forms is teleologically structured toward fulfillment or completion. "If we accept the idea of an evolving universe and its emergent phenomena as mind, then an artist's work is an imitation of mind, producing emergent works of art" (Pearce 2015, p. 85). The aesthetic experience of "transcendence is beautiful, sublime and humbling because we become aware of something that is awesome-the universal mind" (p. 85).
Pearce thus offers a definition of art as an expression of mind in material made solely for the purpose of providing for the emergent experience for another's mind. . . . The emergent experience is one of evolutionary affirmation in which the consciousness of the beholder evolves as a result of its unity with the appreciated thing . . . [contributing to] the evolution of consciousness. (Pearce 2015, p. 125) Pearce, like process theologians in general, sees this evolution as a "gradual evolutionary movement toward goodness and harmony" (p. 127), in which "emergent qualities that are not cooperative are less likely to succeed in the long run because they turn inward upon themselves" (p. 127); "hubris and nihilism [are thus] the opposite of emergence" (p. 128).
One might question, of course, whether there is in today's world much evidence of such an evolution toward the good. Flannery O'Connor effectively skewered a similar notion proposed by Teilhard de Chardin (in The Phenomenon of Man) in her short story "Everything that Rises Must Converge" (O'Connor 1962), which demonstrates inter alia that evil-"hubris and nihilism"-are alive and well. But, while Pearce's optimism may seem unwarranted, his theory of art as emergence contributes usefully to the project at hand.
The basic outlines of emergence aesthetics were laid out decades ago (and before scientific theories of emergence arose) by Theodor Adorno, who wrestled with the complex dialectical relationship between subject and object throughout his work. Adorno proposed "mimetic comportment" (mimetisches Verhalten) as the requisite aesthetic "attitude toward reality" that is "distant from the fixated antithesis of subject and object" (Adorno 1997, p. 110). Through such "comportment" art "assimilates itself to the other rather than subordinating it" (Adorno 1997, p. 331). "Art's mimetic element" is therefore "incompatible with whatever is purely a thing" (Adorno 1997, p. 17). The artist thus sees into the spiritual heart of nature-its thou, in Buber's terms-and enables its expression through the aesthetic process. "If the language of nature is mute, art seeks to make this muteness eloquent" (Adorno 1997, p. 78). Art, therefore, gives expression to what is otherwise silent (or silenced by human domination). "What is waiting in the objects themselves", Adorno explains in Negative Dialectics, "needs . . . intervention to come to speak" (Adorno 2007a, p. 29). Art thus operates as an emergence, bringing to life what is latent and mute in the natural world but brought to consciousness, to spiritual life, in the aesthetic process.
Adorno's fellow Frankfurt School theorist Max Horkheimer elaborated the idea further, explaining that through mimesis "nature is given the opportunity to mirror itself in the realm of spirit" (Horkheimer 1987, p. 179). Such mimesis is cast by both Horkheimer and Adorno as the opposite of fascist cultural forms of domination (both were Jewish refugees from Nazi Germany). "Fascism treated language as a power instrument. . . . for use in production and destruction in both war and peace. The repressed mimetic tendencies were cut off" (Horkheimer 1987, p. 179). It is important, therefore, as a counter to fascism, Horkheimer notes, to allow language and art "to fulfill [their] genuine mimetic function, [their] mission of mirroring the natural tendencies" (Horkheimer 1987, p. 179).
Without using the term emergence, philosopher Richard Wolin identifies Adorno's aesthetics as an emergence theory. All works of art, according to Adorno, Wolin notes, "inherently surpass their somatic side and thereby give rise to a force that transcends the sum total of their individual moments. . . . This is their 'surplus' [das Mehr], the moment of . . . Unwirklichkeit" (Wolin 1979, p. 118). Unwirklichkeit (Adorno's term) means Un-reality. This "surplus", Wolin explains, is "the spiritual element that arises from the interplay of tensions, the constellation of moments that comprise a work of art" (Wolin 1979, p. 118). Art, therefore, to reprise scientific theories of emergence noted previously, expresses the "something more" that emerges when the mental arises from the physical (Clayton 2006, p. 26).
Nature, whose subjective voice Adorno saw as repressed by human domination and objectification, includes all living life forms, especially nonhuman animals whose suffering he and Horkheimer were acutely aware of (see especially their Dialectic of Enlightenment). In Eclipse of Reason Horkheimer maintains that art's purpose is to "be the voice of all that is dumb, to endow nature with an organ for making known her sufferings" (Horkheimer 1987, p. 101). For, "nature's text . . . if rightly read, will unfold a tale of infinite suffering" (Horkheimer 1987, p. 126). And, Adorno noted in Negative Dialectics, "the need to lend a voice to suffering is a condition of all truth" (Adorno 2007a, pp. 17-18).
Ethical mimesis for Adorno (his "mimetic comportment") therefore entails "treating nature and animals as subjects, as ends in themselves" (Flodin 2011, p. 146). In his 1958-59 lectures on aesthetics Adorno specified that the mimetic process involves a transfigurative dialectic between subject and subjectified object. Mimesis is "the impulse to so to speak make yourself into the thing you stand before, or make the thing you stand before into a self" (Adorno 2007b, p. 70, as cited in Flodin 2011). In other words, mimesis requires meeting the other half-way, realizing her subjectivity by entering empathetically into her reality. In short, Adorno's "mimetic approach . . . respects the other as a subject" (Flodin 2011, p. 154).
In my article "Aestheticizing Animal Cruelty" (Donovan 2011) and my book The Aesthetics of Care (2016) (see especially Chapter 4), I detail that much literature of the past, and indeed of the present, fails to consider animals and/or the natural world as subjects. Rather, they are either dismissed as trivial and unworthy of full consideration, or they are objectified and treated as aesthetically interesting "local color". (There are notable exceptions, of course; Tolstoy, for example (Donovan 2009).) Especially deplorable is the all-too-common aestheticization of animal cruelty and human violence. Such aestheticization requires denying the subjectivity of the material being treated by the writer or artist. In an earlier article, "Beyond the Net: Feminist Criticism as a Moral Criticism" (Donovan 1983), I contended that much literature of the past denied or ignored the subjectivity of women, treating them as stereotyped objects of interest only insofar as they amplified the projects of the male protagonist (Kappeler 1986).
In a more recent article, "Literary Ecology and the Ethics of Texts" (Zapf 2008), Hubert Zapf provides an interesting example of Adorno's "mimetic comportment" in action (though Zapf does not identify it as such), showing how a writer-in this case, Emily Dickinson-effectively introduces animal subjectivity and agency into her literary work.
The poem, Dickinson's #986, concerns a snake in the garden-not demonized as an avatar of evil, but existing in his own right as a subjective presence:

The Grass divides as with a Comb-
It wrinkled and was gone- (Dickinson 1979, p. 711, as cited in Zapf 2008)

As Zapf points out, "the snake is presented as an independent, fascinating, yet uncanny presence" (p. 857). But the presence that emerges in Dickinson's poem is not just that of the snake but of the ineffabile, the "something more".
Dickinson then refers to the snake as a person, including him in the designation "Nature's People".
Several of Nature's People
I know, and they know me-
I feel for them a transport
Of cordiality- (Dickinson 1979, p. 711, as cited in Zapf 2008)

Dickinson thus posits that mutual knowledge and understanding are exchanged between subjects, with sympathy expressed on the part of the human subject for the animal subject. "What is conveyed here", Zapf notes, "is the vital interconnection of the human subject with a symbolic life force . . . with an 'other' that is radically alien yet also affects the innermost core of the self" (Zapf 2008, p. 858).
In poem #1068 the poet conjures up the sacrality of the natural world by seeing it as the site of an unseen religious rite conducted by its creatures. "A minor Nation celebrates/Its unobtrusive Mass" such that "a Druidic Difference/Enhances Nature now" (Dickinson 1979, p. 752, as cited in Zapf 2008). As Zapf notes, we have here "the focus of attention of an observing consciousness that almost seems to merge into the observed microworld of nature, which is perceived in the imagery of an ancient highly ritualized culture [the Druidic]" (Zapf 2008, p. 858).
In his pathbreaking work on ecocriticism, The Environmental Imagination, Lawrence Buell (1995) establishes four ethical criteria for an eco-aesthetics: among them, that "the nonhuman environment [should be] present not merely as a framing device but as a presence" and that "the human interest is not understood to be the only legitimate interest" (p. 7). Buell calls for "disciplined extrospection" (p. 104) on the part of the artist as a means of realizing nature's presence. Extrospection means focusing the mind outside the self so as to attempt "to see or articulate the natural environment on its own terms" (p. 81). Dickinson clearly manifests such extrospection-which seems but another term for Adorno's "mimetic comportment"-in her poetry.
Another, more recent example of a work that evidences such a sensibility is Richard Powers' contemporary novel, The Overstory (2018). In this work, trees are presences that affect the human characters in various ways and afford the structural model that unifies the work.
It is a novel about emergence-in both the natural world and in the transformations of the human characters. The work is structured on these transformations in a way that replicates the emergence of a group of trees into a forest community.
The novel traces the life-trajectories of nine human characters who are affected by specific trees in their youths and who later experience epiphanies in which they realize the subjectivity of trees and other creatures of the natural world. These epiphanies constitute a kind of metanoia or conversion experience in which their sensitivity to the suffering-in Adorno and Horkheimer's terms-that humans inflict upon the natural world-particularly trees-becomes more acute. Through this awakening each becomes moved toward ethical political commitment, motivated by a desire to save the trees and prevent further destruction. In becoming eco-activists their lives intertwine like the branches of trees that form the canopy in a forest, emerging thus as "an overstory" (the technical term for such a canopy). "They are humans on their way to turning into greener things. Together, they form one great symbiotic association" (Powers 2018, p. 141).
Unlike most "humans [who] hear nothing" (p. 168), several of the characters in the novel hear the "voices" of nature, but it is the women who seem to have the most sensitivity to these otherwise mute communications. 1 Olivia Vandergriff, an Ohio college student, begins to hear or experience "beings of light" after she is electrocuted in a near-death event: "they're . . . unbearable beauty, they pass into and through her body. . . . They speak no words out loud. . . . They aren't even they. They're part of her. . . . Emissaries of creation" (p. 163). These divinities, so to speak, guide her toward commitment as an eco-warrior; they tell her: "the most wondrous products of four billion years of life need help" (p. 165). Olivia says she hears "the trees. The life force . . . like a Greek chorus in my head" (p. 322). Her intensity attracts others who form a circle of political activists around her. She and her partner Nick Hoel end up spending nearly a year high up in the branches of a redwood tree as a protest against logging. Mima, the tree who harbors them, is a subject, a thou. (The tree is referred to with the personal pronoun "who" (p. 295).) Olivia "speaks the creature's name like it's an old friend" (p. 262).
Patricia Westerford is another who hears the voices of nature. A botanist, she is likely modeled on Professor Suzanne Simard of the University of British Columbia, who discovered that trees communicate via biochemical signals through their roots (Wohlleben 2016, pp. 247-50). Westerford makes a similar discovery: maple trees under attack by insects signal to other unaffected trees nearby, who express the same endogenous chemical insecticide as the affected trees. It is apparent therefore, that "the wounded trees send out alarms that the other trees smell. Her maples are signaling. . . . Life is talking to itself, and she has listened in" (p. 126).
Westerford publishes her astonishing results in a major scientific journal, but they are immediately refuted by the scientific community: she is ridiculed, loses her job, and spends several years in the wilderness (literally) before her discovery is validated and she is rehabilitated as an esteemed scientist. Her words, though long repressed, "have gone on drifting out on the open air, lighting up others, like a waft of pheromones" (p. 137).
As Patricia continues her research, she comes to realize:

Her trees are far more social than even [she] suspected. There are no individuals. . . . Everything in the forest is the forest. Competition is not separable from . . . cooperation. . . . It seems most of nature isn't red in tooth and claw, after all. (p. 144)

In the end she concludes, "A forest knows things . . . There are brains down there, ones our own brains aren't shaped to see. . . . Link enough trees together, and a forest grows aware" (p. 453). In other words, an emergence occurs in nature when a certain conjunction of elements comes together.
Despairing of saving existing forests, Westerford begins a seed bank to save all existing species, so that in some distant, more enlightened era they may be planted and brought back to life:

She's surrounded by thousands of sleeping seeds, cleaned, dried, winnowed, and X-rayed, all waiting for their DNA to awaken and begin remaking air into wood at the slightest hint of thaw and water. The seeds are humming. They're singing something-she'd swear it-just below earshot. (p. 389)

Mimi Ma's epiphany occurs as she is sitting against a pine tree on the Pacific Coast. She has just learned that two of her co-conspirators in an eco-warrior arson action years before have been imprisoned; one of them, Doug Pavlicek, has saved her from prosecution by refusing to reveal her participation in the event. "Mimi gets enlightened": "her mind becomes a greener thing". "Messages hum from out of the bark she leans against" (p. 499). "A chorus of living wood sings to the woman: If your mind were only a slightly greener thing, we'd drown you in meaning" (p. 4).
Powers seems to envisage that the human species is transforming or emerging into a new species, one that "will learn to translate between any human language and the language of green things" (p. 496). Each of the characters in the novel is engaged in the process of this evolutionary emergence-becoming a "greener thing" who is able to respond through Adorno's mimetic comportment to the languages and voices of the natural world.
There are several references in the novel to Ovid's Metamorphoses. Patricia, for example, reads the work as a teenager. "She loves best the stories where people change into trees" (p. 117). "She wants to . . . say, like Ovid, how all life is turning into other things" (p. 122; see also pp. 394, 466).
In the end, however, Powers' vision is more dystopian (from a human point of view) than utopian. Adam Appich, for example, reflects, "Humankind is deeply ill. The species won't last long. It was an aberrant experiment. Soon the world will be returned to the healthy intelligences, the collective ones. Colonies and hives" (p. 56). 2 Doug sends a silent message to the trees: "Hang on. Only ten or twenty decades. . . . You just have to outlast us. Then no one will be left to fuck you over" (p. 90).
There is a sense in which trees have a higher, more far-reaching intelligence than humans: "Human wisdom", Patricia thinks, "counts less than the shimmer of beeches in a breeze" (p. 115). Trees even seem to be using humans or "toying" with them (p. 131), making it clear that "the world is not made for our utility. What use are we, to trees?" (p. 222). In the end, Adam thinks, he and his "green-souled friends" have been "used by life" (p. 495) in its never-ending self-transcendence. "Life is going someplace. It wants to know itself" (p. 496).
Sitting up in their redwood bower, Olivia says of the loggers and of humankind in general, "They can't win. They can't beat nature". Nick replies sardonically, "But they can mess things over for an incredibly long time".
Yet on such a night as this, as the forest pumps out its million-part symphony, and the fat blazing moon gets shredded in Mima's branches, it's easy for even Nick to believe that green has a plan that will make the age of mammals seem like a minor detour. (p. 292)

An emergence aesthetics, which inherently opposes any objectification of nature-whether it be via Cartesian scientific reductionism or stereotypical models that elide the subjectivity of various human groups, nonhuman animals, or other life forms-embraces instead an epistemology that sees the subjectivity-the mind, the spirit-inherent in these forms, and liberates it from human objectivist domination through the artistic process.
"Life is going someplace", a character in The Overstory observes. "It wants to know itself " (p. 496). Art, as envisaged in emergence aesthetics, plays a vital role in this evolutionary process. Through ethical mimesis it is nature emerging as a geistige Wesenheit, nature reflecting back on its spiritual self.
1 Otherwise, unfortunately, some of the minor women characters verge on stereotypes: Adam's dissertation advisor and Doug's camp visitor (pp. 236, 417) (seductresses), Adam's wife Lois (p. 461) (shrew), and Neelay's teacher (p. 99) (schoolmarm). All of these women threaten to thwart the noble and idealistic projects of the men.
2 Here, it must be said, Adam and/or Powers veers toward what animal ethicist Tom Regan characterized as "environmental fascism" (Regan 1983, p. 362). Such thinking is found in "deep ecology" theory, for example, Aldo Leopold's Sand County Almanac (1949). Leopold asserted that the interests of the "biotic community" supersede those of any individual member (including human) of that community (Leopold 1966, p. 262). Echoes of Leopold's work recur in The Overstory. For example, one character states that the new human species "will come to think like rivers and forests and mountains" (p. 496)-one of Leopold's central ideas (Leopold 1966, p. 137).
Re-Endothelialization of Bare Stroma after Descemet's Detachment due to Macroperforation during Deep Anterior Lamellar Keratoplasty
Purpose: To report a case of spontaneous re-endothelialization of bare stroma after subtotal detachment of Descemet's membrane (DM) due to macroperforation during deep anterior lamellar keratoplasty (DALK). Methods: Case report. Results: A 64-year-old patient underwent DALK for deep stromal scarring secondary to herpetic keratitis. During manual dissection, a DM macroperforation occurred, which was successfully managed intraoperatively and postoperatively. The DM with host posterior stroma remained attached for 10 months, when it detached from the bare donor stroma. The cornea remained clear, with an uncorrected distance visual acuity (UCVA) of 0.17 logMAR. After graft suture removal 30 months later, he was noted to have regular astigmatism and a cataract, for which he underwent phacoemulsification with toric intraocular lens implantation. Twenty-four months following his cataract surgery and 58 months following his DALK, his UCVA remains 0.17 logMAR and the cornea remains clear with no evidence of edema. His average specular count at 58 months was 1296 cells/mm2. Conclusion: This case shows a very good visual outcome with a clear cornea at 58 months despite a large DM detachment that occurred 10 months after manual DALK with intraoperative macroperforation.
Reports on Descemet's membrane endothelial keratoplasty (DMEK) suggest migration and repopulation of endothelial cells, from both the donor's and the host's DM, on the bare recipient stroma, leading to a transparent cornea. 5,6 Dirisamer et al. 5 suggested that the presence of donor endothelium in the recipient anterior chamber (AC), as well as direct physical contact between donor and host tissues, may be prerequisites for endothelial repopulation of the recipient posterior cornea and/or recovery of corneal clarity.
We report a case of DALK with a DM macroperforation with late postoperative detached DM and re-endothelialization of donor stroma.
Case Report
We report this case after obtaining appropriate consent for publication from the patient. A 64-year-old male with a history of left eye herpes simplex virus keratitis, a previous amniotic membrane transplant for a recurrent non-healing epithelial defect, a deep stromal scar, and a best-corrected distance visual acuity (BCVA) of 0.3 underwent DALK in January 2015. During manual dissection of the anterior corneal lamella, a DM macroperforation (approximately 4 mm in its maximum diameter) was noted. Gentle manual dissection was continued away from the location of the macroperforation. Manual lamellar dissection was completed without converting to penetrating keratoplasty, and the AC was maintained with repeated air injections during the entire procedure. 1 An 8.25-mm donor corneal button was prepared. The donor DM was stripped, and the donor stroma was transplanted with sixteen 10-0 nylon interrupted sutures. At the end of the procedure, air was injected into the AC to tamponade the host's DM against the donor stroma. The pupil was dilated with G. cyclopentolate 1% qds for 7 days. The patient also received tablet acetazolamide 250 mg qds for 3 days, along with G. Tobradex (Alcon Laboratories, Fort Worth, Texas, USA) qds for a month and tablet aciclovir 400 mg 5 times a day for 2 weeks. On the following day, a double AC sign 1 was noticed, and the patient underwent rebubbling with air in the AC a week later, as the double AC did not resolve spontaneously. The double AC took over 8 days following the rebubbling to settle. The patient still remains on G. loteprednol once a day along with tablet aciclovir 400 mg bd. He had no recurrence of herpetic keratitis up to 58 months postoperatively.
At 4 weeks postoperatively, the uncorrected distance visual acuity (UCVA) was 0.75 logMAR and the BCVA was 0.3 logMAR. After resolution of the double AC, the patient was followed up in the corneal clinic almost every 6-8 weeks. The DM remained attached to the stroma, with no corneal edema or redetachment, until 8 months postoperatively. At the next visit, at 10 months, a distinct separation was noted between the host posterior corneal lamella (DM with some posterior stroma) and the donor stroma [Figure 1a and b]. The patient did not report any gross deterioration of vision between months 8 and 10. Slit-lamp examination showed an eccentric defect in the host posterior lamella measuring 5 mm by 4.5 mm, with a fibrosing edge [Figure 1a and b]. The cornea was clear with no evidence of sectoral or diffuse edema. Corneal optical coherence tomography showed a clear detachment of the host posterior lamella (DM with some posterior stroma) from the donor stroma [Figure 2]. There was an area of bare donor stroma measuring 6.5 mm by 5 mm involving the visual axis [Figure 1a and b]. Two and a half years later, the graft sutures were removed, but by this time he had developed a cataract. Examination at this stage showed 6.5 diopters of regular corneal astigmatism with a UCVA of 1.0 logMAR. He underwent successful phacoemulsification with a toric intraocular lens implant (Rayner T-flex [Rayner, Worthing, UK] with an 8-diopter sphere and a 9.5-diopter cylinder implanted at an 8° axis, with an expected postoperative spherical equivalent of −0.2 diopters), which improved his UCVA to 0.17 logMAR. Two years after his cataract surgery and 58 months after his DALK, his UCVA remains 0.17 logMAR and the cornea remains clear with no evidence of edema [Figure 1a and b]. Specular microscopy performed in all quadrants using a CellChek® specular microscope (Konan Medical, Irvine, USA) confirms the presence of endothelial cells on the stromal side of the donor cornea at 58 months [Figure 3].
Discussion
It is already known that the risk of DM perforation in DALK is significantly increased when the ratio of stromal scar depth to minimum corneal thickness is >0.79. 7 Intraoperatively, this complication can be managed successfully with an intracameral injection of an air bubble to tamponade the DM and by gentle manual dissection in a centripetal fashion, starting away from the perforation, to prevent further extension of the DM hole. 1 The use of fibrin glue to seal the detached DM to the donor stroma 8 and stromal suturing techniques 9 are also described in the literature to manage this. In the early postoperative stage, careful observation is necessary to assess any double AC, which may resolve spontaneously. 1 The rate of appearance of a double AC is reported to be up to 60% following DM perforation. 1 The late postoperative sequelae of DM perforations include postoperative DM detachment, higher endothelial cell loss, endothelial decompensation, and interface scarring. 2 Nevertheless, good visual outcomes are reported following the successful management of micro- and macroperforations.
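As a simple illustration of the cited threshold, the sketch below computes the scar-depth-to-minimum-thickness ratio and flags elevated perforation risk. The function name and example measurements are hypothetical; only the 0.79 cut-off comes from the cited study.

```python
# Minimal sketch: flag elevated DM-perforation risk in DALK using the
# reported scar-depth / minimum-corneal-thickness ratio threshold (>0.79).
RISK_THRESHOLD = 0.79  # taken from the cited study

def perforation_risk_elevated(scar_depth_um: float,
                              min_corneal_thickness_um: float) -> bool:
    """Return True when the depth-to-thickness ratio exceeds 0.79."""
    ratio = scar_depth_um / min_corneal_thickness_um
    return ratio > RISK_THRESHOLD

# Hypothetical example: a 420-um-deep scar in a cornea 510 um thick
print(perforation_risk_elevated(420.0, 510.0))  # ratio ~0.82 -> True
```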
Kodavoor et al. 3 reported good visual and anatomical outcomes in 16 patients with keratoconus, pellucid marginal degeneration, and macular corneal dystrophy who underwent DALK with microperforation (12 eyes) or macroperforation (4 eyes, defined as perforation >1 mm). In their report, vision improved significantly in all patients postoperatively, with a BCVA of 0.28 ± 0.09 logMAR. 3 Furthermore, in a study of 101 eyes with DM perforation during DALK by Huang et al., 2 78.2% of the patients had microperforations and the rest developed macroperforations (defined as any defect >0.5 mm). Cases with intraoperative DM perforations were reported to have visual outcomes equivalent to those without DM perforations and did not have any increased risk of graft failure or rejection at postoperative years 1 and 3. 2 In fact, 78% of the eyes with perforation and 68% of the eyes without perforation achieved a BCVA of 6/12 or better 3 years after surgery. Similarly, Senoo et al. 4 reported no statistically significant difference in BCVA among 54 eyes that underwent DALK between the groups with and without DM perforation.
Passos et al. 10 reported a case of spontaneous detachment of the DM 5 months after DALK. However, unlike in our case, they preserved the donor DM, and therefore their donor cornea remained clear despite the DM detachment. In another report, by Lin et al., 11 where the donor DM was not preserved, the donor cornea remained clear despite persistent detachment of the DM and multiple rebubbling attempts before it finally attached spontaneously. We hypothesize two possible explanations for the DM detachment 10 months after DALK. First, like Passos et al., 10 we believe that in cases with DM perforations the recipient DM may not be entirely attached to the donor stroma despite multiple rebubbling attempts; there may be areas in the peripheral cornea that maintain virtual spaces without real adherence and that are not apparent clinically. This may reduce the adherence of the donor button to the recipient DM, facilitating a late detachment. Second, in our case, significant fibrosis developed over time near the edges of the detached DM defect (as also noted by Passos et al. 10 at 5 months). This fibrosis may have exerted traction on the remainder of the host posterior lamella, preventing spontaneous reattachment (as noted in the case by Lin et al. 11 ), and this traction may have kept the DM detachment stable over the 58-month period.
The pathophysiology of migration and repopulation of endothelial cells over bare stroma may explain the presence of a clear cornea over the macroperforation in our case. Dirisamer et al. 5 described corneal re-endothelialization following complicated DMEK in 36 eyes out of 150 consecutive DMEK cases. Spontaneous corneal clearance was reported in 28 eyes with decentered, partially detached, or upside-down grafts. 5 They noticed healthy endothelial cells on the recipient's corneal stroma, with an endothelial cell density similar to that of the control group (eyes with fully attached and centered grafts). 5 This indicated that, apart from migrating, the endothelial cells still have the capacity to regenerate. 5 In another report, Daravagka et al. 12 described three cases of DMEK for Fuchs endothelial dystrophy (FED) with complete graft detachment and spontaneous corneal clearance. The patients were monitored closely without any intervention, and the cornea cleared spontaneously in all cases within 3 months. 12 Specular microscopy confirmed regeneration of endothelial cells on the recipient stroma in all three eyes in their series. 12 In our case, the host DM remained attached for approximately 9.5 months after rebubbling, and the cornea was clear over the macroperforation site despite the bare stroma. We believe that during this time there was re-endothelialization of the bare stroma, with gradual repopulation of endothelial cells beyond the bare stroma just before or after the detachment of the host DM at 10 months, leading to a reasonable endothelial cell count in all five regions of the cornea.
Similarly, there have been reports of spontaneous corneal clearance following Descemet's stripping without any endothelial keratoplasty in patients with FED. 6 The authors documented that the new endothelial cells had the functional properties of healthy corneal endothelium and produced a normal cornea with no structural alteration. 6 They also reported that corneal endothelial cells supplemented with a rho-associated protein kinase inhibitor, when injected into the AC, repopulated and self-organized on the posterior surface of the cornea. 6 In summary, it was already known that the outcome of eyes with DM perforation during DALK is good, but there is emerging evidence that endothelial cells migrate and repopulate over time, both over the site of DM perforation and over bare stroma that is not in physical contact with the DM. To our knowledge, this is the first report of a DM macroperforation followed by subtotal separation of the DM from the donor stroma due to fibrosis of the edges of the macroperforation, with a clear cornea despite a subsequent phacoemulsification procedure and excellent visual outcomes.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the legal guardian has given his consent for images and other clinical information to be reported in the journal. The guardian understands that names and initials will not be published and due efforts will be made to conceal identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Immune System Modulations in Cancer Treatment: Nanoparticles in Immunotherapy
: Cancer immunotherapy is based on the idea of overcoming the main problems of traditional cancer treatments and enhancing the patient's long-term survival and quality of life. Immunotherapy methods aim to influence the immune system, to detect and eradicate the tumor site, and to predict the potential results. Nowadays, nanomaterials-based immunotherapy approaches are gaining interest due to numerous advantages, such as their ability to target cells and tissues directly and to reduce off-target toxicity. Therefore, the components of the immune system, nanomaterials, their usage in immunotherapy, and the benefits they provide are discussed in this book chapter. Immunotherapy can be divided into two main groups, active and passive immunotherapy, with subcategories such as immune checkpoint inhibitors, adoptive immunotherapy, CAR-T therapies, vaccines, and monoclonal antibodies; this classification and the associated methods are evaluated. Furthermore, state-of-the-art nanocarrier-based immunotherapy methods are described in detail. The size, charge, material type, and surface modifications of the nanoparticles are reviewed to understand the interaction between the immune system and nanoparticles and their advantages and disadvantages in immunotherapy systems. Approaches to and research on the design and development of nanoparticle-based cancer immunotherapy are promising. Nanotechnology-based studies enable therapeutic efficacy with a low dose of therapeutics, avoid cytotoxicity, and spare the patient's healthy cells. The quality and duration of cancer patients' lives can be improved by developing new nanoparticle-based methodologies in cancer immunotherapy.
Introduction
Understanding the immune system and its components may enlighten potential future treatments to control disease progression in conditions such as cancer. For almost 30 years, targeting the immune system with therapeutics has brought a totally new point of view to the field of cancer treatment. Accordingly, besides the commonly preferred cancer treatments, treatments developed specifically for the patient and the disease have come to the forefront. To date, immunotherapy has been developed as an alternative to conventional cancer treatments [1,2]. The immune system, an awareness system based on distinguishing between "self" and "nonself", works in harmony with its cells, related tissues, and organs to protect the body.
Cancer
Today, non-communicable diseases are the leading cause of death worldwide. Among these diseases, cancer is considered one of the most important in the world, causing deaths and reducing quality of life [5]. The number of cancer patients and cancer-related deaths is expected to increase in the near future [6]. The first recorded description of cancer, from around 3000 BC, appears in the Edwin Smith Papyrus, part of an ancient Egyptian textbook on trauma surgery. Cancer is generally characterized by the growth of abnormal cells beyond their normal limits. It can affect almost any part of the body and has many anatomical and molecular subtypes, each of which requires specific treatment strategies. The main factors causing cancer are ionizing radiation, ultraviolet rays, age, inadequate physical activity, smoking and alcohol consumption, nutrition and diet, chemicals, microorganisms, and genetic factors. Environmental factors are known to be much more influential in the formation of the disease than hereditary factors; the most important cause is stated to be mutations that occur in genes. Most cancers are caused by a series of mutations that allow cells to divide faster, escape internal and external controls, and evade programmed cell death. As cells continue to divide under the influence of mutations in solid tissue such as an organ, bone, or muscle, the resulting mass is called a tumor. Solid tumors are classified as benign (noncancerous) or malignant (cancerous). Benign tumors cannot metastasize; they can only grow where they are located. Malignant tumors, on the other hand, can spread from where they form to neighboring tissues and organs. Many types of cancer initially show no symptoms. The main symptoms are unexplained and rapid weight loss, fever, malaise, pain, swelling, and bleeding. However, each type of cancer has its own specific symptoms, so the treatment method for each cancer type differs. Figure 1 shows a schematic representation of tumor cell progression.
Cancer treatment methods
Cancer is an individual disease; hence, treatment methods vary from patient to patient. The method of treatment should be chosen by considering the degree and course of the disease and the age and health status of the patient. Generally, most patients receive a combination of treatment methods. Surgical intervention, radiation therapy, chemotherapy, and hormone therapy are defined as traditional cancer treatments in the literature [2]. In recent years, immunotherapy has also become an increasingly used method in cancer treatment.
Surgical intervention
Surgical intervention, a local treatment method, can also be used in combination with other treatment methods. It is applied to tumors without metastasis that exist in only one area, such as solid tumors, but it is not effective in leukemia or cancer types that have spread. Surgical intervention is also preferred when the tumor is in a part of the body that other treatment methods, such as radiation therapy or chemotherapy, cannot treat (for example, when they cannot reach the brain). In order to remove the tumor without damaging the neighboring healthy cells, the size of the tumor can first be reduced by other methods. Surgery works against cancer in three ways: eradicating the entire tumor, debulking a tumor, and palliating disease symptoms. Eradicating the entire tumor may cure patients if the cancer cells are confined to a small area in one place. Debulking is used to reduce the tumor's size when surgery is combined with other treatment methods. Palliating symptoms, the last approach, means removing the tumor to reduce the pain or pressure it causes. Surgical intervention has some disadvantages, such as the possibility of microscopic residues remaining around the tumor after surgery, dependence on the patient's general health, and dependence on the success of the operation [7,8].
Radiation therapy
Radiation therapy or radiotherapy (RT) is based on the principle of using a fairly high dose of radiation to shrink the tumor by killing the cancer cells. There are various types of radiotherapy, depending on the general state of the patient and the disease. The principle of radiation therapy is to destroy as many cancer cells as possible without damaging healthy cells, because, as scientists discovered in the late 20th century, radiation therapy not only kills cancer cells but may also cause cancer itself. Its notable side effect is that it can significantly kill and harm healthy cells, leading to effects such as hair loss, vomiting, and loss of appetite that affect daily life. The choice of the exact type of radiation therapy relies on several circumstances, such as the type, stage, size, and location of the cancer, and the medical history of the patient. Reducing the tumor mass by radiation therapy helps to decrease the pressure of the tumor on the nearest healthy cells. Additionally, radiation therapy is used before surgical intervention to shrink the tumor mass and make it suitable for surgery; after surgery, the microscopic residues at the edge of the tumor can be removed much more easily. This method of therapy is also very suitable for systemic therapy [9,10].
Chemotherapy
Chemotherapy (CT), also known as chemo, is the most commonly used method in cancer treatment. The aim is to kill cancer cells using chemotherapeutic agents. The method was developed in the late 20th century and is combined with surgery and/or radiation therapy. Over the years, many chemotherapeutic drugs have shown great impact and achieved success in the treatment of many types of cancer. The aims of the treatment can be stated as reducing the size of the tumor, reducing the effects of the symptoms seen in the patient, preventing metastasis, and reducing the total number of tumor cells in the body. The drugs used in chemotherapy direct the cell to death by stopping or decelerating cancer cell proliferation. Some of these drugs are natural and some are synthetic. Hair loss, vomiting, loss of appetite, fever, diarrhea, and fatigue are temporary side effects of the drugs that end after the treatment [11,12].
Hormone therapy
Hormones, in the classical sense, are organic compounds that are synthesized in ductless glands known as endocrine organs, such as the pituitary, adrenal, thyroid, and parathyroid glands, and that act on certain target tissues after being carried there by the blood. Cells communicate with each other via hormones. In the human body, hormones can be small proteins (insulin, etc.) or stimulators that prompt a cell to generate new proteins or to cease making products. One prominent outcome is cell growth and proliferation. Even though cancer cells are abnormal, they retain the ability to react to hormone signals. The main idea of hormone-based treatments is to deprive cancer cells of the hormone signals that would otherwise stimulate them to continue dividing. The drugs used in this method rely on preventing the activity of the hormone within the target cell or blocking the production of the related hormone. Hormone therapy is often preferred for the treatment of prostate and breast cancer. Generally, hormone therapy is combined with other treatment methods depending on the cancer type. It is very suitable for adjuvant and neoadjuvant therapy to reduce tumor mass; the term adjuvant therapy refers to reducing the risk of cancer recurrence after the main cancer treatment. Hormone therapy is also appropriate for the removal of cancer cells that have spread to different parts of the body. Like all other methods, hormone therapy has side effects, but these depend on the body's response to the therapy and the type of hormone therapy, and they are influenced by factors such as the patient's sex and the type of hormone used. Hot flashes, weakened bones, nausea, and fatigue are common side effects for men. Menstrual irregularities for women who are not menopausal and vaginal dryness are seen in addition to the common side effects. To date, there are several hormone-based drugs that act on hormonal signals, but their principles differ from each other; they all attack different parts of the signaling pathways to decelerate the growth of cancer [13].
Immunotherapy
Nowadays, cancer treatment is moving from non-specific methods to specific methods. Although success is achieved in the destruction of tumors with surgery and radiotherapy, cancer may recur from cancerous cell debris in the damaged area. Cancer immunotherapy, an individualized method, is referred to as the "fifth step" of treatment, following the traditional methods mentioned above [14]. Immunotherapy boosts the immune system to fight cancer, trains the memory of immune system components, attacks the cancer cells, and heightens the immune response via biological substances. Over the last few decades, immunotherapy has become a promising way to fight cancer. It can be applied using either external substances or the patient's own cells [4].
Historical background of cancer immunotherapy
It is common knowledge that many cases of regression of tumor growth after high fever attacks or infectious diseases have been reported throughout history, from Ancient Egypt onwards. However, the relationship between the immune system and cancer was only noticed in the middle of the 19th century, with developing technology. In the mid-19th century, two German doctors, Busch and Fehleisen, independently reported cases of tumor regression in patients after erysipelas (a Streptococcus pyogenes infection). The first systematic immunotherapy study for the treatment of malignant tumors in the literature was conducted in 1891 by William B. Coley, a surgical oncologist. Coley injected heat-inactivated Streptococcus pyogenes and Serratia marcescens organisms into patients to stimulate the immune system. In the project he initiated, Coley saw tumor regression in more than 1000 sarcoma patients who could not undergo surgical intervention. In a very short time, this mixture came to be regarded as a great invention, "Coley's Toxins". However, the word "toxin" was an unfortunate choice; the more acceptable name for the treatment was "mixed bacterial vaccine". Although the bacteria had some side effects such as fever and malaise, the treatment was not as toxic as chemotherapy or radiotherapy and did not destroy the immune system [15, 16]. Coley's lifelong cancer immunotherapy studies, which would spearhead the work of many scientists, started after this project. In 1900, Paul Ehrlich reported the first findings of what would later be called antibody-mediated passive immunotherapy, which has an important place in the treatment of tumors. In 1975, Georges Köhler and César Milstein developed hybridoma technology for monoclonal antibody production. This was followed by the first successful use of monoclonal antibodies in human neoplasia in 1982 and the FDA (US Food and Drug Administration) approval of muromonab-CD3 (Orthoclone OKT3) in 1986. In 1997, both the first humanized monoclonal antibody, daclizumab (Zenapax), and the first monoclonal antibody for malignancy, rituximab (Rituxan), were approved by the FDA. This was followed by the FDA approval of gemtuzumab ozogamicin (Mylotarg) in 2000, the first toxin-bound monoclonal antibody, and ibritumomab tiuxetan (Zevalin) in 2002, the first radionuclide-bound monoclonal antibody [17].
Another area in which cancer immunotherapy has advanced is the use of the patient's own cells. In the 1960s, the tumor immune surveillance hypothesis was put forward by Burnet. Since 1995, persuasive studies on effective tumor-specific immunity have attracted great interest; in particular, many studies showing the ability of dendritic cells to elicit tumor-specific T cell immunity have contributed to this. Following preclinical research, many studies involving various types of cancer have been conducted in patients, and recent studies have made the immunosurveillance hypothesis quite popular [18, 19]. Immunotherapy studies have grown in importance in the 21st century with the licensing of clinical studies carried out with developing technology and methods [20]. Immunotherapy was declared "breakthrough of the year" by Science magazine in 2013 after its clinical success and has become even more prominent. In 2018, James Allison and Tasuku Honjo received the Nobel Prize in Physiology or Medicine for their work on using the immune system to destroy cancer cells. In the past two decades, great strides have been made in cancer immunotherapy, and with all these developments the number of cancer immunotherapy studies is increasing day by day [21, 22]. There are certain recurring themes in cancer immunotherapy research: the mechanisms of innate and acquired immune resistance, intrinsic and extrinsic resistance to immunotherapy, self-neutralization of tumor cells and antigen-presenting cells, inhibition of immunity by exosome release mechanisms, and the response of tumor cells to therapy. Like all other methods, cancer immunotherapy has several advantages and disadvantages. Higher precision and specificity, long-term survival, fewer side effects than traditional treatment methods, removal of residual tumor cells and microscopic lesions that remain in the body after treatment, and improvement of the body's immune function are its advantages. It can also control and kill more than one tumor type, and it uses the body's own immune system to increase the immune response. Higher treatment costs and various non-specific toxic side effects after treatment are its disadvantages, and patients must be selected carefully for treatment. When the tumor is of an "immunosuppressant" or "immune-excluded" type, the effect of immunotherapy is considerably weak. Additionally, the use of immune checkpoint inhibitors in particular can have adverse consequences, leading to autoimmune diseases and even death [23].
Classification of immunotherapy
Cancer immunotherapy is generally classified into three types: passive, active, and combination immunotherapy, depending on the mechanism of the therapeutic agent and the state of the patient's immune system. The classification of passive and active cancer immunotherapy studies is shown in Table 1.
Passive immunotherapy
The main purpose of passive immunotherapy is to enhance the existing antitumor response using therapeutics that can be produced under laboratory conditions. It is preferred for patients with weak or dysfunctional immune systems. Components of the immune system are modified in the laboratory and designed to attack tumor cells independently. Monoclonal antibodies and adoptive cell therapy are frequently used passive immunotherapy methods [4, 20, 24].
Monoclonal antibodies
For the past 20 years, monoclonal antibodies have been the most commonly used FDA-approved treatment in clinical immunotherapy studies. They are large engineered proteins with high antigen specificity produced by particular B cells. Due to their antigen specificity, their capacity to bind to epitopes on the surface of the tumor cell is high [25]. Antibodies specific to antigens of cancer cells are therefore produced under ex vivo conditions and transferred to the patient to increase the immune response. In these targeted therapies, antibodies are guided directly to the antigen on the surface of cancer cells, and different signaling outcomes can be created by the interaction of monoclonal antibodies with receptors on the surface of malignant tumors. The antibodies used in treatment can be classified as naked, conjugated, radiolabeled, chemically labeled, and bispecific monoclonal antibodies. Naked monoclonal antibodies are the most commonly used in cancer immunotherapy and bind directly to the antigen without any radioactive markers or drugs. Conjugated monoclonal antibodies are used to transfer chemotherapeutic drugs or radiolabeled particles to cancer cells. Radiolabeled monoclonal antibodies are created by adding radioactive particles to naked antibodies, while chemically labeled antibodies are monoclonal antibodies carrying a potent chemotherapeutic. Radioactive or chemically labeled monoclonal antibodies aim to destroy the target cell with the toxins they contain or the radiation they emit. Bispecific antibodies carry two antibody specificities in their structure and can bind two different antigens at the same time [18, 26, 27]. The first monoclonal antibody drug approved by the FDA, rituximab (Rituxan, Genentech), entered clinical use in 1997. Today, with developing technology, many new drugs have emerged for the treatment of different types of cancer [25].
Adoptive cell therapy
Adoptive cell therapy gathered speed with 20th-century studies on the discovery of tumor-specific antigens located only on tumor cells and not on healthy cells, which established the importance of adoptive T cell transfer. Adoptive cell therapy is the transfer of natural or genetically modified T cells, prepared under ex vivo conditions, to patients instead of stimulating the immune system directly. The transferred cells can be autologous or allogeneic and targeted to a particular antigen in the host. In effect, one stage of the immune response in the host is skipped by this step. To create a targeted immune response, autologous cells must recognize tumor antigens, exit the circulation, and move towards the tumor. The transfer of T cells to destroy tumor cells is carried out in two ways: the use of tumor-infiltrating lymphocytes (TILs), tumor-specific T cells isolated from existing tumors, and the use of genetically modified T cells that specifically identify tumor cells. In both methods, the T cells are processed ex vivo and then transferred back to the patient [28]. The first successful cellular therapy in history was performed on an advanced melanoma patient with autologous TILs. A specific T cell receptor (TCR) can be obtained by genetically modifying T cells; T cells and tumor-specific antigens are matched through HLA recognition by TCR technology, and a minimal cytotoxic effect occurs through this natural pairing. TCRs also have disadvantages, such as low surface expression and the short lifespan of T cells in vivo. Although the first studies ended in disappointment, the other type of genetically modified T cell used today carries chimeric antigen receptors (CARs). Many studies on CAR-T technology are being conducted around the world, and it is believed that positive results will be achieved in the near future [29, 30].
Active immunotherapy
Active immunotherapy aims to destroy cancer cells by stimulating the immune system through vaccination, immunomodulation, or the targeting of specific antigen receptors. It is carried out by means of cancer vaccines, oncolytic viruses, immune checkpoint inhibitors, and cytokines [20].
Cancer vaccines
The purpose of vaccination is to create an immune response that detects and destroys cancer cells. Cancer vaccines, containing whole, partial, or purified tumor antigens, can be peptide-based, immune or dendritic cell-based, or tumor cell-based. After the tumor cells are removed from the body, the patient is vaccinated and an immune response is raised against any tumor cells that may remain. Variable antigen expression, low immune response, suppression of the immune response in the tumor microenvironment, and a decrease in activity over time are the limitations of cancer vaccine applications [4, 25].
• Peptide-based vaccines are designed to create an immune response against tumor antigens that interact with HLA molecules on the surface of tumor cells. Their toxic effects on healthy cells are low due to their antigen-specific design, but the tumor antigen peptides and the patient's HLA type should be well characterized [31].
• Immune or dendritic cell-based vaccines use tumor-associated antigens or autologous tumor cells together with dendritic cells (DCs) obtained from monocytes, as in early-stage cancer vaccines. In 2010, Sipuleucel-T (Provenge, Dendreon Corp.) became the first DC-based cancer vaccine approved by the FDA, for the treatment of prostate cancer. DC-based vaccines today use innovative in vitro culturing techniques enriched with cytokines, enhancing immunogenicity and improving DC function. DC-based cancer vaccines can be designed for both ex vivo and in vivo applications for various cancer types [4].
• Tumor cell-based cancer vaccines use the entire tumor cell to create an immune response. Unlike peptide-based vaccines, they are not specific to particular surface antigens, but the range of epitopes to which they can bind is wider. These vaccines can be prepared using the patient's own cells (autologous) or another patient's tumor cells (allogeneic). Tumor cell-based vaccines such as M-Vax (AVAX Technologies) are being used in clinical studies for the treatment of many different types of cancer [25].
Oncolytic viruses
Oncolytic viruses are genetically altered viruses that can selectively infect cancer cells and kill them. In most cancer cells, the protection mechanisms developed against viral infections are impaired; by taking advantage of this defect, oncolytic viruses can replicate far more efficiently in cancer cells than in healthy cells. Talimogene laherparepvec (T-Vec) is the first oncolytic virus-based drug approved by the FDA. As early as 1991, positive results were obtained in the treatment of brain cancer with a genetically modified type 1 herpes simplex virus. More recently, cancer cell-specific replication was achieved with a reovirus variant called Reolysin, which exhibits oncolytic behavior in cells with an activated Ras signaling pathway [32].
Immune checkpoint inhibitors
Several inhibitory receptors and ligands expressed on T cells, antigen-presenting cells, and tumor cells have recently emerged as important elements of immunosuppression in the tumor microenvironment. Because of their biological role as regulators of T cell activation, these receptor/ligand pairs have been termed "immune checkpoints". Immune checkpoints are cell membrane proteins involved in the regulation of the immune response. Multiple such checkpoints are present or activated to ensure that the immune-inflammatory response does not remain continuously active after tumor antigens have generated a response; they are signals that can halt an existing immune response. The overexpression of these signals by tumor cells suppresses tumor-specific T cell immunity in the cancer microenvironment. The aim of treatments involving immune checkpoint inhibition is to harness and strengthen the immune system by disrupting this negative regulation. In 2011, ipilimumab became the first immune checkpoint drug to enter clinical use, for melanoma patients. As of March 2019, seven checkpoint-based immunotherapy drugs were in clinical practice. Monoclonal antibodies against immune checkpoints bind cytotoxic T lymphocyte-associated molecule-4 (CTLA-4), programmed cell death protein 1 (PD-1), and programmed cell death ligand 1 (PD-L1) [33].
• PD-1/PD-L1: under normal circumstances, PD-1 has two ligands, PD-L1 and PD-L2. Blocking the interaction between PD-1 and PD-L1 with antibodies enhances the immune response against cancer cells, "releases the brakes" on the immune system, and allows the attack of tumor cells that express PD-L1. Nivolumab and pembrolizumab were the first two such drugs approved by the FDA, in 2014 [34].
• CTLA-4: inhibition of CTLA-4 increases the activation of cytotoxic T cells. Thus, the immune blockade due to Treg cells is relieved and antitumor activation is observed. Ipilimumab was the first drug approved by the FDA for CTLA-4 blockade, in 2011 [35].
Cytokines
Cytokines are the main regulators of the innate and adaptive immune systems, allowing immune cells to communicate over short distances in paracrine and autocrine fashion. Unlike other therapeutic agents, these molecules directly stimulate immune cells; for example, interleukin-21 (IL-21) can act as an agent in active immunotherapy [36]. The use of cytokines in cancer immunotherapy has shown tumor regression, prevention of metastasis formation, improvement of immunological memory, and a decreased risk of disease recurrence with increased survival. Cytokine-based biological therapy (IL-2, GM-CSF, IFN-α) in combination with conventional therapies is under clinical development [37]. In 1986, IFN-α became the first FDA-approved cytokine, for the treatment of leukemia. Subsequently, IL-2 was approved by the FDA in 1992 for metastatic kidney cancer and in 1998 for advanced melanoma [36].
Combination immunotherapy
Combination immunotherapy refers to the use of different anticancer agents together in cancer treatment. The conjugation of IL-2 with a HER-2 monoclonal antibody proved to be a very powerful combination in immunotherapy. Lately, combined PD-1 and CTLA-4 blockade has been examined; the results revealed that the combined regimen was safe and had no significant toxic effect [38].
Nanoparticles in cancer immunotherapy
Nanoparticle-based biomaterials play a critical role in cancer immunotherapy compared with conventional drugs [39]. Immunotherapy often targets tumor cells and the immune and stromal cells in the tumor microenvironment [40]. Side reactions arising from the interactions between nanoparticles (NPs) and cells can be tuned by modifying the nanoparticles [41]. Nanoparticle-based drug delivery systems can improve solubility, in vivo stability, and the pharmacokinetic profile, and they protect drugs from premature release and degradation in the living system. These systems can be designed to respond to the microenvironment of the target, such as pH, redox potential, or enzymes, and to external stimuli such as light, electrical, and magnetic fields. Targeted delivery with NPs can also reduce toxicity and immune-related side effects [2]. The size and shape of an NP strongly affect its therapeutic efficacy by changing its pharmacokinetics, transport, and cellular uptake [42]. Recent advances in nanoparticle formulations have generated a wide range of non-spherical shapes such as rods, prisms, cubes, stars, and discs. Non-spherical particles are considered to have longer blood circulation times, prolonged margination effects, and higher penetration capacities within solid tissues and tumors [43]. The charge of an NP is of great importance for its entry into cells; in addition, NP-ligand coupling conditions and the elasticity of the NP improve its transport and accumulation in the living system [44, 45]. It is generally well known that cationic NPs create a higher immune response than neutral or anionic NPs [43]. The size, shape, elasticity, and optical, magnetic, and electrical properties of nanoparticles can be modified to extend their use as carriers in cancer therapy [2, 41, 46]. High specificity, efficacy, and diagnostic, imaging, and therapeutic properties make NPs candidates for effective cancer immunotherapy. Liposomes, micelles, and polymeric, metallic, and inorganic NPs have a wide range of uses in cancer immunotherapy [44].
Classification of nanoparticles
Nanoparticles are generally categorized into three classes: organic, inorganic, and carbon-based. Dendrimers, micelles, and liposomes are the most widely known organic nanoparticles. These biodegradable, non-toxic, and capsule-shaped nanoparticles appear to be an ideal choice for drug delivery due to their sensitivity to thermal and electromagnetic radiation. Inorganic nanoparticles, namely metal and metal oxide-based NPs, do not contain carbon in their structure. Aluminum, cadmium, cobalt, copper, gold, iron, lead, silver, and zinc can be used to fabricate metallic NPs in the 10 to 100 nm size range. Carbon-based nanoparticles, namely fullerenes, graphene, carbon nanotubes (CNTs), and carbon nanofibers, are built up from carbon at the nanoscale [47].
Preparation methods of nanoparticles
Nanoparticle synthesis can be approached in two different ways: bottom-up and top-down methods (Table 2). These techniques can also be divided into chemical and physical methods. Although both approaches have positive and negative features, the chemical route has more disadvantages due to the wet reaction steps it involves [48].
Bottom-up method
The bottom-up approach is also known as a constructive method, building a material up from the atomic level. Sol-gel, spinning, chemical vapor deposition (CVD), pyrolysis, and biosynthesis are the foremost methods of this type. Nanoparticles, nanoshells, and nanotubes with narrow size distributions can be synthesized by this approach, and the deposition parameters can be controlled. However, large-scale production is difficult and chemical purification is needed.
• Sol-gel method: This is a simple, wet chemical process based on hydrolysis and polycondensation reactions [49]. It involves the chemical transformation of a system from a "sol" phase, a colloidal suspension of solids in a liquid, into a "gel" phase, a solid macromolecular network submerged in a solvent [50]. The desired chemical and physical properties of the materials, such as high surface area and stability, can be obtained by modifying the experimental conditions. Metal oxide and chloride precursors are used in the sol-gel process; a liquid and a solid phase then separate after the precursors are dispersed by shaking, stirring, or sonication, and the nanoparticles are recovered from this phase separation by sedimentation, filtration, or centrifugation [47].
• Spinning disc processing (SDP): This method uses a rotating disc inside a reactor, generally filled with nitrogen or another inert gas to exclude oxygen and avoid unwanted chemical reactions. The purpose of spinning is to merge atoms or molecules. Process parameters such as the liquid flow rate, disc rotation speed, liquid/precursor ratio, feed location, and disc surface may vary between systems and determine the characteristics of the NPs [51].
• Chemical vapor deposition (CVD): This is the deposition of thin films from gaseous reactants onto a substrate. The deposited materials can be elemental and compound semiconductors, metal alloys, and amorphous or crystalline compounds. In the CVD process, a volatile, chemically reactive material is combined with other gases to produce a nonvolatile solid that deposits at the atomic level on a suitable substrate. The process requires a reactor chosen according to the type of precursors, the deposition conditions, and the form of energy introduced to the system to drive the intended chemical reaction; metal-organic, plasma-enhanced, low-pressure, laser-assisted, and aerosol-assisted CVD are the most widely accepted variants [52]. The deposition is carried out in a reaction chamber at a temperature suitable for the reaction: the substrate is heated, and the chemical reaction occurs when the combined gas contacts the heated substrate. The substrate temperature is an important parameter in obtaining pure, uniform, hard, and strong nanoparticles [47, 53].
• Spray pyrolysis: This method is often used in industry for the large-scale production of NPs. Nanometals and metal oxides are generally produced by this simple, reproducible, size-controllable, and low-cost method [54]. The process feeds a precursor into a flame: the precursor solution is sprayed or injected through a nanoporous nebulizer onto a hot substrate in a furnace at high pressure to form droplets. The precursor can be either liquid or vapor. After evaporation, the precursor decomposes, yielding nanoparticles or films on the substrate. Some furnaces use a laser or plasma to produce the high temperatures that facilitate evaporation [55].
• Biosynthesis: This is an alternative to conventional physical and chemical nanoparticle synthesis methods. Plants are preferred in this green, environmentally friendly, and cost-effective technique to prepare non-toxic and biodegradable nanoparticles [56]. In this method, microorganisms such as bacteria, fungi, and yeasts are used along with the precursors to produce nanoparticles, serving bioreduction and capping purposes. Biosynthesized nanoparticles have unique and enhanced properties that find a wide range of applications in drug delivery systems [57].
Top-down method
The top-down approach is also known as a destructive method, since bulk material is reduced to nanometric-scale particles. Contrary to the bottom-up approach, large-scale production is possible and chemical purification is unnecessary. Broad size distributions (10-1000 nm), varied particle shapes, limited control over deposition parameters, and high process costs are disadvantages of this approach. There are many techniques in this class, but mechanical milling, nanolithography, laser ablation, and sputtering are among the most frequently used.
• Mechanical milling: This process has long been used in the mineral, ceramic processing, and powder metallurgy industries. Mechanical milling aims to minimize particle size, blend materials, change particle shapes, and synthesize nanoparticles in a high-energy mill with a suitable milling medium. For nanoparticle synthesis, the elements are ground in an inert atmosphere. Mechanical milling is an economical method for producing nanosized material in large quantities [58]. The dynamics of mechanical milling depend on the energy transferred to the material from the balls [59]. The type of mill, the power supplied to drive the milling chamber, the milling speed, the size and size distribution of the balls, dry or wet milling, the milling temperature, and the milling duration are the factors that affect this energy transfer. Deformation, fracture, and welding also cause variations in particle shape and size [58].
• Nanolithography: This is the fabrication of structures in the nanometric size range of 1 to 100 nm. Lithography combines deposition and etching to produce high-resolution topography. There are two main approaches, masked and maskless lithography, each containing many techniques. While a mask or a mold is needed in masked lithography to fabricate patterns, maskless lithography produces arbitrary patterns without the use of a mask. Photolithography, soft lithography, and nanoimprint lithography are the main techniques of masked lithography; maskless lithography comprises electron beam lithography, ion beam lithography, and scanning probe lithography [60]. The process amounts to printing a material in a required shape or structure on a light-sensitive material. The main advantage of nanolithography is the ability to make many copies of the desired shape and size from a single template; on the other hand, the required equipment and its cost are disadvantages [61].
• Laser ablation (LA): In this complex PVD process, a laser of tunable wavelength irradiates the surface of a solid or liquid target material, whose refractive index also influences the process. The laser ejects electrons from the target material in a high electric field, and the scattered electrons collide with the atoms of the bulk sample, transferring energy. This leads to heating of the surface and vaporization; at high laser flux the material is converted to a plasma state. The method has various applications, such as welding, cladding, cutting, cleaning, and the generation of nanoparticles, and the ambient conditions (vacuum, air, gas, or liquid) can be varied. Pulsed-laser ablation of solid targets has great potential in laser-material microprocessing, nanotechnology, and device fabrication. Laser Ablation Synthesis in Solution (LASiS) is a common and reliable top-down method that provides an alternative to the conventional chemical synthesis of metal-based nanoparticles; since organic solvents and water can be used in LASiS, the method can be considered a 'green' process [62, 63].
• Sputtering: The principle of this physical process is to use the energy of a plasma at the surface of a target material to dislodge its atoms and deposit them on a substrate with energetic ions. After bombardment with ions, atoms are ejected from the target and deposited onto a substrate in a vacuum sputtering chamber. This high-vacuum coating technique belongs to the group of PVD processes. The shape, size, and composition of the nanoparticles vary with the layer thickness, temperature, annealing time, and substrate type [64].
Conclusion
The application of polymeric NPs in cancer therapy has been studied for decades. Poly(lactic-co-glycolic acid) (PLGA), chitosan, and polyethylene glycol (PEG) are the most common FDA-approved polymeric carriers for drug and bioagent delivery. PLGA and chitosan contain hydrophobic domains that are also capable of activating immune cells through their adjuvant character. In general, PLGA-based NPs for cancer immunotherapy are based on targeting dendritic cells. Micelles and liposomes are also convenient for the delivery of therapeutics and antigens; recently, immunomodulatory nanoliposomes of 100 nm size were designed to deliver cancer antigens. Research to date has demonstrated the importance of NPs in cancer immunotherapy. Antigen-NP conjugated systems help present the immunotherapeutic agent to antigen-presenting cells efficiently, and a higher immune effect occurs with immunotherapeutic agent-loaded nanodelivery systems than with free immunotherapeutic agents. Prolongation, antigenicity, adjuvant selection, and inflammation are the most critical parameters for designing and engineering NPs.

On the other hand, there are still some issues to be solved in cancer immunotherapy. In some cases, insufficient information about cancer cells causes drugs not to produce the expected effect. Scientists do not yet have precise information about the behavior of nanoparticles in living systems. In addition, there are difficulties in adjusting the toxicity, characterization, and monitoring of the behavior of nanomaterials in biochemical pathways. Moreover, failure to comply with the rules of drug use in such practices makes the work of researchers even more difficult.

Nevertheless, nanotechnology is promising for oncological applications, both for precise diagnosis and for combating cancer cells. In light of the literature, interdisciplinary approaches to the design and development of nanoparticle-based cancer immunotherapy are promising. Nanotechnology-based approaches enable therapeutic efficacy at a low dose of therapeutics, avoid cytotoxicity, and spare the patient's healthy cells. The quality and duration of cancer patients' lives can be improved by developing new nanoparticle-based methodologies in cancer immunotherapy.

Author details

Kadriye Kızılbey 1 *, Nelisa Türkoğlu 2 and Fatma Ceren Kırmızıtaş 2

1 İstanbul Yeni Yüzyıl University, Biomedical Engineering Department, İstanbul, Turkey

2 Yıldız Technical University, Department of Molecular Biology and Genetics, İstanbul, Turkey

*Address all correspondence to: kadriyekizilbey@gmail.com
© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Design, Construction, and Analysis of Specific Zinc Finger Nucleases for the microphthalmia-associated transcription factor
This work studied the design, construction, and cleavage analysis of zinc finger nucleases (ZFNs) that could cut specific sequences within the microphthalmia-associated transcription factor gene (mitfa) of zebrafish. The target site and ZFPs were selected and designed with zinc finger tools, while the ZFPs were synthesized using DNAWorks and two-step PCR. The ZFNs were constructed, expressed, purified, and analyzed in vitro. As expected, the designed ZFNs created a double-strand break (DSB) at the target site in vitro. DNAWorks, two-step PCR, and an optimized protein expression process were introduced into the construction of ZFNs for the first time, providing an effective and simplified protocol. These results could be useful for the further application of ZFN-mediated gene targeting.
INTRODUCTION
Reverse genetics has been a focus in the fields of gene function and the gene therapy of diseases. However, the reverse genetics techniques available for non-mammalian vertebrates and plants are limited to morpholinos (MO), RNAi, and TILLING. Performed in the early period of embryonic development, morpholinos have no effect on the genome (Sumanas 2002). Heritable mutants cannot be obtained by RNAi technology either (Hannon 2002). Although heritable mutants can be obtained by TILLING, the technology is inefficient and time-consuming for genes containing many introns (Sessions 2002). Gene targeting using homologous recombination has performed well in mouse embryonic stem cell lines; however, targeted genomic manipulation has failed in other non-mammalian vertebrates (Deiters 2006). The zinc finger nuclease (ZFN) has become a new tool for gene knockout in other metazoans, plants, and even human cell lines. ZFNs are engineered restriction endonucleases that can be used to cut DNA sequences and create double-stranded breaks (DSBs) at target points in the chromosomes. Most DSBs in the genome may be repaired by homologous recombination (HR), while some DSBs in damaged chromosomes are repaired by non-homologous end joining (NHEJ), in which small deletions or insertions accompany the ligation. Hence, mutants with site-specific manipulation can be obtained through NHEJ. ZFNs are fusion proteins composed of DNA-binding domains and non-specific cleavage domains derived from the cleavage domain of the wild-type restriction enzyme FokI (WT FokI). WT FokI was discovered in Flavobacterium okeanokoites and is a member of the type IIS restriction endonucleases; it is also made up of a cleavage domain and a DNA recognition domain. The recognition domain of WT FokI binds to the sequence 5'-GGATG-3' (Sugisaki, 1981), while the cleavage domains form a dimer to cut DNA sequences. The cleavage domains of WT FokI constitute the cleavage domains of ZFNs, which are more active in the form of heterodimers; thus, ZFNs form heterodimers to induce a DSB at target points in chromosomes (Bibikova, 2001). The other domain of a ZFN is the DNA-binding domain, a zinc finger protein (ZFP) consisting of several C2H2 zinc finger (ZF) motifs. Each ZF motif comprises 30 amino acids and contains an α-helix and two β-sheets. A ZF motif binds to a triplet of DNA sequence through crucial amino acids at positions -1, 2, 3, and 6 of the α-helix (Erickson, 1999). Thus, different ZF motifs can be designed by changing these crucial amino acids to recognize different DNA triplets, while the other amino acids are maintained as an unaltered backbone (Wolfe 1999). Each ZF motif binds one triplet, so a ZFP consisting of consecutive ZF motifs recognizes consecutive triplets. ZFs recognizing 64 possible triplets have been detected and isolated with phage display (Segal 1999; Liu 2002); thus, ZFPs and ZFNs can in principle be designed to bind and cleave arbitrarily chosen sequences. Interest in ZFN applications has been stimulated in the fields of gene knockout and gene replacement at target sites in the genomes of many model organisms, including Xenopus laevis (Bibikova 2001), plant cells (Lloyd 2005), Drosophila (Bibikova 2002), Caenorhabditis elegans (Morton 2006), Danio rerio (Doyon 2008), and even human cells (Porteus 2003). The DNA-binding domain of a ZFN contains a variable number of ZF motifs; three ZF motifs are believed to be the minimum needed to achieve adequate specificity and affinity. Although adding more ZF motifs may enhance the binding specificity, it also increases the difficulty of ZFP gene synthesis and of searching for an appropriate site. Three or four ZF motifs have been used widely and successfully for strict cleavage in the genome (Bibikova 2002; Porteus 2006). Wide application of ZFNs can be anticipated, thanks to targeted gene manipulation in non-mammalian vertebrates.
Microphthalmia-associated transcription factor (mitf), a member of the basic helix-loop-helix leucine zipper protein family, regulates specific gene expression and signal transmission in melanocytes. Mitf, a gene conserved in evolution, encodes five protein isoforms, including MITF-A, MITF-B, MITF-C, MITF-H, and MITF-M; in zebrafish it encodes two isoforms, MITF-A and MITF-B (Shibahara 2001). The mitfa gene was chosen as the target for constructing the specific ZFNs in this work, to establish a basis for gene function research and the construction of human disease models. This work studied a construction protocol for ZFNs against a target site, in which DNAWorks was introduced to design the coding sequences of the ZFPs. A simplified method of gene synthesis, the two-step gene synthesis method, was then carried out successfully using the DNAWorks software.
Search for ZFN target sites
The DNA and cDNA sequences of mitfa were found at the NCBI website. When these sequences are entered into the zinc finger tools website (http://www.scripps.edu/mb/barbas/zfdesign/zfdesignhome.php), several parameters must be chosen, such as "separated target sites", "the core sequence", and "triplets to search" (Mandell 2006). All candidate target sites in the DNA sequence are then output. According to the scores given by the website, the conserved sites of the DNA sequence, and the difficulty of gene synthesis, the DNA segment "tttgactcttatcaaagacctgat" between 2965 bp and 2988 bp of mitfa was chosen as a plausible target site.
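The site-selection criteria applied here can be illustrated with a small computational sketch. The following Python snippet is illustrative only and is not the zinc finger tools implementation: it slides a 9 bp + spacer + 9 bp window along a sequence and scores each candidate by the number of triplets belonging to the well-characterized GNN/ANN families, reading the left half-site on the reverse strand to reflect the tail-to-tail binding orientation of a ZFN pair. The function names and the scoring convention are assumptions made for this example.

    def revcomp(seq):
        # reverse complement of a DNA string
        return seq.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

    def half_site_score(site9):
        # count triplets of the GNN/ANN families, the triplet types
        # with well-characterized zinc fingers (Segal 1999; Liu 2002)
        return sum(site9[i].upper() in "GA" for i in (0, 3, 6))

    def scan_for_zfn_sites(seq, spacers=(4, 5, 6)):
        # score every 9 bp + spacer + 9 bp window; the left half-site
        # is read on the reverse strand (ZFNs bind tail-to-tail)
        seq = seq.upper()
        candidates = []
        for gap in spacers:
            window = 18 + gap
            for i in range(len(seq) - window + 1):
                left = revcomp(seq[i:i + 9])
                right = seq[i + 9 + gap:i + window]
                score = half_site_score(left) + half_site_score(right)
                candidates.append((score, i, gap))
        return sorted(candidates, reverse=True)

    # print the top-scoring windows in the mitfa segment chosen here
    print(scan_for_zfn_sites("tttgactcttatcaaagacctgat")[:3])

Real design tools additionally weight each finger by binding affinity and check the uniqueness of the site in the genome; this sketch only captures the triplet-family and spacer constraints discussed above.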
Design of the coding sequences of ZFPs
Once the target site was output, the amino acid sequences of ZFP1 and ZFP2 were also obtained with zinc finger tools. To design the coding sequences of the ZFPs, their amino acid sequences were entered into the DNAWorks website (http://helixweb.nih.gov/dna-works/), where several parameters needed to be defined, including codon optimization, codon frequencies, melting temperature, and hairpin formation (Hoover 2002). First, the codon usage table of zebrafish was found in the codon usage database and entered into DNAWorks. Second, the melting temperature was limited to 60ºC. Third, hairpin formation in the coding sequences of the ZFPs was avoided. Finally, the DNAWorks website output the coding sequences of the ZFPs and fourteen oligonucleotide sequences for the gene synthesis of each ZFP.
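The melting-temperature constraint is handled internally by DNAWorks, but the idea is easy to illustrate. The sketch below applies the classic Wallace rule, Tm ≈ 2ºC × (A+T) + 4ºC × (G+C), which is only a rough estimate valid for short oligonucleotides; DNAWorks itself uses a more rigorous nearest-neighbour model. The example oligo is simply the first 18 nt of the ZFP1 coding sequence reported below, used here for illustration only.

    def wallace_tm(oligo):
        # Wallace rule: 2 degC per A/T base plus 4 degC per G/C base;
        # indicative only for oligos of roughly 14-20 nt
        o = oligo.upper()
        at = o.count("A") + o.count("T")
        gc = o.count("G") + o.count("C")
        return 2 * at + 4 * gc

    print(wallace_tm("ctggaaccgggcgagaaa"))  # -> 58 (degC)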
Construction of ZFPs' coding sequences
To construct the coding sequences of the ZFPs, the two-step PCR method, comprising overlap extension (OE) PCR and amplification PCR, was used for gene synthesis according to Dong et al. (Dong, 2007). The fourteen oligonucleotides were mixed and diluted to 2 µM with ddH2O and used as the primer mixture in the gene synthesis PCR. The first-step PCR, OE PCR, was carried out in a 50 µl reaction mixture including 4 µl of the oligonucleotide mixture, 250 µM dNTP, 1×PCR buffer, and 0.25 U of TransStart FastPfu DNA polymerase (Beijing TransGen Biotech Co., Ltd.). The OE PCR process consisted of 30 cycles at 95ºC for 30 s, 64ºC for 60 s, and 72ºC for 60 s. Then 1.5 µl of the OE PCR product was added to the amplification PCR reaction mixture, including 0.4 µM outer primers, 250 µM dNTP, 1×PCR buffer, and 0.25 U of TransStart FastPfu DNA polymerase in a volume of 50 µl; this process consisted of 30 cycles at 95ºC for 30 s, 66ºC for 30 s, and 72ºC for 60 s, with a final extension at 72ºC for 5 min. The final PCR products were examined using agarose gel electrophoresis and purified with a gel purification kit (Beijing TransGen Biotech Co., Ltd.). The whole sequences were cloned into pUC57 vectors and subsequently sent to a commercial company (Shanghai Sangon Biotech Co., Ltd.) for sequencing to verify their accuracy.
Assemblage of ZFNs
To assemble the coding sequences of the ZFNs, FokI (RV) (one of the two complementary FokI cleavage domain variants, named FokI (RV) and FokI (DA)) was amplified by PCR from the plasmid pCMV-GZFN1-Fok-RV, while FokI (DA) was obtained from the plasmid pGK-GZF3-Fok-DA. The purified DNA sequences of FokI (RV) and FokI (DA) were cloned into two pET-30a vectors to form pET-FokI(RV)-30a and pET-FokI(DA)-30a, respectively (Fig. 1). The expression vector pET-ZFN1-30a was constructed by linking the ZFP1 sequence into pET-FokI(RV)-30a, and pET-ZFN2-30a was constructed by linking the ZFP2 sequence into pET-FokI(DA)-30a. Both pET-ZFN1-30a and pET-ZFN2-30a were amplified in DH5α E. coli.
Expression and Purification of ZFNs
To obtain the ZFNs against the mitfa gene in vitro, three steps were carried out: expression, detection, and purification. To express the ZFNs, the vectors pET-ZFN1-30a and pET-ZFN2-30a were transformed separately into competent DE3 E. coli cells, which were selected on LB plates containing kanamycin (10 µg/ml) at 37ºC for 12 h.
One positive clone of each vector was cultured overnight at 37ºC in LB medium containing kanamycin (10 µg/ml) and ZnCl2 (0.1 mM). Then 50 ml of culture was added to 1 L of high-concentration medium containing 32 g tryptone, 20 g yeast extract, 5 g NaCl, 2.7 g ZnCl2, 5 mM NaOH, 12.54 g K2HPO4, and 2.31 g KH2PO4. After growth at 37ºC for 3 h, 0.3 mM IPTG was added to the culture, which was shifted to 22ºC overnight to induce the expression of the ZFNs.
To detect the expression of the ZFNs, the expression strain cells were collected from 400 µl of culture by centrifugation at 6000 g for 3 min at 4ºC. They were then re-suspended in protein treatment solution, containing 200 µl of 10% SDS, 200 µl of beta-mercaptoethanol, and 600 µl of protein loading buffer, heated at 100ºC, and subsequently analyzed by SDS-PAGE.
To analyze the expression of soluble ZFNs, DE3 E. coli cells were collected by centrifugation at 6000 g for 5 min from 2 ml of culture and re-suspended in 2.0 ml PBS in an ice bath. After the cells in PBS were broken by sonication in an ice bath, the total soluble protein in the supernatant was collected by centrifugation at 6000 g for 3 min at 4ºC; the sediment consisted of the total insoluble protein of the cells. The soluble and insoluble protein fractions were analyzed separately by SDS-PAGE.
To purify the ZFNs, the total soluble protein of the DE3 E. coli cells was collected after sonication in an ice bath and purified using Ni-NTA resin (Nanjing GenScript Co., Ltd.) according to its instructions.
The cleaving activity detection of ZFNs in vitro
To detect the cleaving activity of the ZFNs, the targeted sequence was linked into the plasmid pEASY-Blunt Simple to serve as the DNA substrate. If the ZFNs cut the targeted sequence in the plasmid, the supercoiled conformation of the plasmid is converted to a linear conformation, a change that can be detected by agarose gel electrophoresis. The first step was the construction of the DNA substrate (Fig. 2). The DNA substrate sequence (cgcggatcctttgactcttatcaaagacctgataagcttggg), which contained the target sequence of ZFN1 and ZFN2 (tttgactcttatcaaagacctgat), was synthesized by PCR with primers Pt1 and Pt2. The sequence of Pt1 was "cgcggatcctttgactcttatcaaagacctg", and that of Pt2 was "cccaagcttatcaggtctttgataagagtcaaagg". The PCR was performed in a 30 µl reaction mixture including 0.3 µM Pt1, 0.3 µM Pt2, 1×PCR buffer, and 0.25 U of TransStart FastPfu DNA polymerase; the process consisted of 35 cycles at 95ºC for 10 s, 54ºC for 30 s, and 72ºC for 30 s, with extension at 72ºC for 10 min. The PCR products of 42 bases were detected by 3% agarose gel electrophoresis for 10 min, and the product bands were cut out and purified with a gel purification kit. The targeted sequence was cloned into pEASY-Blunt Simple and subsequently sequenced to verify its accuracy. The resulting plasmid, pEASY-target-Blunt Simple, was the DNA substrate for the analysis of ZFN activity in vitro. Following Dana Carroll's report (Carroll, 2006), the 20 µl reaction mixture consisted of 1×ZFN reaction buffer, 50 mM NaCl, 1 mM DTT, 100 µg/ml yeast RNA, 50 ng DNA substrate, and the ZFNs; in the reaction mixture, ZFN1 and ZFN2 were equimolar with each other and with the DNA substrate. The reaction was started with 1.0 µl of 0.2 M MgCl2 and run at 25ºC for 1 h. The ZFN reaction buffer consisted of 0.1 M Tris (pH 8.5), 0.5 mM ZnCl2, and 250 µg/ml BSA. The reaction products were analyzed by agarose gel electrophoresis.
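The equimolar condition above can be checked with simple arithmetic. In the sketch below, the ~4 kb plasmid size is an assumption (the exact size of the pEASY-target construct is not stated), the average mass of 650 g/mol per base pair is a standard figure for double-stranded DNA, and the ~30 kDa ZFN mass is taken from the SDS-PAGE result reported below.

    BP_MW = 650.0  # average g/mol per base pair of dsDNA

    def pmol_dsdna(ng, length_bp):
        # picomoles of a double-stranded DNA fragment of given length
        return ng * 1e-9 / (length_bp * BP_MW) * 1e12

    def ng_protein(pmol, kda):
        # nanograms of protein corresponding to a molar amount
        return pmol * 1e-12 * (kda * 1e3) * 1e9

    substrate = pmol_dsdna(50, 4000)            # 50 ng of a ~4 kb plasmid
    print(f"substrate: {substrate:.3f} pmol")    # ~0.019 pmol
    print(f"equimolar ZFN: {ng_protein(substrate, 30):.2f} ng")  # ~0.58 ng

In other words, under these assumptions only sub-nanogram amounts of each purified ZFN are needed per reaction to match 50 ng of substrate.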
Targeting site of ZFN
The DNA and cDNA sequences of mitfa in zebrafish were obtained from the NCBI database and entered into zinc finger tools to search for possible sites. Based on conserved-site analysis and the difficulty of gene synthesis, the sequence between 2965 bp and 2988 bp of the mitfa DNA sequence, "tttgactcttatcaaagacctgat", was chosen as the target site.
Amino acid sequence of ZFPs
The recognition scheme of ZFP1 and ZFP2 is shown in Figure 3. The amino acid sequence of ZFP1 was "LEPGEKPYKCPECGKSFSTSGNLVRHQRTHTGEKPYKCPECGKSFSTKNSLTEHQRTHTGEKPYKCPECGKSFSQLAHLRAHQRTHTGKKTS", and that of ZFP2 was "LEPGEKPYKCPECGKSFSQRANLRAHQRTHTGEKPYKCPECGKSFSDPGALVRHQRTHTGEKPYKCPECGKSFSQLAHLRAHQRTHTGKKTS".
Coding sequences of ZFPs
After the amino acid sequences of the ZFPs were entered into DNAWorks, the coding sequences were obtained following codon optimization in DNAWorks. The coding sequence of ZFP1 was "ctggaaccgggcgagaaaccgtacaagtgcccagagtgcggcaagagcttcagcacctctggtaatctcgtgcgccatcagcgtacccacacgggtgaaaaaccttacaaatgtccggagtgtggcaaatccttttccaccaaaaacagcctcaccgaacaccagcgcacccatacgggcgaaaagccgtataaatgcccggaatgcggtaagtctttctctcagctggcgcatctgcgtgcccaccaacgtacgcacaccggtaaaaagacctct". The coding sequence of ZFP2 was "ctggaaccgggcgaaaagccttacaaatgcccggagtgcggtaagtctttctcccagcgcgcaaacctccgtgcgcatcagcgcactcacacgggcgagaaaccatataagtgccctgaatgtggcaaatcctttagcgacccaggcgcgctcgttcgtcaccagcgtacccatacgggtgagaagccgtacaagtgtccagaatgcggcaagtccttttctcagctggcacatctccgcgctcaccaacgtacgcataccggcaaaaagacctct".
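A quick consistency check is to translate the synthesized coding sequence and compare it against the designed amino acid sequence. The sketch below uses Biopython's standard-code translation; only the first 120 nt of the ZFP1 coding sequence are pasted in here for brevity, and the full string from the text should be substituted in practice.

    from Bio.Seq import Seq  # Biopython

    zfp1_dna = (
        "ctggaaccgggcgagaaaccgtacaagtgcccagagtgcggcaagagcttcagcacctct"
        "ggtaatctcgtgcgccatcagcgtacccacacgggtgaaaaaccttacaaatgtccggag"
        # ... remaining nucleotides from the full coding sequence ...
    )

    # translate with the standard genetic code and compare to the
    # designed protein; this fragment yields LEPGEKPYKCPECGKSFSTS...
    print(Seq(zfp1_dna).translate())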
Oligonucleotides for ZFPs gene synthesis
The oligonucleotides for ZFP gene synthesis were obtained using DNAWorks and synthesized by a commercial company. The oligonucleotides for ZFP1 and ZFP2 are shown in Table 1 and Table 2, respectively.
Gene synthesis of ZFPs
Two-step PCR was carried out to synthesize the coding sequences of the ZFPs, which were subsequently examined by agarose gel electrophoresis (Fig. 4). The length of the successful PCR products was 276 bp, and sequencing confirmed that the coding sequences of the ZFPs were accurate. The two-step PCR and DNAWorks approach was thus successfully applied to ZFP gene synthesis.
Expression vector of ZFNs
The coding sequence of ZFP1 and FokI (RV) were cloned into pET-30a to form ZFN1, of about 900 bp, while ZFP2 and FokI (DA) were used to form ZFN2. The results of ZFN construction were verified by PCR, as shown in Figure 5.
ZFNs protein
The soluble ZFNs were successfully expressed with 0.3 mM IPTG at 22ºC in E. coli and purified using Ni resin (Fig. 6). The target proteins of 30 kDa were expressed with IPTG induction, and the soluble fraction was obtained by sonication and centrifugation. The results proved that the protein expression protocol and the high-concentration medium were effective for expressing soluble ZFNs. (Figure 6: Lane 2, total protein without IPTG induction; Lane 3, total protein with IPTG induction; Lane 4, total soluble protein with IPTG induction; Lane 5, total insoluble protein with IPTG induction; Lane 6, purified ZFNs.)
Cleaving activity of ZFNs
The targeted DNA was linked into pEASY-Blunt Simple, yielding a supercoiled plasmid. The cleaving activity of the ZFNs was assayed in vitro and analyzed by agarose gel electrophoresis. The results in Figure 7 show that the ZFNs could cut the supercoiled plasmid into linear molecules at 25ºC in a reaction mixture containing ZFNs approximately equimolar with the DNA substrate.
DISCUSSION
ZFNs are known as effective tools for directed gene knockout in plants and vertebrates as well as human cell lines. Here, a protocol for ZFN design, engineering, and cleaving activity detection has been demonstrated with the example of mitfa (Fig. 8). Several critical factors are involved in ZFN design and engineering. The first factor is the selection of the target site in the gene of interest. Although many possible sites and the corresponding ZFPs can be obtained with zinc finger tools, the following points deserve attention when choosing a target site: (1) For mutagenesis to occur at the target site, a unique site should be chosen. (2) Since ZFPs binding all ANN and GNN triplets have been reported in detail (Liu 2002), target sites consisting of GNN or ANN triplets should be chosen preferentially; ZFPs of high binding ability were obtained in this study by choosing a target site including two GNN and two ANN triplets. (3) A distance of 4 bp to 6 bp between the two recognition sequences in the target site has proven effective for cleavage by ZFNs in vitro (Bibikova 2001). (4) Target sites can be selected with the critical conserved sites of the gene in mind; the site chosen here lies near the critical sites encoding the DNA-binding domain of the transcription factor mitfa. The second factor is that the amino acid sequences of the ZFPs must be converted into coding sequences, which was done automatically using the DNAWorks software. The third factor is the gene synthesis of the ZFPs. The gene synthesis method used here is a reproducible, simple, low-error method based on DNAWorks (Dong 2007). The genes were synthesized using two-step PCR, comprising overlap extension (OE) PCR and amplification PCR. The DNAWorks software was used instead of manual design to obtain the oligonucleotides for gene synthesis, keeping the melting temperatures of the overlapping regions similar and ensuring the specificity of the primers. Several synthesis strategies for a target gene can be obtained in DNAWorks; each strategy is scored, and these scores are important parameters for selecting a strategy. The most critical parameter, the overall score, would be zero for an ideal synthesis strategy (Hoover 2002). The two-step PCR proved to be a successful method for synthesizing ZFP1 and ZFP2. The fourth factor is the expression of soluble ZFNs. Soluble protein is easier to extract, purify, and keep active than insoluble protein. Protein expression is affected by several factors, including the temperature, the concentration of IPTG, and the medium. At high temperature, amino acid chains fold too quickly to form the correct conformation, and most misfolded proteins are insoluble, whereas at low temperature the expression level is very low. The expression temperatures tested in this work ranged from 16 to 37ºC; more soluble ZFN was expressed at 22ºC with induction by 0.3 mM IPTG, and more soluble ZFN was obtained in the high-concentration medium than in Luria-Bertani (LB) medium. ZFN-mediated gene manipulation has become an effective tool for gene function studies, and facile construction and rapid in vitro cleavage detection of ZFNs are essential steps for gene manipulation. This work reports a simple, rapid, reproducible, low-error method to design and construct ZFNs. The optimization of the ZFPs' coding sequences was simplified with the DNAWorks software according to the codon usage frequencies of different organisms. Since gene synthesis primers with similar melting temperatures were easily obtained using DNAWorks, the synthesis of the coding sequences of the ZFPs was successfully carried out using two-step PCR. This facile protocol of ZFN design, construction, and testing could be widely applied to knock out target genes in plants, vertebrates, and even human cell lines.
Figure 1 - The construction of ZFNs. The sequences of FokI and ZFP were linked into the plasmid pET-30a to form pET-ZFN-30a.

Figure 3 - The recognition between the targeted sequence and the ZFNs.

Figure 4 - The PCR products of the coding sequences of the ZFPs. Lane 1 contains size standards of the DL2000 DNA marker (Beijing TransGen Co., Ltd.). Lane 2 is the coding sequence of ZFP1. Lane 3 is the coding sequence of ZFP2.

Figure 5 - The PCR results of ZFN construction. Lane 1 is the DL2000 DNA marker. Lanes 2 and 3 are the coding sequences of ZFN1. Lanes 4 and 5 are the coding sequences of ZFN2.

Figure 7 - The cleaving activity of the ZFNs. Lane 1 is the DNA marker; lane 2, the control DNA substrate; lanes 3 and 5, incomplete cleavage of the DNA substrate; lane 4, complete cleavage of the DNA substrate.
The Son-Of-X-shooter (SOXS) Data-Reduction Pipeline
The Son-Of-XShooter (SOXS) is a single-object spectrograph (UV-VIS and NIR) and acquisition camera scheduled to be mounted on the ESO 3.58-m New Technology Telescope at the La Silla Observatory. Although the underlying data-reduction processes that convert raw detector data to fully reduced, science-ready data are complex and multi-stepped, we have designed the SOXS data-reduction pipeline with the core aims of providing end-users with a simple-to-use, well-documented command-line interface while also allowing the pipeline to be run in a fully automated state, streaming reduced data into the ESO Science Archive Facility without need for human intervention. To keep up with the stream of data coming from the instrument, the software must be optimized so that each observation block of data is reduced well within the typical observation exposure time. The pipeline is written in Python 3 and has been built with an agile development philosophy that includes continuous integration (CI) and adaptive planning.
INTRODUCTION
The SOXS (Son Of X-Shooter) instrument is a new medium-resolution spectrograph (R ∼ 4500) capable of simultaneously observing 350-2000 nm (U- to H-band) to a limiting magnitude of R ∼ 20 (3600 s, S/N ∼ 10). It will be hosted at the Nasmyth focus of the New Technology Telescope (NTT) at La Silla Observatory, Chile (see Ref. 1 for an overview). This paper describes the design of the SOXS data-reduction pipeline and data-flow system; details of each of the other SOXS subsystems can be found in a set of related papers. Details of the three detectors included in the SOXS instrument, from which the pipeline receives data, are given in Section 2. Section 3 explains the aims and goals of the pipeline, and Section 4 describes the pipeline software architecture and development environment. Finally, Section 5 outlines the data products to be expected from the pipeline and the planned data flow, from raw data coming off the telescope through to the data owners collecting reduced data from the ESO SAF.
THE 3 SOXS DETECTORS
SOXS comprises three instruments: the UV-VIS and NIR spectrographs and an Acquisition and Imaging Camera (AC). The instruments are to be mounted on the NTT's Nasmyth focus rotator flange. It is the role of the SOXS data-reduction pipeline to reduce the pixel data collected by each of these instruments into science-ready data products.
The NIR Spectrograph
The SOXS NIR spectrograph is a cross-dispersed echelle, employing '4C' (Collimator Correction of Camera Chromatism) to image spectra in the 800-2000 nm wavelength range, in 15 orders, onto a 2k×2k, 18-micron-pixel Teledyne H2RG array (see Figure 1). It will achieve a spectral resolution of R ∼ 5000 (1 arcsec slit).
The UV-VIS Spectrograph
The UV-VIS spectrograph employs a novel design of 4 ion-etched transmission gratings used in the first order (m = 1) to obtain spectra in the 350-850 nm wavelength range (providing an overlap of 50 nm with the NIR arm for cross-calibration). The spectral band is split into four polychromatic channels, each sent to its own grating (Ref. 19). Unlike the NIR arm, the UV-VIS arm includes an Atmospheric Dispersion Corrector (ADC). Each of the four dispersion orders is imaged onto a separate area of the e2V CCD, aligned linearly along the direction of the CCD columns (see Figure 2).
The Acquisition and Imaging Camera
Although the primary use of the SOXS acquisition camera is to acquire spectral targets to allow for their centring on the slit, the camera's 3.5 × 3.5 arcmin FOV and 0.205 arcsec/px scale will also allow for science-grade, multi-band imaging. Observers will be able to select from 7 filters: the LSST u, g, r, i, z, y set and Johnson V.
DATA REDUCTION PIPELINE REMIT
The main purpose of the SOXS Data Reduction pipeline is to use SOXS calibration data (typically, but not necessarily, collected close in time to the science data) to remove all instrument signatures from the SOXS scientific data frames, convert this data into physical units and deliver them with their associated error bars to the ESO SAF as Phase 3 compliant science data products, all within a timescale shorter than a typical SOXS science exposure. The pipeline must also support the reduction of data taken in each of the available SOXS observation modes. The primary reduced pipeline product will be a detrended, wavelength- and flux-calibrated, telluric-corrected 1D spectrum with the UV-VIS + NIR arms stitched together (see Section 5).
Although the underlying data reduction processes to convert the raw detector data to fully-reduced, flux- and wavelength-calibrated science-ready data are complex and multi-stepped, soxspipe has been designed with a core aim of providing end-users with an easy-to-install, simple-to-use, clear, well-documented command-line interface while also allowing the pipeline to be run in a fully automated state, streaming reduced SOXS data into the ESO SAF without need for human intervention. Once users have miniconda (https://docs.conda.io/en/latest/miniconda.html) or anaconda installed on their local machine, the pipeline can be installed via a single command and typically takes < 1 min to install:

    conda create -n soxspipe python=3.8 soxspipe -c conda-forge

The static calibration files required by the pipeline are shipped alongside the code, removing the burden often required of pipeline users to separately download and manage these files. This has the added benefit of these files being version controlled alongside the code, so the end-user will always have access to the suite of calibration files associated with the specific version of the pipeline they have installed on their machine.

Figure 3: The SOXS Spectroscopic Data Reduction Cascade. Each of the vertical lines in the map depicts a raw data frame, the specific recipe to be applied to that frame and the data product(s) output by that recipe. Horizontal lines show how those output data products are used by subsequent pipeline recipes. Time loosely proceeds from left to right (recipe order) and from top to bottom (recipe processing steps) on the map.
The pipeline will also generate Quality Control (QC) metrics to monitor telescope, instrument and detector health. These metrics are to be read and presented by the SOXS health-monitoring system [27].
PIPELINE ARCHITECTURE AND DEVELOPMENT ENVIRONMENT
Presently, the astronomical community has overwhelmingly adopted Python as its scripting language of choice and there is a plethora of well-maintained, mature Python packages to help with basic data-reduction routines, visualisation, user interaction and data manipulation. It was a natural choice, therefore, to develop the SOXS pipeline in Python 3. We have implemented an object-orientated composition and the pipeline is designed to be primarily driven from the command line. The concept of 'recipes', originally employed by ESO's Common Pipeline Library (CPL), has been adopted to define the modular components of the data reduction workflow. These recipes can be connected together to create an end-to-end data-reduction cascade, taking as input raw and calibration frames from the SOXS instrument and processing them all the way through to fully reduced, calibrated, ESO Phase 3 compliant science products (see Figures 3 and 4). Recipes are named with the prefix 'soxs' followed by a succinct description of the recipe (e.g. soxs mbias for the master-bias creation recipe). There are also many reusable functions designed to be called from multiple recipes; these are referred to as 'utilities' in soxspipe.
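As an illustration, a recipe might be invoked from the command line as follows. The recipe names and arguments shown here are illustrative assumptions based on the naming convention described above, not a definitive reference for the soxspipe interface:

    # create a master-bias frame from a set of raw bias frames
    soxspipe mbias ./raw_frames/bias/

    # use the master bias when building a master flat from raw flat frames
    soxspipe mflat ./raw_frames/flat/

In the fully automated setting, the same recipes are instead chained together by the data-reduction cascade of Figure 3 without any such manual invocation.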
The pipeline has been built with an agile development philosophy that includes adaptive planning and evolutionary development. As with any software project, one of the greatest risks is knowledge loss due to a team member leaving before project completion. To mitigate this risk we have employed pair-programming techniques to share knowledge, both explicit and tacit, between two developers. In times of travel bans and remote working, a JupyterHub server with Python-based notebooks, shared screens and video-conferencing tools have been essential to executing these techniques.
The SOXS End-to-End (E2E) simulator [14] is capable of producing simulated 2D images in the SOXS format that take into account the main optical behaviour of the system (grating dispersion, sampling, PSF, noise and the positions of the various resolution elements coming from full ray-tracing). By using test-driven development throughout the development process, combined with 'extreme' mock data generated from the E2E simulator, we can verify that the pipeline is not only able to reduce a typical data set but also data that is far from ideal. This extreme data helps us push the pipeline to the limits of its capabilities and allows us to defensively develop against the edge-case scenarios the pipeline will most certainly experience at some point in production mode. Figure 6 gives an indication of the quality of the reductions achieved by soxspipe when reducing E2E calibration frames.
2D Source Spectra
A 2D FITS image for each spectral arm containing wavelength and flux calibrated spectra (no other corrections applied), allowing users to perform source extraction with their tool of choice. This spectrum file will also have the same 4 extensions described above. Note that rectification of the curved orders in the NIR introduces a source of correlated noise not present in extractions performed on the unstraightened orders, as done by the pipeline.
Acquisition Camera Images
ugrizy images, astrometrically calibrated and photometrically calibrated (griz only) to Refcat2 [29].

The pipeline code is open-source, hosted on GitHub and connected to a Jenkins Continuous Integration/Continuous Deployment (CI/CD) server via GitHub's webhooks. Any new push of code to a branch on the GitHub repository triggers a new 'build' of the code on the CI server, where all unit tests are run. If all tests pass, the branch can be merged into the main development branch. If it is the main/production branch being tested, and all tests pass, then a new dot-release version of the code is automatically shipped to PyPI and conda-forge, ready for deployment.
DATA PRODUCTS AND DATA FLOW
soxspipe will reduce data into a set of final data products (see Table 1 for details) which shall meet ESO Phase 3 standards 'out-of-the-box'. This has the benefit of allowing us to build an automated workflow (see Figure 7) to reduce data directly on the La Silla summit immediately after the data is acquired by the NTT and SOXS, and then stream the reduced data directly into the ESO SAF [30] in Garching, Germany. Owners of the data will then be able to access the fully-reduced data alongside the raw data within minutes of the shutter closing on their observation. This low-latency, automatic reduction is possible thanks to the fixed format of SOXS (apart from the exchangeable slit), allowing calibration frames to be prepared ahead of time, before science data reductions. The SAF then acts as both a data distribution solution and also fulfils the SOXS consortium's legacy archive requirements.
Access to the 'open stream' method of shipping reduced data directly to the ESO SAF will initially require the ESO Archive Science Group to review and verify a moderately sized collection of soxspipe-reduced data. Once the quality and content of the data produced by the pipeline have met ESO Phase 3 standards, we will be allowed to ship data products to the archive without further need of passing through a gatekeeper. The pipeline will automatically reduce data for all point-source targets above an AB magnitude of r = 19 (with the stretch goal of r = 20). For sources below this magnitude, the pipeline will attempt to automatically reduce the data but may require some user interaction to optimise object extraction.
CONCLUSIONS
The SOXS pipeline soxspipe has been designed and written in object-orientated Python 3 using an agile framework of development. Built with the core aims of allowing for fast, automatic reduction of raw data, streaming reduced data into the ESO SAF without need for human intervention, while also providing end-users with a simple-to-use, well-documented command-line interface, it is our hope that the pipeline will help facilitate the success of SOXS in the years to come.

Figure 6: The left panels show the NIR order-edges as identified by the SOXS data-reduction pipeline using a master-flat frame created from a set of full-slit flat-lamp frames generated by the E2E simulator. On the right, the resulting final dispersion solution and residuals as fitted by the SOXS data-reduction pipeline using a simulated arc-lamp frame obscured by a multi-pinhole mask. The arc lines detected in the frame (top right image panel) are used to fit a global dispersion solution (middle right image panel). The residuals of the fits as compared to measured order-edge and arc-line locations can be found in the bottom panels.

Figure 7: The SOXS data flow. Raw data is reduced on the summit (top centre) and transferred within minutes to the ESO SAF (Garching, Germany) where data-right owners can access it (central, in blue). In parallel, the SOXS consortium will also reduce their data on a remote machine (probably cloud-based) with a leading-edge version of the pipeline (bottom, in green). If at any point it is decided that new development of the pipeline has led to significantly improved data products compared to those hosted on the SAF, the consortium may opt for a complete reprocessing and replacement of the data on the SAF via a dedicated Phase 3 Data Release (orange arrow).
|
2022-07-17T15:12:02.796Z
|
2022-08-29T00:00:00.000
|
{
"year": 2022,
"sha1": "ade04c9d7b6662a9b132cd31cb5300e99b08789f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2012.12678",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6ad6f364a88b3a0561cf24b298d5ed0220b08070",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Engineering",
"Physics"
]
}
|
16785193
|
pes2o/s2orc
|
v3-fos-license
|
Strong Adhesiveness of a New Biodegradable Hydrogel Glue, LYDEX, for Use on Articular Cartilage
AIM
Until recently, only fibrin glue has been available for clinical usage to repair articular cartilage, although its adhesiveness is not strong enough for use with articular cartilage, and it is derived from human blood and thus carries the risk of contamination. Recently, LYDEX, a new biodegradable hydrogel glue, has come onto the market. The purpose of this study was to evaluate the adhesive strength and cytotoxicity of LYDEX when used on articular cartilage.
MATERIALS AND METHODS
The differing adhesive strengths of collagen membrane and articular cartilage with LYDEX versus with fibrin glue were measured using a tensile tester. In addition, the cytotoxicity of LYDEX in vitro was evaluated. The cytotoxicity of LYDEX for the articular cartilage of rats was evaluated histopathologically.
RESULTS
The adhesive strength of LYDEX was significantly stronger than that of fibrin glue, giving values about 3.8 times higher. LYDEX has no discernible effect on normal articular cartilage.
CONCLUSIONS
Our study is the first to assess the usefulness and safety of LYDEX for use on articular cartilage.
LYDEX is prepared by introducing aldehyde groups into an α-glucan (e.g., dextran (a polysaccharide) or starch) through oxidation, followed by a reaction with polylysine, which consists of linearly linked lysines (an essential amino acid). The reaction between the aldehyde groups of the aldehyde dextran and the amino groups of the polylysine forms Schiff bonds. Polylysine is widely used as a safe food additive. The key characteristic of LYDEX is that it is created solely from medical and food additive sources.
In contrast to fibrin glue, LYDEX holds great potential for the development of an effective cartilage repair treatment.
In the search for methods of treating osteoarthritic knees with wide cartilage defects in middle-aged patients, many strategies for cartilage repair have been developed (9)(10)(11). In clinical usage, such cases may be treated by drilling (12), microfracture (13), abrasion arthroplasty or destruction arthroplasty (14). However, these treatments are not always satisfactory. More efficient treatment is therefore needed for wide cartilage defects in chondral defective or osteoarthritic patients. It seems very likely that the focus for cartilage repair treatment is shifting from fibrin glue to LYDEX. Therefore, in our study we deemed it essential first of all to investigate the adhesive strength and safety of LYDEX for application on articular cartilage. The purpose of our study was to evaluate the adhesive strength and cytotoxicity of LYDEX when used on the articular cartilage of the knee joint.
Preparation of animals
Five normal mini-pigs from the Hiroshima animal laboratory were used for the evaluation of adhesive strength. The pigs were aged approximately 13 months, with a mean weight of 25 kg. Six normal Sprague Dawley (SD) rats from CLEA Japan Inc. were used for the evaluation of cytotoxicity. The rats were between 11 and 13 weeks old, and their mean weight was 400 g. All procedures were performed according to the Guide for Animal Experimentation, Hiroshima University, and were approved by the Committee of Research Facilities for Laboratory Animal Sciences, Graduate School of Biomedical Sciences, Hiroshima University.
Measurement of adhesive strength using a tensile tester
We used normal cartilage from the femur or tibia of a mini-pig to evaluate the bonding strength of the glue. The distal femora and the proximal tibiae of the pigs were resected. The articular cartilages of the femur or tibia were bisected sagittally and placed on the tensile tester (Model-1840nt; Aikoh Engineering Co., Osaka, Japan) (Fig. 2a). The cartilage was wiped with a dry towel and kept dry while 0.5 mL of glue was applied to the femoral or tibial joint cartilage. The glue used was LYDEX in group L and Bolheal™ in group F. Next, an atelocollagen membrane (AteloCell®, Koken Co., Tokyo, Japan) was placed on the adhesive. Immediately, the adhesive area was examined, and the atelocollagen membrane was marked (Fig. 2b).
Preparation of adhesive
Liquid LYDEX was provided by the Institute for Frontier Medical Sciences, Kyoto University, Kyoto, Japan (Fig. 1). LYDEX is a hydrogel adhesive prepared by mixing 2 kinds of liquid, polysaccharide aldehyde and ε-poly(L-lysine), which form a Schiff base. The colorless liquid shown in the syringe is 20 w/w% aldehyde dextran (molecular weight [MW] = 70 kDa, aldehyde introduction = 0.46/sugar unit). The blue-colored (brilliant Blue FCF, food additive, 50 ppm) liquid in the syringe is 10 w/w% ε-poly(L-lysine) (MW = 4 kDa) containing 3.0 w/w% acetic anhydride. Both liquids were sterilized with a syringe filter (0.2-µm pore size) (Fig. 1a). The container has a special mixing tip which can mix equal volumes of the 2 liquids together as they pass through it when the plunger is depressed, allowing the mixture to be applied directly as a glue (Fig. 1b). Once mixed, the glue has a gelation time of about 13 seconds at 37°C (Fig. 1c).
Fibrin glue (Bolheal™) was obtained from Astellas Pharma Inc., Tokyo, Japan, for comparison.

Fig. 1 - a) LYDEX was prepared in a syringe-like container with 2 cylinders: the colorless liquid in the syringe is 20 w/w% dextran aldehyde (molecular weight 70 kDa, aldehyde introduction = 0.46/sugar unit); the blue-colored liquid (brilliant Blue FCF, food additive, 50 ppm) in the syringe is 10 w/w% ε-poly(L-lysine) containing 3.0 w/w% acetic anhydride. b) The container has a special mixing tip which can mix the 2 liquids together in equal volumes as they pass through it when the plunger is depressed, allowing it to be applied directly as a glue. c) After 13 seconds, the mixed liquids have gelled.
Fig. 2 - a) Bonding strength was measured using a tensile tester. The articular cartilage was placed on the tensile tester, and 0.5 mL of glue was applied to the femoral or tibial joint cartilage, then the atelocollagen membrane was placed on top. b) The adhesive area was checked and marked on the atelocollagen membrane. The adhesive area was measured using ImageJ software.
LYDEX for use on articular cartilage
The collagen membrane was fixed under standardized conditions which maintained an even tension on the thread used to fix the collagen membrane, with the distance of the thread between the end of the collagen membrane and the tensile tester set at 3 cm. After loading 100 g of force for 5 minutes at room temperature (25°C), the bonding strength was measured using a tensile tester with a shearing speed of 10 mm/min.
The adhesive area was measured using ImageJ software. The tensile strength per square centimeter was then determined by dividing the initial tension, measured with the tensile tester, by the adhesive area.
Cytotoxicity test in vitro
The cytotoxicity of the new glue was evaluated by a V79 fibroblast colony assay, according to the national standard guidelines of cytotoxicity tests for biomaterials (15,16). A V79 Chinese hamster-established fibroblast cell line was obtained from the Japanese Health Science Foundation, Tokyo, Japan. Eagle's Minimum Essential Medium (MEM) (Nissui Seiyaku Co., Tokyo, Japan) supplemented with 10 v/v% fetal calf serum (FCS) was used for the general cell culture. MEM Earle's (Life Technologies Japan Ltd., Tokyo, Japan) supplemented with 5 v/v% FCS was used for the cytotoxicity assay. Monolayer V79 cells were recovered with 0.05 v/v% trypsin containing 0.02 v/v% ethylenediaminetetraacetic acid (EDTA) and were then resuspended in MEM Earle's at a concentration of 100 cells/mL, after which 0.5 mL of the cell suspension was poured into each well of a 24-well culture plate. After a 6-hour incubation at 37°C in 5% CO2, the culture medium was aspirated and MEM Earle's containing different concentrations of the test substances was added, followed by 6 days of incubation at 37°C. After this incubation period, the cells were fixed with 10% formaldehyde and stained with 0.1% methylene blue to count the colony number (each colony consists of >50 cells).
In the case of gelling materials, the extraction method was as follows: equal volumes of the aldehyde dextran solution (20 w/w%) and ε-poly(L-lysine) (10 w/w%) were mechanically mixed to form a gel. After 2 minutes at 25°C, 29.15 mL of MEM Earle's per 1 g of hydrogel was added for the extraction. In this step, 5 mg of aldehyde dextran with ε-poly(L-lysine) was extracted per 1 mL of water and MEM Earle's. This extract (5 mg/mL) was diluted with MEM Earle's and used for the cytotoxicity test after 24 hours of extraction at 37°C. In this experiment, GRF glue (MicroVal, Saint-Just-Malmont, France), one of the commercially available tissue adhesives, was selected as a reference material. The same extract (5 mg/mL) was prepared with MEM Earle's.
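As a consistency check of the stated concentration (assuming the gel is an equal-mass mixture of the two stock solutions), 1 g of gel contains 0.5 g × 20 w/w% = 100 mg of aldehyde dextran and 0.5 g × 10 w/w% = 50 mg of ε-poly(L-lysine), i.e. 150 mg of solids; diluted into 29.15 mL of medium plus the roughly 0.85 mL of water already contained in the gel, this gives 150 mg / 30 mL = 5 mg/mL.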
Cytotoxicity test in vivo
Surgical procedure for adhesion of collagen membrane
The surgical procedures in rats were carried out under general anesthesia induced by an intraperitoneal injection of 1 mL/kg sodium pentobarbital. The patella was everted through the medial approach, then the articular cartilage of the distal femur's patellar groove was exposed. The cartilage was wiped with a dry towel and kept dry while an atelocollagen membrane (AteloCell) was glued onto the joint cartilage of the patellar groove with LYDEX. The arthrotomy was closed with interrupted 5-0 nylon sutures. This was group L. Sham groups were operated on in the same way and were categorized as group S.
One week after surgery, the rats in groups L and S were killed by an intraperitoneal injection of a lethal dose of pentobarbital sodium. The whole knee joints were resected en bloc and fixed in 4% paraformaldehyde for 24 hours. They were then decalcified in a 0.5 M EDTA solution. Next, the specimens were embedded in paraffin and cut into 5-µm serial sections along the sagittal plane. For histological evaluation, the sections were stained with hematoxylin and eosin (HE), and safranin-O/fast green.
Immunohistochemistry
Sections washed in phosphate-buffered saline (PBS) were treated for 20 minutes at 90°C with a retrieval solution (Dako Cytomation; Dako Japan Co, Tokyo, Japan). After blocking the sections for 30 minutes with a blocking reagent (Block Ace; DS Pharma Biomedical Co, Osaka, Japan), they were incubated with a primary antibody at appropriate dilutions for 1 hour at room temperature. For immunohistological evaluation, the primary antibodies used were as follows: rabbit anti-rat collagen type II polyclonal antibody (Millipore, Billerica, MA, USA), rat anti-goat TNF-α (Santa Cruz Biotechnology, Santa Cruz, CA, USA) and rat anti-goat IL-6 (Santa Cruz Biotechnology). The secondary antibodies used were as follows: peroxidase-labeled polymer-horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG for collagen type II and Alexa Fluor 488-conjugated rabbit anti-goat IgG (Molecular Probes, Eugene, OR, USA) for TNF-α and IL-6. DAPI (4′,6-diamidino-2-phenylindole) solution (Dojindo Laboratories, Kumamoto, Japan) was used for 10 minutes as a nuclear counterstain. Diaminobenzidine (DAB) was used as the chromogen for collagen type II. Negative controls were prepared in the same manner, but without the primary antibody.
Histological evaluation
The histopathological scale for grading the severity of knee arthritis was used for evaluation. The synovial membrane, trochlear sulcus of the femur and sagittal surface of the patella from the knee joints of the rats were examined. Changes were classified into 5 stages according to the items and criteria for arthritis shown in Table I (17) as follows: no change (score 0), minimal change (score 1), slight change (score 2), moderate change (score 3) and severe change (score 4). Synovial tissues were assessed by scoring the following items: edema, inflammatory cell infiltration, proliferation of synovial cells, granulation tissue formation, fibrosis and exudation into the joint cavity. The trochlear sulcus of the femur and patella were assessed by scoring the following items: pannus formation, destruction of the cartilage and destruction of the bone. The maximum score was 48 points.
Statistical analysis
The bonding strength in each group and the histopathological scales for grading the severity of knee arthritis in the 2 groups were calculated as means ± standard deviation. Student's unpaired t-test was used to compare the different treatments. A P value of <0.05 was considered statistically significant. All statistical analyses were performed on a personal computer using the statistical package Excel-Toukei 2010 (Social Survey Research Information Co.).
Adhesive strength test using a tensile tester
In the strength tests, the adhesive strengths in group L ranged from a minimum of 0.97 N/cm² to a maximum of 1.95 N/cm²; those in group F ranged from a minimum of 0.25 N/cm² to a maximum of 0.65 N/cm² (Tab. II). The mean adhesive strength of group L was 1.5 ± 0.4 N/cm², and that of group F was 0.4 ± 0.2 N/cm². The adhesive strength of LYDEX was therefore significantly stronger than that of fibrin glue, giving values about 3.8 times higher (t-test; P<0.05) (Fig. 3). In all cases, neither the thread fixed to the collagen membrane nor the collagen membrane itself ruptured.
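As an aside, for readers wishing to verify this comparison, the t statistic can be recomputed directly from the reported summary statistics; a minimal Python sketch is given below, where the group size of n = 5 per group is an assumption based on the five mini-pigs used, not a figure stated by the authors:

    from scipy import stats

    # Recompute Student's unpaired t-test from the reported means and SDs.
    # Assumption: n = 5 specimens per group (five mini-pigs).
    t, p = stats.ttest_ind_from_stats(
        mean1=1.5, std1=0.4, nobs1=5,   # group L (LYDEX), N/cm^2
        mean2=0.4, std2=0.2, nobs2=5,   # group F (fibrin glue), N/cm^2
        equal_var=True)                 # classical Student's t-test
    print(f"t = {t:.2f}, P = {p:.5f}")  # P < 0.05 indicates significance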
Low cytotoxicity of LYDEX in vitro
Aldehyde dextran and ε-poly(L-lysine) were separately diluted with MEM Earle's, and the colony formation of V79 cells in their presence was evaluated. The results are shown in Figure 4. Colony formation was suppressed with increasing concentrations of aldehyde dextran and ε-poly(L-lysine). In contrast, almost no suppression or cytotoxicity was observed after gel formation, as shown in Figure 5. The IC50 values (mg/mL), at which colony formation was suppressed to 50%, were calculated, and the results are summarized in Table III. The IC50 of the gelled glue (aldehyde dextran + ε-poly(L-lysine)) was higher than 5 mg/mL. On the other hand, the IC50 for the GRF extract was 0.11 mg/mL. The high cytotoxicity of GRF-cured glue was due to the remaining low-molecular-weight aldehydes, such as glutaraldehyde and formaldehyde, contained in GRF. These findings suggest that no cytotoxic materials remained in the new glue after gel formation, and no cytotoxicity to surrounding tissue would be expected in clinical application.
Low cytotoxicity of LYDEX for articular cartilage in vivo
Macroscopic examination found no signs of infection, swelling or redness of the joint in either of the groups of rats. The adhesive area was evaluated after staining with HE and safranin-O / fast green from serial sections for microscopic evaluation (Fig. 6). There were no significant differences between group L and group S with regard to inflammatory reactions. The synovial membrane, trochlear sulcus of the femur and patella were evaluated according to the histopathological scale for grading the severity of knee arthritis (Tab. I). These results showed no significant difference between the 2 groups (Fig. 7).
Immunohistochemistry showed the same level of staining for collagen type II in the cartilage underneath the adherent collagen membrane in both groups. The expression of TNF-α and IL-6 in articular cartilage was not up-regulated in either of the 2 groups. The expression of TNF-α in synovium was also not up-regulated in either group (Fig. 8).

In LYDEX, medical and food additive materials were selected as the starting materials, instead of the human plasma and animal-derived components used for other adhesives. Furthermore, it has a high degree of flexibility and a bonding strength higher than that of fibrin glue. After the first report in 2007, LYDEX was tested in various fields. In thoracic surgery, Araki et al (18) reported that LYDEX had sufficient sealing properties to prevent air leakage from large pleuroparenchymal defects and was significantly superior to fibrin glue as a sealant in their beagle model. In that study, normal lung structure was restored without fibrosis by 6 months. In ophthalmology, Takaoka et al (19) reported a safe and simple technique for sutureless amniotic membrane transplantation using LYDEX, which promoted a secure and rapid adhesion onto the sclera in vivo without the need for suturing. They found no significant differences between the sutured and nonsutured models with regard to inflammatory reactions. In orthopedics, Yamamoto et al (20) reported that LYDEX combined with hydroxyapatite granules was useful for repairing rabbit bone defects.
Our study is the first to assess LYDEX for the repair of articular cartilage of the knee joint, and it has proved that the combination of LYDEX and collagen membrane is safe to use for articular cartilage. Nakajima et al (8) have developed and reported LYDEX as a self-degradable bioadhesive. In their report, commercial fibrin glue was used as a reference. In 2007, they reported that LYDEX gave 4.0 times greater strength than fibrin glue when tested on cow skin. Concurring with this, we found that LYDEX gave 3.8 times more strength than fibrin glue when tested on articular cartilage.
We have demonstrated that LYDEX has no discernible effect on normal articular cartilage when examined histopathologically. It was necessary to establish whether LYDEX application had any adverse effects on articular cartilage, and our results indicate that it does not. The results of our study will markedly advance the treatment of cartilage injury.
Recently, Wegener et al (21) reported the use of bone marrow mesenchymal stem cells implanted into the area of cartilage injury using implants with added fibrin glue. Jung et al (22) reported that chondrogenic-differentiated mesenchymal stem cells derived from human adipose tissues combined with fibrin glue were able to proliferate and form new cartilage. Therefore, studies of the treatment for articular cartilage combined with fibrin glue are ongoing and are progressing. However, fibrin glue is derived from blood and thus carries the risk of contamination. The development of a treatment using LYDEX instead of a fibrin glue for cartilage repair has great potential.
This study is limited by the following: (i) articular cartilage is round and not flat, thus precluding the uniform application of LYDEX containing the ingredients used
|
2017-03-30T22:02:58.506Z
|
2013-09-01T00:00:00.000
|
{
"year": 2013,
"sha1": "3891b89039b548570b1793bf1dbd09cbdafd9616",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc6161642?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "3891b89039b548570b1793bf1dbd09cbdafd9616",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
119281564
|
pes2o/s2orc
|
v3-fos-license
|
Lyapunov instabilities in lattices of interacting classical spins at infinite temperature
We numerically investigate Lyapunov instabilities for one-, two- and three-dimensional lattices of interacting classical spins at infinite temperature. We obtain the largest Lyapunov exponents for a very large variety of nearest-neighbor spin-spin interactions and complete Lyapunov spectra in a few selected cases. We investigate the dependence of the largest Lyapunov exponents and whole Lyapunov spectra on the lattice size and find that both quickly become size-independent. Finally, we analyze the dependence of the largest Lyapunov exponents on the anisotropy of spin-spin interaction with the particular focus on the difference between bipartite and nonbipartite lattices.
Introduction
Investigations of the Lyapunov instabilities in many-particle systems are often motivated by the role of chaos in the foundations of statistical physics [1,2,3], which is still not fully understood. In general, interacting many-particle classical systems are expected to be chaotic. The largest Lyapunov exponents and properties of the entire Lyapunov spectra have been calculated numerically for classical many-body systems, such as gases of hard-core particles [4,5], fluids with soft interactions [6], and lattice two-dimensional rotators [6,7,8,9,10], and analytically in a few cases [11,12,13,14,15,16,17,8,9,10]. In the present paper, we focus on lattices of interacting classical spins.
Classical spins often appear in the theoretical studies as the large-spin limit of quantum spins. Moreover, even when one deals with lattices of spins 1/2, parallels between classical and quantum spin dynamics still remain. These parallels have recently received much attention, in particular, in the context of the asymptotic exponential-oscillatory behavior of nuclear spin decays in solids [18,19,20,21,22,23,24]. These decays were identified in Refs. [18,20] with chaotic eigenmodes in both classical and quantum many-spin systems.
Although it appears very likely a priori that lattices of interacting classical spins exhibit chaotic dynamics, no systematic investigation of the chaotic properties of these lattices was undertaken until our previous work [25], which presented a survey of the largest Lyapunov exponents for a very large variety of spin lattices and Hamiltonian anisotropies. The principal finding of Ref. [25] was that all Hamiltonians considered, with the exception of the Ising case, led to chaotic dynamics as evidenced by the nonzero value of the largest Lyapunov exponent. We also obtained both analytically and numerically the power-law scaling of the largest Lyapunov exponent in the vicinity of the integrable Ising limit.
In the present paper, we complement the findings of Ref. [25] in several respects. Namely, we compute complete Lyapunov spectra for a few selected spin lattices and show their dependence on the lattice size, and also present a more extensive investigation of the lattice size dependence of the largest Lyapunov exponent. Finally, we discuss the dependence of the largest Lyapunov exponent on the Hamiltonian anisotropy with particular emphasis on the difference between bipartite and nonbipartite lattices.
Spin model
We consider periodically closed spin lattices with the nearest-neighbor (NN) interaction Hamiltonian of the following kind:
$$ H = \sum_{\langle i,j \rangle} \left( J_x S_{ix} S_{jx} + J_y S_{iy} S_{jy} + J_z S_{iz} S_{jz} \right), \qquad (1) $$
where the sum runs over pairs of nearest neighbors, $(S_{ix}, S_{iy}, S_{iz}) \equiv \mathbf{S}_i$ are the three projections of the classical spin vector of unit length on the $i$th lattice site (i.e. $\mathbf{S}_i^2 = 1$), and $J_x, J_y, J_z$ are the coupling constants, which we also normalize by the condition $J_x^2 + J_y^2 + J_z^2 = 1$. Below, we often mention the Ising, Heisenberg and "anti-Heisenberg" limits of the Hamiltonian (1). The Ising Hamiltonian corresponds to $J_x = J_y = 0$, $J_z = 1$, the Heisenberg Hamiltonian to $J_x = J_y = J_z = 1/\sqrt{3}$, and, finally, the anti-Heisenberg Hamiltonian to $J_x = J_y = -J_z = 1/\sqrt{3}$. The total number of spins in the lattice is denoted as $N$. The phase space of such a lattice has dimensionality $2N$.
We consider seven lattices shown in Fig. 1 and labeled as (L1-L7). Lattices (L1-L5) are bipartite, which means that they can be divided into two sublattices such that all the interacting neighbors for a spin on one sublattice belong to the other sublattice. Lattices (L6,L7) are nonbipartite.
The equations of motion associated with the Hamiltonian (1) can be obtained in the Poisson-bracket formalism [33,34,35]: $dS_{i\mu}/dt = \{H, S_{i\mu}\}$, where the index $\mu$ admits the values 1, 2 or 3, representing the projections $x$, $y$ or $z$, respectively. The primary Poisson brackets are $\{S_{i\mu}, S_{j\nu}\} = \delta_{ij} \sum_{\kappa} \epsilon_{\mu\nu\kappa} S_{i\kappa}$, where $\delta_{ij}$ is the Kronecker delta and $\epsilon_{\mu\nu\kappa}$ the Levi-Civita symbol. Evaluating $\{H, S_{i\mu}\}$ with these brackets, the resulting equations of motion are
$$ \dot{\mathbf{S}}_i = \mathbf{S}_i \times \mathbf{h}_i , \qquad (2) $$
where $\mathbf{h}_i$ is the local field given by the expression
$$ \mathbf{h}_i = \sum_{j(i)} \left( J_x S_{jx}\, \mathbf{e}_x + J_y S_{jy}\, \mathbf{e}_y + J_z S_{jz}\, \mathbf{e}_z \right) . \qquad (3) $$
Here $\mathbf{e}_x$, $\mathbf{e}_y$ and $\mathbf{e}_z$ are the unit vectors along the respective directions, and $j(i)$ implies the summation over the nearest neighbors of the $i$th lattice site.
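To make the dynamics concrete, the following minimal Python sketch (an illustration of Eqs. (2) and (3), not the authors' code) implements the equations of motion for the periodically closed chain (L1), together with a fourth-order Runge-Kutta step of the kind used for the numerical integration described below:

    import numpy as np

    # Minimal sketch: Eqs. (2)-(3) for a periodically closed chain of N
    # classical unit spins with couplings J = np.array([Jx, Jy, Jz]).
    def local_fields(S, J):
        # S has shape (N, 3); site i couples to its neighbours i-1 and i+1
        neighbours = np.roll(S, 1, axis=0) + np.roll(S, -1, axis=0)
        return neighbours * J                    # h_i, componentwise

    def eom(S, J):
        return np.cross(S, local_fields(S, J))   # dS_i/dt = S_i x h_i

    def rk4_step(S, J, dt):
        # fourth-order Runge-Kutta step (the paper uses dt = 0.005)
        k1 = eom(S, J)
        k2 = eom(S + 0.5 * dt * k1, J)
        k3 = eom(S + 0.5 * dt * k2, J)
        k4 = eom(S + dt * k3, J)
        return S + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)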
In this work, we restrict ourselves to the Lyapunov instabilities on the zero energy shell, which corresponds to infinite temperature in the microcanonical sense.
Analytical considerations
Since the dynamics of this system are fully time-reversible, positive and negative Lyapunov exponents are expected to form conjugate pairs with equal absolute values. In the general case of different $J_x$, $J_y$ and $J_z$, there should be only two zero Lyapunov exponents, corresponding to the energy and the time-shift directions. In the case $J_x = J_y \neq J_z$, of which the anti-Heisenberg case is an example, the $z$-component of the total spin polarization becomes an integral of motion, and hence one more pair of Lyapunov exponents should also become equal to zero. In the Heisenberg case, $J_x = J_y = J_z$, all three components of the total spin polarization become integrals of motion. However, these three integrals of motion are not dynamically independent, because one of them can be obtained as the Poisson bracket of the two others. The two independent integrals of motion thus imply two additional pairs of zero Lyapunov exponents. In the Ising case, the $z$-component of each spin is an integral of motion. Hence, the system is integrable and all Lyapunov exponents are equal to zero.

Now we discuss the connections between the Lyapunov spectra of different anisotropic Hamiltonians. We first note that the infinite-temperature Lyapunov spectra are expected to be identical for the Hamiltonians with the coupling constants $(J_x, J_y, J_z)$ and $(-J_x, -J_y, -J_z)$, because the zero-energy shells in the two cases are identical, while the change of the sign of the coupling constants amounts to the operation of time reversal and flips the sign of the energy. We further note that, for bipartite lattices, the infinite-temperature Lyapunov spectra for the coupling constants $(J_x, J_y, J_z)$ and $(-J_x, -J_y, J_z)$ should also be identical. In order to see this, one should make the transformation $S_{ix} \to -S_{ix}$ and $S_{iy} \to -S_{iy}$ for one of the two sublattices forming the bipartite lattice and then examine the resulting equations of motion.
The two observations made above also imply that, for a given bipartite lattice, the Lyapunov spectra for the Heisenberg and the anti-Heisenberg Hamiltonians are identical to each other, which means that, in the latter case, the Lyapunov spectrum has three pairs of zero Lyapunov exponents instead of the two expected for the generic case of $J_x = J_y \neq J_z$. The extra pair of zero exponents originates from the following two (not independent) integrals of motion:
$$ I_x = \sum_i (-1)^{\xi_i} S_{ix}, \qquad I_y = \sum_i (-1)^{\xi_i} S_{iy}, $$
where $\xi_i$ is an index taking the value 0 for one sublattice and 1 for the other. The same considerations also imply that any Hamiltonian with $J_x = -J_z$, or equivalent, on a bipartite lattice can be converted to the axially symmetric Hamiltonian with $J_x = J_z$. Therefore, the original Hamiltonian has an extra pair of zero Lyapunov exponents corresponding to the integral of motion $\sum_i (-1)^{\xi_i} S_{iy}$.
Small spin clusters may have additional nontrivial integrals of motion. In particular, any 4-spin periodic chain with the general anisotropic Hamiltonian of form (1) is fully integrable. This is because the first and third spins in this chain rotate in the same local field $[J_x(S_{2x}+S_{4x}),\ J_y(S_{2y}+S_{4y}),\ J_z(S_{2z}+S_{4z})]$, while the second and fourth spins rotate in another local field $[J_x(S_{1x}+S_{3x}),\ J_y(S_{1y}+S_{3y}),\ J_z(S_{1z}+S_{3z})]$. Therefore, there are two additional integrals of motion, namely $\mathbf{S}_1 \cdot \mathbf{S}_3$ and $\mathbf{S}_2 \cdot \mathbf{S}_4$. One can also check that $(\mathbf{S}_1 + \mathbf{S}_3) \cdot (\mathbf{S}_2 + \mathbf{S}_4)$ is an integral of motion. As a result, the number of integrals of motion (including energy) becomes 4, while the dimensionality of the phase space is 8, i.e. the problem is fully integrable. In the case of the isotropic Heisenberg Hamiltonian, a much larger variety of small spin clusters with nontrivial integrals of motion was cataloged in Ref. [26].
Numerical simulations
The equations of motion were integrated using a fourth-order Runge-Kutta algorithm with a time step $\delta t = 0.005$. This is sufficiently small that, on the time scale of our simulations, energy is conserved with 6-digit accuracy. Simulations were run for 20000 time units, which was sufficient for accurate convergence of the smaller exponents.
The Lyapunov exponents were obtained by using a version of the standard reorthonormalization algorithm [4,27,28]. In order to obtain the first $n$ largest Lyapunov exponents, we numerically propagate a reference trajectory $\gamma(t)$ and $n$ initially orthogonal perturbation vectors $\delta\gamma_i(t)$. At every time step, we propagate the perturbations using the linear tangent-space map, which is obtained by numerically taking derivatives along the reference trajectory. After each time interval $\Delta t = 0.25$, we hierarchically reorthogonalize the perturbation vectors $\delta\gamma_i(t)$ using the Gram-Schmidt procedure, and then renormalize their lengths back to the initial values. The renormalization factor recorded for the $i$th vector in the $k$th time interval $\Delta t$ is denoted $\alpha_i(k)$. Finally, we compute the $i$th Lyapunov exponent using the formula
$$ \lambda_i = \frac{1}{K \Delta t} \sum_{k=1}^{K} \ln \alpha_i(k), $$
where $K$ is the number of renormalization time intervals. As a test of the accuracy of our numerical routine, we have checked that the Lyapunov exponents form conjugate pairs and that the number of zero Lyapunov exponents is equal to the number expected from the symmetry of the Hamiltonian, as discussed in Section 2.2. We have also checked the symmetries between the Heisenberg and anti-Heisenberg Lyapunov spectra expected on the basis of the arguments presented in Section 2.2.
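A minimal sketch of this scheme, restricted for brevity to the single largest exponent and building on the chain sketch above, might look as follows; here a finite-difference propagation of the perturbation stands in for the tangent-space map, and for $n > 1$ exponents one would additionally Gram-Schmidt-orthogonalize the perturbation vectors at each renormalization step:

    def largest_lyapunov(S0, J, t_total=20000.0, dt=0.005, dt_renorm=0.25,
                         eps=1e-7):
        # Benettin-style estimate of lambda_1 (illustrative sketch only)
        S = S0.copy()
        dS = np.random.randn(*S0.shape)
        dS *= eps / np.linalg.norm(dS)           # perturbation of length eps
        steps = int(round(dt_renorm / dt))
        n_intervals = int(t_total / dt_renorm)
        log_sum = 0.0
        for _ in range(n_intervals):
            for _ in range(steps):
                # propagate reference and perturbed trajectories together;
                # their difference approximates the tangent-space map
                S_new = rk4_step(S, J, dt)
                dS = rk4_step(S + dS, J, dt) - S_new
                S = S_new
            alpha = np.linalg.norm(dS) / eps     # renormalization factor
            log_sum += np.log(alpha)
            dS *= eps / np.linalg.norm(dS)       # rescale back to length eps
        return log_sum / (n_intervals * dt_renorm)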
In order to obtain initial conditions at zero total energy, we first choose random orientations for all spins. This produces a total energy close to zero, but with fluctuations of the order of $\sqrt{N}$. Then, in order to arrive at zero total energy, we evolve the dissipative equations of motion
$$ \dot{\mathbf{S}}_i = \pm\, \mathbf{S}_i \times (\mathbf{S}_i \times \mathbf{h}_i), $$
which preserve the unit length of each spin. Depending on the sign in front of the right-hand side, this increases or decreases the total energy associated with the Hamiltonian (1). Once the zero value of the total energy is reached, we additionally ensure the randomness of the initial conditions on the energy shell by performing $10N$ sequential rotations of random spins by random angles around the directions of their respective local fields $\mathbf{h}_i$ given by Eq. (3). These rotations preserve the total energy.
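A sketch of this preparation step, again for the chain geometry and reusing the helpers above, could read as follows; the concrete damped update mirrors the dissipative equations quoted above and is an illustration, not the authors' exact procedure:

    def energy(S, J):
        # Hamiltonian (1) on the periodic chain; each bond counted once
        return np.sum(J * S * np.roll(S, -1, axis=0))

    def relax_to_zero_energy(S, J, dt=0.005, tol=1e-8, max_steps=10**6):
        for _ in range(max_steps):
            E = energy(S, J)
            if abs(E) < tol:
                break
            h = local_fields(S, J)
            # damped update: the sign is chosen so that E is driven to 0
            S = S + np.sign(E) * dt * np.cross(S, np.cross(S, h))
            S /= np.linalg.norm(S, axis=1, keepdims=True)  # keep |S_i| = 1
        return S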
Lyapunov spectra
We have computed the full Lyapunov spectra for the linear chain (L1), cubic lattice (L5) and triangular lattice (L7) with the Heisenberg Hamiltonian, and also for the triangular lattice (L7) with the anti-Heisenberg Hamiltonian. The results are presented in Fig. 2. The spectra (i.e. the Lyapunov exponents as a function of the exponent's index) are typically weakly convex, regardless of the lattice. In the case of the cubic lattice with Heisenberg interaction, the spectrum is actually very close to linear. As explained in Section 2.2, the Lyapunov spectra for the bipartite lattices (L1) and (L5) with the anti-Heisenberg Hamiltonian are identical to those already presented in Fig. 2 for the Heisenberg Hamiltonian. The effect of the lattice size on the Lyapunov spectrum for the linear chain with the Heisenberg Hamiltonian is shown in Fig. 3. The size dependence of the spectrum appears to be very weak for N > 4 and becomes virtually unobservable for chains containing more than 32 spins.
Comparing the Lyapunov spectra of classical spins with the Lyapunov spectra of other many-particle systems, we first note that the spectra presented in Fig. 2 do not exhibit an offset from zero for the smallest positive exponents. Such an offset was observed in gases of particles [4] and high-dimensional billiards [29,15] but not in systems with sufficiently soft interactions [6]. The classical spin lattices obviously belong to the latter group. We further remark on the existence of delocalized Lyapunov-Goldstone modes, which were observed in dilute gases [4,30,31,14] and in some other extended systems [32]. If these modes exist in spin systems, the projections on single spins of the Lyapunov vectors corresponding to the smallest nonzero Lyapunov exponents should exhibit a sinusoidal dependence on the positions of the spins. In the present work, we did not investigate the properties of the Lyapunov vectors systematically. However, our several attempts to find the Lyapunov-Goldstone modes for the infinite-temperature energy shells did not produce any positive evidence: the projections of the Lyapunov vectors for all exponents were strongly localized on the lattice and not sinusoidal. There is also no indication of a dependence of the smallest nonvanishing exponent on the system length, as is characteristic for the Lyapunov-Goldstone modes. The Lyapunov-Goldstone modes may still exist in classical spin systems at low temperatures, but this is a subject beyond the scope of the present paper.
Largest Lyapunov exponents: finite size effects
Accurate numerical calculations of full Lyapunov spectra are very demanding, because the computational cost grows as $N^2$. For this reason, the lattices investigated in the preceding subsection were relatively small. In this subsection, we focus only on the largest Lyapunov exponents, $\lambda_1$. The cost of computing $\lambda_1$ grows only as $N$, which allows us to investigate much larger lattices and a much greater variety of Hamiltonians. An extensive investigation of this kind was already reported by us in Ref. [25]. In the present and the next subsections we present some results and analysis that were not included in Ref. [25].
In this subsection, we investigate the dependence of $\lambda_1$ on the lattice size for all seven lattices shown in Fig. 1 with the Heisenberg and anti-Heisenberg Hamiltonians. The results are shown in Fig. 4. They indicate that, within our numerical accuracy, $\lambda_1$ becomes size-independent for sufficiently large lattices. In particular, on the basis of these results even a slow logarithmic growth of $\lambda_1$ with the lattice size can be excluded.
The saturation of $\lambda_1$ with the lattice size can be explained on the basis of the following consideration. The exponential growth of the perturbation vector $\delta\gamma_1$ in the many-spin phase space with rate $\lambda_1$ implies that the projection of $\delta\gamma_1$ on the subspace of each individual spin $\{S_{ix}, S_{iy}, S_{iz}\}$ should also, on average, grow exponentially with the same rate. The instantaneous growth rate of the perturbations of the coordinates of a given spin can be obtained from the linearized equations of motion, which, in turn, can be obtained from Eqs. (2) and (3):
$$ \delta\dot{\mathbf{S}}_i = \delta\mathbf{S}_i \times \mathbf{h}_i + \mathbf{S}_i \times \delta\mathbf{h}_i . $$
Here, $\delta\mathbf{S}_i$ denotes a perturbation of the $i$th spin, and $\delta\mathbf{h}_i$ the resulting perturbation of its local field. From these equations, one can see that the instantaneous growth rate is limited from above by a value of the order of the maximum possible value of the local field, $\max_i |\mathbf{h}_i| = n_0 \max(|J_x|, |J_y|, |J_z|)$, where $n_0$ is the number of nearest neighbors for each lattice site. Since this constraint does not depend on the lattice size, the growth of the perturbation belonging to the largest Lyapunov exponent must saturate as the lattice size increases. In principle, it might also be possible for the largest Lyapunov exponent to oscillate with the lattice size, but in a system with exponential decay of spatial correlations this is extremely unlikely.
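Spelling this bound out (a short step filled in here for clarity): taking norms in the linearized equations and using $|\mathbf{S}_i| = 1$ gives
$$ |\delta\dot{\mathbf{S}}_i| \le |\mathbf{h}_i|\,|\delta\mathbf{S}_i| + |\delta\mathbf{h}_i| \le n_0 \max(|J_x|,|J_y|,|J_z|) \Big( |\delta\mathbf{S}_i| + \max_{j(i)} |\delta\mathbf{S}_j| \Big), $$
so the instantaneous growth rate of any single-spin perturbation is bounded by a constant that depends only on the coupling constants and the coordination number, not on the lattice size.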
Largest Lyapunov exponents: dependence on the Hamiltonian anisotropy
In Ref. [25], we conducted a systematic survey of the dependence of $\lambda_1$ on the Hamiltonian anisotropy on the "interaction sphere" constrained by the condition $J_x^2 + J_y^2 + J_z^2 = 1$. We have found that the principal parameter controlling this dependence is $J_{\max} \equiv \max(|J_x|, |J_y|, |J_z|)$. In particular, this parameter quantifies the approach to the integrable Ising case corresponding to $J_{\max} = 1$. The main plot of Ref. [25] is reproduced in Fig. 5.
Here we focus on the difference between the anisotropy dependence of $\lambda_1$ for the bipartite lattices (L1-L5) and the nonbipartite lattices (L6, L7). As can be seen in Fig. 5, the bipartite lattices (L1-L5) show a nearly universal dependence $\lambda_1(J_{\max})$, which scales only with the number of interacting neighbors. The small spread of the sampled values of $\lambda_1$ at a given value of $J_{\max}$ indicates that $\lambda_1$ depends only very little on the ratio of the two coupling constants that have the smaller absolute values. Our investigation of the nonbipartite lattices (L6, L7) was, in fact, motivated by the impression that the number of interacting neighbors alone determines the entire anisotropy dependence of $\lambda_1$. We wanted to compare $\lambda_1$ for the lattices (L6) and (L3), where, in both cases, each site has four nearest neighbors, and for the lattices (L7) and (L5), where each site has six nearest neighbors.
We found, however, that, as seen in Fig. 5, the nonbipartite lattices (L6) and (L7) exhibit a noticeable fork-like spread of $\lambda_1$ as $J_{\max}$ approaches $1/\sqrt{3}$. The upper and the lower tips of the fork correspond to the anti-Heisenberg and Heisenberg Hamiltonians, respectively. In the anti-Heisenberg case, the value of $\lambda_1$ is close to that of a bipartite lattice with the same number of nearest neighbors. In the Heisenberg case, the value of $\lambda_1$ is closer to that of a bipartite lattice with one fewer nearest neighbor. This spread indicates that the knowledge of $J_{\max}$ alone is insufficient to determine $\lambda_1$. The two-dimensional anisotropy dependence behind this spread is shown in Fig. 6, where we present it for lattices (L5) and (L7) in the form of color density plots as a function of $J_x$ and $J_y$.
Less obvious from Fig. 5 is the fact that the significant majority of the sampled values of $\lambda_1$ for the lattices (L6) and (L7) agree very well with the dependence $\lambda_1(J_{\max})$ for the bipartite lattices (L3) and (L5), respectively. The comparison between Figs. 6(a) and (b) clearly shows that the difference between lattices (L5) and (L7) is only pronounced when all three interaction constants have the same sign and roughly the same value, in other words, when they approach the Heisenberg limit. This implies that $\lambda_1$ for the anti-Heisenberg Hamiltonian has a more typical value than for the Heisenberg Hamiltonian, and that the expected universality of $\lambda_1(J_{\max})$ for the same number of nearest neighbors still roughly holds even for nonbipartite lattices. It thus appears that the conservation of the total spin, i.e. $\sum_i \mathbf{S}_i$, in the Heisenberg case leads to a reduction of the effective number of nearest neighbors by roughly one, as far as the value of $\lambda_1$ is concerned.
We suspect that the situation here is similar to the origin of the frustrated low-temperature magnetism for the Heisenberg model on nonbipartite lattices. The interaction energy for a spin pair is minimal when the two spins are antiparallel to each other, but, on a nonbipartite lattice, such an antiparallel configuration cannot simultaneously exist for all pairs of interacting spins. Hence the ground state of a nonbipartite lattice is frustrated. In the case of the Lyapunov instabilities, the conservation of the total spin polarization implies that the perturbation vector $\delta\gamma_1 \equiv \{\delta\mathbf{S}_i\}$ corresponding to the largest Lyapunov exponent does not grow along the direction of the total spin polarization, i.e. $\sum_i \delta\mathbf{S}_i(t) = \sum_i \delta\mathbf{S}_i(0)$. At the same time, $|\delta\mathbf{S}_i(t)|$ grows, on average, exponentially, which means that, for $t \gg 1/\lambda_1$, $|\delta\mathbf{S}_i(t)| \gg |\delta\mathbf{S}_i(0)|$. This, in turn, implies that, in the leading order, $\sum_i \delta\mathbf{S}_i(t) \approx 0$. As a result, when a given projection $\delta\mathbf{S}_i(t)$ grows, this growth needs to be compensated by the growth of $\delta\mathbf{S}_j(t)$ for other spins in the opposite direction. However, since the interaction is local, these "other spins" can only be the nearest neighbors. Achieving the maximum growth of the perturbation, and hence the largest value of the Lyapunov exponent, presumably requires that the perturbation vector $\delta\gamma_1$ maximizes the anti-alignment of $\delta\mathbf{S}_i(t)$ for adjacent sites. For the bipartite lattices, this anti-alignment can, in principle, be made perfect, but for the nonbipartite lattices this is impossible. Such an explanation is consistent with the fact that $\lambda_1$ for lattices (L6, L7) in the Heisenberg limit is smaller than in the anti-Heisenberg limit. It also seems to be connected to the fact that the above reduction approximately leads to the value of $\lambda_1$ for a bipartite lattice with roughly one fewer nearest neighbor per site.
We finally remark that the overall small spread of values of $\lambda_1$ for bipartite lattices at a given value of $J_{\max}$ is related to the symmetries described in Section 2.2, which imply that the Lyapunov exponents are identical for the eight combinations of the coupling constants characterized by the same value of $J_{\max}$, namely $(\pm J_x, \pm J_y, \pm J_z)$ with all independent choices of the signs. The values of $\lambda_1$ cannot change much between these eight points.
Summary and conclusions
We have investigated numerically the Lyapunov spectra of systems of many classical spins for a variety of lattices and coupling constants at infinite temperature. The possibility of varying the coupling constants, from the highly symmetric isotropic Heisenberg model, through partially symmetric couplings such as the anti-Heisenberg model, to the completely anisotropic integrable case, makes these systems particularly interesting. We have presented: (i) calculations of the Lyapunov spectra for selected lattices of interacting classical spins; (ii) investigations of the lattice-size dependence of the Lyapunov spectra; (iii) investigations of the largest Lyapunov exponents for a broader group of coupling constants, lattices and large lattice sizes; and (iv) a discussion of the difference between the largest Lyapunov exponents for bipartite and nonbipartite lattices. The computed Lyapunov spectra were found to be weakly convex. We have observed no finite offset of the smallest positive exponents, and, to the extent that we have searched, we have not encountered any evidence of Lyapunov-Goldstone modes. Both the largest Lyapunov exponents and the whole Lyapunov spectra were found to become independent of the lattice size for sufficiently large lattices. We have given an analytical argument explaining this finding. In addition, we have found that the largest Lyapunov exponents for bipartite and nonbipartite lattices depend differently on the anisotropy of the coupling. This is due to the special symmetry of the bipartite lattices with respect to the sign change of two out of the three coupling constants.
Acknowledgements
A.S.dW's work is financially supported by an Unga Forskare grant from the Swedish Research Council. B.V.F. acknowledges the hospitality of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, where a part of this paper was written, and support by the National Science Foundation under Grant No. NSF PHY11-25915. The numerical part of this work was performed at the bwGRiD computing cluster at the University of Heidelberg.
|
2013-05-21T11:39:14.000Z
|
2012-09-07T00:00:00.000
|
{
"year": 2012,
"sha1": "b6032852a57bea1ff9d79b33e0df3669effb7bf9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1209.1468",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b6032852a57bea1ff9d79b33e0df3669effb7bf9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
21702404
|
pes2o/s2orc
|
v3-fos-license
|
Degenerate observables and the many Eigenstate Thermalization Hypotheses
Under unitary time evolution, expectation values of physically reasonable observables often evolve towards the predictions of equilibrium statistical mechanics. The eigenstate thermalization hypothesis (ETH) states that this is also true already for individual energy eigenstates. Here we aim at elucidating the emergence of ETH for observables that can realistically be measured due to their high degeneracy, such as local, extensive or macroscopic observables. We bisect this problem into two parts, a condition on the relative overlaps and one on the relative phases between the eigenbases of the observable and Hamiltonian.
"Pure state quantum statistical mechanics" [1][2][3][4][5] aims at understanding under which conditions the use of tools from statistical mechanics can be justified based on the first principles of standard quantum mechanics with as few extra assumptions as possible.To explain the emergence of thermalization it combines three approaches: Typicality arguments [6][7][8][9][10][11][12][13], the dynamical equilibration approach [14][15][16][17][18][19][20][21] and the Eigenstate Thermalization Hypothesis (ETH) [22][23][24][25][26][27][28][29][30][31][32][33][34].According to the first one, systems appear to be in equilibrium because, in a precise sense, most states are in equilibrium.Alternatively, according to the second approach apparent equilibration of observables and whole subsystems emerges because initial states of large many-body systems overlap with many energy eigenstates and therefore explore a large part of Hilbert space during their evolution, almost all the while being almost indistinguishable from a static equilibrium state.ETH, on the other hand, is a hypothesis about properties of individual eigenstates of sufficiently complicated quantum many-body systems which was suggested by various results in quantum chaos theory and it adduces the appearance of thermalization during such equilibration to an underlying chaotic behavior.The basic idea is that, for large system sizes and in sufficiently complicated quantum many-body systems, the energy eigenstates can be so entangled that when we look at their overlaps with the basis of a physical observable they can be effectively described by random variables.If the ETH is fulfilled, it guarantees thermalization whenever equilibration happens because of the mechanisms described above.Depending on how broad one wants the class of initial states that thermalize to be, the fulfillment of the ETH is also a necessary criterion for thermalization [5,35].
The ETH is sometimes criticized for its lack of predictive power, as it leaves open at least three important questions: what precisely are "physical observables"; what makes a system "sufficiently complicated" for the ETH to apply; and how long it will take for such observables to reach thermal expectation values [18,21]. For this reason, a lot of effort has been focused on numerical investigations that validate the ETH in specific Hamiltonian models and for various observables, often including local ones. The ETH is generally found to hold in nonintegrable systems that are not many-body localized, and equilibration towards thermal expectation values usually happens on reasonable time scales [18,20,21,34].
Recently [36] it has been shown that for any Hamiltonian there is always a large number of observables which satisfy ETH. They have been dubbed "Hamiltonian Unbiased Observables" (HUO) and admit an algorithmic construction. Unfortunately this still leaves open when concrete physically relevant observables satisfy the ETH. In this letter we make progress in this direction. Building on the connection between HUOs and ETH, we present a theorem which can be used as a tool to investigate the emergence of ETH. In order to show how it can be used, we present three applications: local observables, extensive observables, and macro-observables. We will give precise definitions for each of them later.
The paper is organized as follows. First we set up the notation, recall different formulations of the ETH and clarify which one we will be using throughout the paper. We continue with a brief digression on physical observables and degeneracies and recall the concepts of Hamiltonian unbiased bases and observables. We then present our main result, which elucidates the question under which conditions highly degenerate observables are HUO, and discuss consequences of it for local observables, extensive observables, and a certain type of macro-observables.
Versions of the ETH. We start by reviewing several versions of the ETH that have appeared in the literature. All versions of the ETH are statements about properties of large systems. In principle one would hence state the following in terms of families of systems of increasing size/particle number. To not over-complicate things we do not make this explicit and instead implicitly assume that a limit of large system size exists and makes sense and that it is understood that the following are meant as statements about asymptotic scaling. Throughout the paper we assume all Hamiltonians H to be non-degenerate with eigenvalues E_m and eigenstates |E_m>. For any given initial state of the form |\psi(0)\rangle = \sum_m c_m |E_m\rangle we denote by

\omega := \sum_m |c_m|^2 \, |E_m\rangle\langle E_m|

the diagonal ensemble, also known as the time averaged state.
Before we continue, we review some variants of the eigenstate thermalization hypothesis. These are essentially different mathematical statements which aim at formalizing the same physical intuition. Our goal here is to provide a reasonable grouping of the most used versions of ETH and to state which one we will refer to throughout the paper.
Hypothesis 1 (Original ETH). The matrix elements A_{m,n} := <E_m|A|E_n> of any physically reasonable observable A with respect to the energy eigenstates |E_m> in the bulk of the spectrum of a Hamiltonian of a system with N particles satisfy, for some constant c > 0,

|A_{m,n}| \in O(e^{-cN}) \ \ (m \neq n), \qquad |A_{m+1,m+1} - A_{m,m}| \in O(e^{-cN}) .

In words: off-diagonal elements of physically reasonable observables and the differences between neighboring diagonal elements are exponentially small in the size of the system. This kind of ETH is what Srednicki argued to be fulfilled in a hard-sphere gas [25]. Similar variants appeared for example in [26, 27, 32, 34, 37].
Hypothesis 2 (Thermal ETH). There exists a function β: R → R⁺₀ such that for any physically reasonable observable A the expectation values A_m := <E_m|A|E_m> of A with respect to the energy eigenstates |E_m> in the bulk of the spectrum of a Hamiltonian of a system with N particles are close to thermal in the sense that

\left| A_m - \mathrm{Tr}\!\left( A \, \frac{e^{-\beta(E_m) H}}{\mathrm{Tr}\, e^{-\beta(E_m) H}} \right) \right| \in O(1/N) .

Such formulations of ETH appeared for example in [28, 35, 38, 39], along with a rigorous proof of a statement that is closely related to but weaker than Hypothesis 1 for translation-invariant Hamiltonians with finite-range interactions. Whether a 1/N scaling should be required or whether one would be content with a weaker decay is debatable.
Hypothesis 3 (Smoothness ETH). For any physically reasonable observable A there exists a function a: R → R that is Lipschitz continuous with a Lipschitz constant L ∈ O(1/N) such that the expectation values A_m := <E_m|A|E_m> of A with respect to the energy eigenstates |E_m> in the bulk of the spectrum of a Hamiltonian of a system with N particles satisfy

|A_m - a(E_m)| \to 0 \quad \text{in the limit of large } N .

In words: the expectation values of physically reasonable observables in energy eigenstates vary slowly as a function of energy instead of jumping widely over a broad range of values even in small energy intervals. The function a(E) is often related to the average of A over a small microcanonical energy window around E. Similar statements of the ETH have been used for example in [3, 40-46].
Several other versions of the ETH and variations of the statements above can be found in the literature, and there is a further level of diversification which needs to be mentioned: all the statements above are intended to hold for all energy eigenstates in the bulk of the spectrum. It is also possible to require them to hold only for all but a small fraction of these eigenstates, which somehow goes to zero in the thermodynamic limit. Such statements have been dubbed Weak ETH [47]. Another related concept is the eigenstate randomization hypothesis [48], which states that the diagonal elements of physical observables should behave as random variables. Together with an assumption on the smoothness of the energy distribution, this allows one to derive a bound on the difference between the infinite-time average and a suitable microcanonical average.
The main difference among the formulations of the ETH listed above is that the first one is also a statement about the off-diagonal matrix elements A_{mn}, while the other two pertain only to diagonal matrix elements A_{mm}. We believe it is important to highlight this aspect because the off-diagonal matrix elements contribute in a non-trivial way to the out-of-equilibrium dynamics of the observable [15, 17-21, 49]. This is the reason why we (as others do [32]) consider the Original ETH as more fundamental. Hereafter, when we refer to ETH we will always refer to the technical statement of Original ETH, or ETH 1.
Physical observables. Another issue left open by the above definitions of the ETH is the identification of physical observables for which ETH is supposed to hold. In this work we show that highly degenerate observables are good candidates. Those are natural in at least three scenarios. First, local observables only have a small number of distinct eigenvalues, as they act non-trivially only on a low dimensional space, and each such level is exponentially degenerate in the size of the system on which they do not act. Second, averages of local observables, like for example the total magnetization, are, for combinatorial reasons, highly degenerate around the center of their spectrum. Third, macro-observables as introduced by von Neumann [6, 11] and studied in [9, 12, 13], which are degenerate through the notion of macroscopicity. Here the idea is that on macroscopically large systems one can only ever measure a rather small number of observables; these observables can take only a number of values that is much smaller than the enormous dimension of the Hilbert space, and they either commute exactly or are very close to commuting observables. An example are the classical position and momentum of a macroscopic system. While they are of course ultimately a coarse-grained version of the sum of the microscopic positions and momenta of all the constituents, they can both be measured without disturbing the other in any noticeable way. Such classical observables hence partition, in a natural way, the Hilbert space of a quantum system into a direct sum of subspaces, each corresponding to a vector of assignments of outcomes for all the macro-observables. Even by measuring all the available macro-observables one can only identify which subspace a quantum system is in, but never learn its precise quantum state. To get the impression that a system equilibrates or thermalizes it is hence sufficient that the overlap of the true quantum state with each of the subspaces from the partition is roughly constant in time and the average agrees with the suitable thermodynamical prediction. One would thus expect ETH to hold for such observables. As in any realistic situation the number of observables times the maximum number of outcomes per observable (and hence the number of different subspaces) is vastly smaller than the dimension of the Hilbert space, one is again dealing with highly degenerate observables.
Hamiltonian Unbiased Observables. Before we proceed with the main result of the paper, it is important to summarize the results derived in [36]. Suppose A := \sum_i a_i A_i is an observable with eigenvalues a_i and respective projectors A_i. We say that A is a thermal observable with respect to the state ρ if its measurement statistics p(a_i) := Tr(ρ A_i) maximizes the Shannon entropy S_A := −\sum_i p(a_i) log p(a_i) under two constraints: normalization of the state Tr(ρ) = 1 and fixed average energy Tr(ρ H).
In [36] it was proven that this is a generalization of the standard notion of thermal equilibrium, in the following sense. What we usually mean by thermal equilibrium is that the state of the system ρ is close to the Gibbs state ρ_G, in the sense given by some distance defined on the convex set of density matrices. A well-known way to characterize ρ_G is via the constrained maximization of the von Neumann entropy S_vN := −Tr(ρ log ρ). Now, for any state ρ, the minimum Shannon entropy S_A (among all the observables A) is the von Neumann entropy:

\min_A S_A(\rho) = S_{vN}(\rho) .

Therefore, the Gibbs ensemble is the state that maximizes the lowest among all the Shannon entropies S_A.
Hence the maximization of the Shannon entropy S_A is an observable-dependent generalization of the ordinary notion of thermal equilibrium. One can use the Lagrange multiplier technique to solve the constrained optimization problem, and two equilibrium equations emerge. They implicitly define the equilibrium distribution p_eq(a_i) as their solution. Using such equations to investigate the emergence of thermal observables in a closed quantum system, it can be proven that for any given Hamiltonian there is a huge number of observables that satisfy ETH: the Hamiltonian Unbiased Observables (HUO).
The name originates from the following notion: a set of normalized vectors {|u_j>}_j is mutually unbiased with respect to another set of vectors {|v_k>}_k if the inner product between any pair satisfies

|\langle u_j | v_k \rangle|^2 = \frac{1}{D} ,

where D is the dimension of the Hilbert space. A basis is called a Hamiltonian Unbiased Basis (HUB) if it is unbiased with respect to the Hamiltonian basis. Accordingly, a HUO is an observable which is diagonal in a HUB. The concept of mutually unbiased bases (MUBs) has been studied in depth in quantum information theory [50-55]. For our purposes, the most important result is the following: given a Hilbert space H = ⊗_{j=1}^N H_j with dim(H_j) = p for some prime number p and some fixed orthonormal basis in H, there is a total of p^N + 1 orthonormal bases, including the fixed basis, that are all pairwise mutually unbiased [50, 52]. Moreover, there is an algorithm to explicitly construct all of them [50, 52]. Applying this result to the Hamiltonian basis we conclude that there are p^N HUBs.
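For concreteness, here is a minimal numerical sketch of one common version of that algorithm for a single qudit (N = 1) of odd prime dimension p: the j-th basis has vectors |j, k> = p^{-1/2} \sum_l ω^{j l² + k l} |l> with ω = e^{2πi/p}. This is a standard construction in the spirit of [50, 52], not necessarily the exact variant used in [36], and the case p = 2 requires a slightly different recipe.

```python
import itertools
import numpy as np

p = 7                                   # odd prime = Hilbert-space dimension
omega = np.exp(2j * np.pi / p)
l = np.arange(p)

bases = [np.eye(p, dtype=complex)]      # the fixed (computational) basis
for j in range(p):                      # p further bases, k labels the vectors
    B = np.array([omega ** (j * l ** 2 + k * l) for k in range(p)]).T / np.sqrt(p)
    bases.append(B)                     # columns of B are the basis vectors

# every pair of vectors taken from two *different* bases must have |overlap|^2 = 1/p
for (i1, B1), (i2, B2) in itertools.combinations(enumerate(bases), 2):
    assert np.allclose(np.abs(B1.conj().T @ B2) ** 2, 1 / p), (i1, i2)
print(f"verified {len(bases)} pairwise mutually unbiased bases in dimension {p}")
```

The cross-basis overlaps are Gauss sums of magnitude sqrt(p), which is what makes the unbiasedness condition hold exactly rather than only approximately.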
By studying the matrix elements of a HUO in the Hamiltonian basis, it is not too difficult to see that sufficiently degenerate HUOs should satisfy ETH (under some mild additional conditions that we discuss in the following). Suppose a HUO O_HUO has spectral decomposition

O_{HUO} = \sum_j \lambda_j \sum_{s=1}^{d_j} |j, s\rangle\langle j, s| ,

where {|j, s>} is the HUB whose elements have been labeled with two indices: j runs over the n_A distinct eigenvalues λ_j while s runs over the possible d_j degeneracies of each eigenvalue. It is easy to see that

\langle E_m | O_{HUO} | E_m \rangle = \frac{1}{D} \sum_j \lambda_j d_j = \frac{\mathrm{Tr}\, O_{HUO}}{D} .

Therefore, the diagonal matrix elements are constant and the average value at equilibrium, i.e., computed from the diagonal ensemble, is microcanonical,

\mathrm{Tr}(\omega\, O_{HUO}) = \langle O_{HUO} \rangle_{mc} ,

where <·>_mc is the expectation value computed on the microcanonical state (1/D) I. Because of the MUB condition we have <E_m|j, s> = e^{iθ^m_{js}}/√D, which means that the off-diagonal matrix elements are given by

(O_{HUO})_{mn} = \frac{1}{D} \sum_j \lambda_j \sum_{s=1}^{d_j} e^{i\gamma^{mn}_{js}} , \qquad \gamma^{mn}_{js} := \theta^m_{js} - \theta^n_{js} . \quad (8)

In [36] a numerical study of the phases γ^{mn}_{js} was performed. It was argued that the γ^{mn}_{js}, when constructed with the standard algorithm to build MUBs, have certain features of pseudo-random variables with uniform distribution in [−π, π]. Whenever each eigenvalue has a large degeneracy, i.e., d_j ≫ n_A ≥ 2, we can apply the central limit theorem to argue that

\frac{1}{d_j} \sum_{s=1}^{d_j} e^{i\gamma^{mn}_{js}} \approx \frac{X^{(j)}_{mn}}{\sqrt{d_j}} , \quad (9)

where X^{(j)}_{mn} is a complex random variable, normally distributed, with mean μ and variance σ². Under the additional assumption that the X^{(j)}_{mn} are independent, one finds that, because Eq. (9) turns Eq. (8) into a finite sum of normally distributed random variables, we have (O_{HUO})_{mn} ∼ N(0, σ²_{n_A}) with variance

\sigma^2_{n_A} = \frac{\sigma^2}{D^2} \sum_j \lambda_j^2 \, d_j . \quad (10)

Eventually we get, for a binary observable, i.e., one with eigenvalues ±1, that for large d_j

|(O_{HUO})_{mn}| \sim \frac{\sigma}{\sqrt{D}} , \quad (11)

which means that O_HUO satisfies Hypothesis 1.
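The exponential smallness in Eq. (11) is easy to see numerically. The sketch below models the off-diagonal element of a binary HUO for one fixed pair (m, n) by drawing the phases γ^{mn}_{js} as i.i.d. uniform variables on [−π, π]; treating them as truly random is an assumption put in by hand (in [36] it is only argued that they behave pseudo-randomly). The typical magnitude then tracks 1/√D = 2^{-N/2}.

```python
import numpy as np

rng = np.random.default_rng(0)

def offdiag_magnitude(D, n_samples=500):
    """Typical |O_mn| for a binary HUO: O_mn = (1/D) sum_j lambda_j sum_s e^{i gamma},
    with the phases gamma drawn i.i.d. uniform on [-pi, pi]."""
    d = D // 2                                  # degeneracy of each of the two levels
    gammas = rng.uniform(-np.pi, np.pi, size=(n_samples, 2, d))
    phases = np.exp(1j * gammas)
    lam = np.array([1.0, -1.0])                 # binary spectrum +1/-1
    O_mn = (phases.sum(axis=2) * lam).sum(axis=1) / D
    return np.abs(O_mn).mean()

for N in range(6, 14, 2):                       # D = 2^N
    D = 2 ** N
    print(f"N={N:2d}  typical |O_mn| = {offdiag_magnitude(D):.4f}"
          f"   1/sqrt(D) = {1 / np.sqrt(D):.4f}")
```

Doubling N divides the typical off-diagonal element by 2, the exponential suppression claimed in Hypothesis 1.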
Before we proceed, we would like to expand on the mechanism behind the emergence of ETH for a highly degenerate HUO. Eq. (9) will hold whenever we can apply the central limit theorem within each subspace at fixed eigenvalue. As was argued in [36], for a fixed pair of indices (m, n), the phases γ^{mn}_{js} behave as if they were pseudo-random variables, and their number is exponentially large in the system size. The labels (j, s) provide a partition of these D phases into n_A groups, each made of d_j elements. In the overwhelming majority of cases each group of d_j phases will exhibit the same statistical behavior as the whole set. In this case, Eq. (9) will behave as a sum of independent random variables and it will give the exponential decay of the off-diagonal matrix elements. It may happen that the index j, labeling different eigenvalues, samples the phases in a biased way and prevents some of the off-diagonal matrix elements from being exponentially small. This, even though it seems unlikely, is possible, and it would induce a coherent dynamics on the observable which can prevent its thermalization. This can happen, for example, in integrable quantum systems for observables which are close to being conserved quantities.
The point can also be seen from the perspective of random matrix theory. Given the Hamiltonian eigenbasis, if we perform several random unitary transformations and study the distribution of the outcome basis, it can be shown that in the overwhelming majority of cases we will end up with a basis that is almost a HUB [53, 55], up to corrections which are exponentially small in the system size. Hence for large system sizes, if we pick a basis at random, most likely it will be almost a HUB [53, 55].
We now present the main result of the paper: a theorem that can be used to study under which conditions highly degenerate observables are HUO.
Theorem 1. Let {|ψ_m>}_{m=1}^M be a set of M normalized vectors in a Hilbert space H of dimension D, and let A := \sum_{j=1}^{n_A} a_j Π_j be an operator on H with n_A ≤ D distinct eigenvalues a_j and corresponding eigen-projectors Π_j. Decompose H = ⊕_{j=1}^{n_A} H_j into a direct sum such that each H_j is the image of the corresponding Π_j with dimension D_j. For each j for which D_j(D_j − 1) ≥ M + 1 there exists an orthonormal basis {|j, k>}_{k=1}^{D_j} ⊂ H_j such that for all k, m

|\langle j, k | \psi_m \rangle|^2 = \frac{\langle \psi_m | \Pi_j | \psi_m \rangle}{D_j} . \quad (13)

A detailed proof is provided in the Supplemental Material. If the condition D_j(D_j − 1) ≥ M + 1 is fulfilled for all j, then the set of all {|j, k>}_{j,k} obviously is an orthonormal basis for all of H, and A is diagonal in that basis. So, as long as the degeneracies D_j of A are all high enough with respect to M, A has an eigenbasis whose overlaps with the states |ψ_m> are given exactly by the right-hand side of (13).
A particularly relevant case is when A is a local observable acting non-trivially only on some small subsystem S of dimension D_S of a larger N-partite spin system of dimension D = d^N, i.e., A := \sum_{j=1}^{D_S} a_j |a_j\rangle\langle a_j| ⊗ 1, and {|ψ_m>}_{m=1}^M is taken to be an eigenbasis {|E_m>}_{m=1}^D of the Hamiltonian H of the full system. We summarize some non-essential further details in the Supplemental Material. In this case the degeneracies are all at least D_j ≥ D/D_S = d^{N−|S|}, so that the above result guarantees that for all observables on up to |S| < N/2 sites there exists a tensor product basis {|a_j, k>}_{j,k} for H which diagonalizes A and has the property that

|\langle a_j, k | E_m \rangle|^2 = \frac{\langle E_m | \left( |a_j\rangle\langle a_j| \otimes 1 \right) | E_m \rangle}{d^{N-|S|}} . \quad (14)

For subsystems with support on a small part of the whole system, |S| ≪ N − |S|, it is well known that the reduced states of highly entangled states are (almost) maximally mixed [8], i.e. proportional to the identity. Moreover, based on the data available in the literature [56-64], there is agreement on the fact that, away from integrability, the energy eigenstates in the bulk of the spectrum have a large amount of entanglement. Thus, if the eigenstates are highly entangled, the numerator in (14) is close to D_j/D and the right-hand side of (14) is close to 1/D, so that the basis {|a_j, k>}_{j,k} is (almost) a HUB. This way of arguing shows how entanglement in the energy basis can lead to the emergence of the ETH in a local observable. While this result was expected for the diagonal part of ETH, we would like to stress that it is a non-trivial statement about the off-diagonal matrix elements. Since the magnitude of the off-diagonal matrix elements controls the magnitude of fluctuations around the equilibrium values, their suppression with increasing system size is of paramount importance for the emergence of thermal equilibrium. If one assumes high entanglement in the energy eigenstates, it is trivial to see that A_{mm} ≈ Tr A/D. Moreover, thanks to the HUO construction and Theorem 1 we can also make non-trivial statements (Eq. (9) and Eq. (11)) about the off-diagonal matrix elements.
The physical picture that emerges is the following: entanglement in the energy eigenstates is the feature which makes a local observable satisfy the statement of the ETH. If the energy eigenstates are highly entangled in a certain energy window I_0 = [E_a, E_b], as is expected to happen in a non-integrable model, the ETH will be true for local observables in the same energy window.
We now turn our attention to the study of extensive observables and assume that we are interested in a certain energy window [E_a, E_b] which contains M ≤ D energy eigenstates. The details of the computations can be found in the Supplemental Material. The paradigmatic case that we study is the global magnetization M_z := \sum_{i=1}^{N} \sigma^z_i. Writing its spectral decomposition we have M_z = \sum_{j=-N}^{N} j \Pi_j, where the degeneracy Tr Π_j = D_j of each eigenvalue j can be easily computed to be D_j = \binom{N}{(N-j)/2}. Again, we call H_j ⊂ H the image of the projector Π_j. The inequality D_j(D_j − 1) ≥ M selects a subset j ∈ [−j*(M), j*(M)] of spaces H_j for which the conditions of our theorem are satisfied. Small M will guarantee that the hypotheses of the theorem are satisfied in a larger set of subspaces H_j. If we are interested in the whole energy spectrum, M = D, a rough estimation, supported by numerical calculations, shows that j*(D) scales linearly with the system size: j*(D) ≈ 0.78N. The physical intuition that we obtain is the following: subspaces with "macroscopic magnetization", i.e. around the edges of the spectrum of M_z, have very small degeneracy and the theorem does not yield anything meaningful for them. However, in the bulk of the spectrum there is a large window j ∈ [−j*(D), j*(D)] where the respective subspaces H_j meet the conditions for the applicability of the theorem. Therefore, for all j ∈ Z ∩ [−j*(D), j*(D)] we have

|\langle j, s | E_m \rangle|^2 = \frac{\langle E_m | \Pi_j | E_m \rangle}{D_j} .

If, for some physical reasons, one is not interested in the whole energy spectrum but only in a small subset, the window [−j*(M), j*(M)] will widen accordingly.
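The quoted scaling j*(D) ≈ 0.78N is quick to reproduce: for each N one keeps the eigenvalues j (which have the parity of N) whose degeneracy satisfies D_j(D_j − 1) ≥ 2^N + 1. A minimal sketch follows; the slow drift of the ratio toward its asymptote, fixed by solving H₂((1 − x)/2) = (log 2)/2 for x ≈ 0.78, comes from subleading corrections to Stirling's formula.

```python
from math import comb

def j_star(N):
    """Largest |j| whose magnetization sector satisfies D_j (D_j - 1) >= 2^N + 1,
    with D_j = binomial(N, (N - j)/2); j runs over -N, -N+2, ..., N."""
    D = 2 ** N
    good = [abs(j) for j in range(-N, N + 1, 2)
            if comb(N, (N - j) // 2) * (comb(N, (N - j) // 2) - 1) >= D + 1]
    return max(good)

for N in (20, 40, 80, 320, 1280):
    js = j_star(N)
    print(f"N={N:4d}   j* = {js:4d}   j*/N = {js / N:.3f}")
```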
Thanks to our theorem we can extract a physical criterion under which the global magnetization will satisfy ETH.
Assuming that we can use Stirling's approximation, M_z is a HUO iff

\langle E_m | \Pi_j | E_m \rangle \approx \frac{D_j}{2^N} = e^{-N H_2(p(j) \| p_{mix})} , \quad (17)

where p(j) := (1/2 + j/2N, 1/2 − j/2N), p_mix := p(0), and we used the binary relative entropy H_2(p‖q) := \sum_{k=1,2} p_k \log(p_k/q_k). This relation has a natural interpretation in terms of large-deviation theory. Indeed, such a relation is a statement about the statistics induced by the energy eigenstates on the observable M_z. If such statistics satisfy large-deviation theory, as in Eq. (17), the observable will satisfy ETH. A complete understanding of how this concretely happens goes beyond the purpose of the present work and is left for future investigation.
We note that the hypotheses of the theorem do not hold for the whole spectrum of M_z. Moreover, the proven connection between HUOs and ETH relies on the applicability of the central limit theorem in the degeneracy space H_j. Hence the picture that emerges is the following. For extensive observables, ETH will hold if the statistics induced by the energy eigenstates satisfies a large-deviation theory. If this is true, we do not expect it to hold throughout the whole spectrum but only in the subsectors with sufficiently high degeneracy. Both statements fully agree with the intuition that, in the thermodynamic limit, macroscopically large values of an extensive sum of local observables should be highly unlikely. In a recent work by Biroli et al. [30], it was argued that in a chain of interacting harmonic oscillators, the measurement statistics of the average of the nearest-neighbor interactions, given by the diagonal ensemble, satisfies a large-deviation statistics. This allows for the presence of rare, non-thermal eigenstates which can account for the absence of thermalization in some integrable systems. Our results go along with such intuition. Indeed, if it were possible to show that a large-deviation bound emerges at the level of each energy eigenstate, for all of them, this would amount to a proof of ETH, as discussed before.
We now come to the last application of our theorem: the macro-observables originally proposed by von Neumann. As for the two previous applications, more details can be found in the Supplemental Material. As explained before, macro-observables induce a partition of the Hilbert space into subspaces in which such classical-like observables all have well defined eigenvalues. In this sense a macrostate is an assignment of the eigenvalues of all these observables, and the index j runs over different macrostates. By construction, each macrostate j = 1, ..., n corresponds to a subspace H_j of the whole Hilbert space which is highly degenerate and to which we can apply our theorem. According to the result by von Neumann [6] and Goldstein et al. [9] it can be proven that the following relation holds for a given partition, for most Hamiltonians, in the sense of the Haar measure: <E_m|P_j|E_m> ≈ D_j/D. The P_j's are the projectors onto the subspaces H_j. Our theorem tells us that there exists a basis {|j, s>} which diagonalizes all the macro-observables such that

|\langle j, s | E_m \rangle|^2 = \frac{\langle E_m | P_j | E_m \rangle}{D_j} .

Using it in synergy with the previously mentioned result we find

|\langle j, s | E_m \rangle|^2 \approx \frac{1}{D} .

This means that for most Hamiltonians, those macro-observables have a common eigenbasis that is a HUB. Given the huge degeneracy of the spaces H_j, this in turn allows us to formulate the following statement: for most Hamiltonians, in the sense of Haar, the macro-observables are degenerate HUOs and therefore satisfy ETH 1.
Conclusions. The ETH captures the widespread and numerically very well corroborated intuition that the eigenstates of sufficiently complicated quantum many-body systems have thermal properties. Its importance stems from the fact that, together with the results that constitute the framework of pure state quantum statistical mechanics, a proof of the ETH would yield a very general argument for the emergence of not just equilibration, but thermalization towards the predictions of equilibrium statistical mechanics from quantum mechanics alone. Such a rigorous proof is, however, still missing, despite the progress in recent years that has significantly improved our understanding of the ETH by means of proofs of related statements and counterexamples. Here we contribute to this program by bisecting the problem of proving ETH into two sub-problems, related to the relative phases and the overlaps between the eigenstates of the Hamiltonian and an observable. We argue that the ETH can fail because of the former only through conspiratorial correlations in the phases. Our main result concerns the second half of the problem. Here we prove a rigorous result that shows when highly degenerate observables satisfy this part of the ETH and become Hamiltonian unbiased observables. We illustrate our results with three types of physical observables (local, extensive, and macroscopic) and collect and compare different versions of the ETH. Our approach allows us in particular to make statements about the off-diagonal elements that are prominent in the original version of the ETH.
Section A: Proof of the main theorem

In this Appendix we provide the details of the proof of the main result of the paper, Theorem 1. In the first subsection we give some background material concerning the formalism of the generalized Bloch-vector parametrization. This formalism will be used in the second subsection, where we give the actual proof of Theorem 1.
Subsection A1: Generalised Bloch-vector parametrization
We start by briefly recalling the formalism of the generalized Bloch-vector parametrization [65, 66] of a pure quantum state. The standard Bloch-vector parametrization is a well-known way to describe the space of pure states of a qubit, using the isomorphism between its two-dimensional projective Hilbert space and the 2-sphere S². This isomorphism can be easily generalized to arbitrary dimensions, and it is well known that the projective space of a D-dimensional complex Hilbert space is isomorphic to S^{D²−2}. The isomorphism can be made explicit by associating to any normalized rank-1 projector |ψ><ψ| a generalized Bloch vector b via

|\psi\rangle\langle\psi| = \frac{1}{D}\, \mathbb{1} + \sqrt{\frac{D-1}{D}}\; \vec{b} \cdot \vec{\gamma} , \quad (19)

where γ is a vector with elements γ_i := γ̃_i/√2 and the γ̃_i are the D² − 1 generators of SU(D), with the following properties: Tr γ̃_i = 0 and Tr(γ̃_i γ̃_j) = 2δ_{ij}. Even though the term "Bloch vector" is normally used to identify the 2-dimensional case, hereafter we will use it for its D-dimensional counterpart. The constant prefactor √((D−1)/D) has been put to make the norm of the Bloch vector independent of the dimension of the Hilbert space and always equal to one. The square of the absolute value of the scalar product between two pure states |ψ>, |ψ'> ∈ H is mapped into the scalar product of the two Bloch vectors b, b', plus a constant term:

|\langle \psi | \psi' \rangle|^2 = \frac{1}{D} + \frac{D-1}{D}\; \vec{b} \cdot \vec{b}' . \quad (20)

From this relation we can see that mutual unbiasedness is a very natural condition when written in terms of the respective Bloch vectors. For any two sets of pure states {|ψ_j>}_j and {|ψ'_k>}_k, with respective Bloch vectors {b_j}_j and {b'_k}_k, we have

|\langle \psi_j | \psi'_k \rangle|^2 = \frac{1}{D} \ \ \forall j, k \quad \Longleftrightarrow \quad \vec{b}_j \cdot \vec{b}'_k = 0 \ \ \forall j, k .

In other words, the sets {|ψ_j>}_j and {|ψ'_k>}_k are mutually unbiased if and only if their respective sets of Bloch vectors are orthogonal. Now we look at how the property of being a basis of the Hilbert space is written in terms of the Bloch vectors of the basis elements. Let {|ψ_j>}_{j=1}^D ⊂ H be a basis of a Hilbert space of dimension D, with associated Bloch vectors {b_j}_j. Using Eq. (19) we find that {|ψ_j>}_{j=1}^D spans all of H if and only if \sum_{j=1}^{D} |ψ_j><ψ_j| = 1. Since the elements of γ are the linearly independent generators of SU(D), this is equivalent to \sum_{j=1}^{D} \vec{b}_j = 0. At the same time, the vectors {|ψ_j>}_{j=1}^D are orthonormal if and only if |<ψ_j|ψ_k>|² = δ_{jk} for all j, k ∈ {1, ..., D}, which is equivalent to

\vec{b}_j \cdot \vec{b}_k = \frac{D\,\delta_{jk} - 1}{D - 1} .

In summary, we obtain that {|ψ_j>}_{j=1}^D is a complete orthonormal basis if and only if their Bloch vectors {b_j}_j satisfy the two following conditions:

\sum_{j=1}^{D} \vec{b}_j = 0 , \qquad \vec{b}_j \cdot \vec{b}_k = \frac{D\,\delta_{jk} - 1}{D - 1} . \quad (26a)
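As a sanity check on Eqs. (19) and (20), the following sketch builds a set of SU(D) generators normalized so that Tr(γ_i γ_j) = δ_{ij} (the generalized Gell-Mann matrices divided by √2), extracts the Bloch vector of a random pure state, and verifies the unit-norm and overlap relations; the particular ordering of the generators is an arbitrary choice.

```python
import numpy as np

def su_generators(D):
    """Generators gamma_i of SU(D), normalized so that Tr(g_i g_j) = delta_ij."""
    gens = []
    for j in range(D):
        for k in range(j + 1, D):
            S = np.zeros((D, D), complex)
            S[j, k] = S[k, j] = 1 / np.sqrt(2)           # symmetric generator
            A = np.zeros((D, D), complex)
            A[j, k], A[k, j] = -1j / np.sqrt(2), 1j / np.sqrt(2)  # antisymmetric
            gens += [S, A]
    for l in range(1, D):                                 # diagonal generators
        h = np.zeros((D, D), complex)
        h[np.arange(l), np.arange(l)] = 1.0
        h[l, l] = -l
        gens.append(h / np.sqrt(l * (l + 1)))
    return gens

def bloch_vector(psi, gens, D):
    rho = np.outer(psi, psi.conj())
    return np.array([np.trace(rho @ g).real for g in gens]) * np.sqrt(D / (D - 1))

rng = np.random.default_rng(1)
D = 5
gens = su_generators(D)

def rand_state():
    v = rng.normal(size=D) + 1j * rng.normal(size=D)
    return v / np.linalg.norm(v)

psi, phi = rand_state(), rand_state()
b, c = bloch_vector(psi, gens, D), bloch_vector(phi, gens, D)
print(np.linalg.norm(b))                                   # -> 1, Eq. (19) normalization
print(abs(np.vdot(psi, phi)) ** 2, 1 / D + (D - 1) / D * (b @ c))  # both sides of Eq. (20)
```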
Subsection A2: Proof of Theorem 1

In this second subsection we present a detailed proof of Theorem 1 from the main text. In order to do this we first introduce a well-known theorem from geometry and the notions necessary to state it. We then show how the generalized Bloch-vector parametrization, together with this theorem and properties of simplices, allows us to prove Theorem 1.

In R^n, an n-simplex is the generalization of the 2D triangle and the 3D tetrahedron to arbitrary dimensions. A regular simplex is a simplex which is also a regular polytope. For example, the regular 2-simplex is the equilateral triangle and the regular 3-simplex is a tetrahedron in which all faces are equilateral triangles. An n-simplex can be constructed by connecting a new vertex to all vertices of an (n−1)-simplex with the same distance as the common edge distance of the existing vertices. This readily implies that the convex hull of any subset of n out of the n + 1 vertices of an n-simplex is itself an (n−1)-simplex, a so-called facet of the simplex. For n = 2 the facets are the sides of the triangle; for n = 3 they are the two-dimensional triangles building the boundary surface of the tetrahedron. To each facet we can associate a facet vector, defined as the vector orthogonal to the facet and with Euclidean length equal to the volume of the facet. The result we need about these objects is the following theorem.
Theorem 2 (Minkowski(-Weyl) Theorem [67]). For any set of n + 1 non-coplanar vectors V_i ∈ R^n that span R^n with the property

\sum_{i=1}^{n+1} V_i = 0 , \quad (27)

there is a closed convex n-dimensional polyhedron whose facet vectors are the V_i. The converse is also true: for any closed convex polyhedron the facet vectors sum to zero.
If we apply the theorem to an n-simplex, whose facet vectors are all of equal magnitude, it can be easily seen that the (all equal) dihedral angles α between two facet vectors are such that cos α = −1/n. This fact will be used in the proof of Theorem 1. Indeed, projecting Eq. (27) onto the direction of one vector V_k and using the fact that all dihedral angles have the same magnitude α in a simplex, we have

|V_k| + \sum_{i \neq k} |V_i| \cos\alpha = (1 + n \cos\alpha)\, |V_k| = 0 ,

which gives cos α = −1/n. We can now proceed with the proof of Theorem 1.
Proof of Theorem 1. If H is a D-dimensional Hilbert space, take an arbitrary decomposition H = ⊕_{j=1}^n H_j and call P_j the projectors onto H_j. Define p_{m,j} := <ψ_m|P_j|ψ_m>. For every |ψ_m> let

|\psi^{(j)}_m\rangle := \frac{P_j |\psi_m\rangle}{\sqrt{p_{m,j}}}

be the normalized projection onto the subspace associated with P_j, or the zero vector if |ψ_m> is orthogonal to that subspace. Now, for any vector |ϕ> ∈ H_j we can write

|\langle \varphi | \psi_m \rangle|^2 = p_{m,j}\, |\langle \varphi | \psi^{(j)}_m \rangle|^2 .

Since both |ψ^{(j)}_m> and |ϕ> are contained in H_j, via the construction described in Subsection A1 they have associated generalized Bloch vectors b^{(j)}_m and b in S^{D_j²−2}. Using Eq. (19) we thus have

|\langle \varphi | \psi^{(j)}_m \rangle|^2 = \frac{1}{D_j} + \frac{D_j - 1}{D_j}\; \vec{b} \cdot \vec{b}^{(j)}_m .

We conclude that |ϕ> ∈ H_j has the desired property (Eq. (13)) of the basis vectors |j, k> if and only if b is orthogonal to all the b^{(j)}_m. For any given j, in the worst case all the M vectors b^{(j)}_m are linearly independent, leaving a subspace of dimension D_j² − 2 − M for picking b. Now, we don't want to pick just one vector b from this subspace, but D_j many such vectors, which moreover satisfy the conditions in (26a) so that their associated state vectors form an orthonormal basis for H_j. The Minkowski(-Weyl) Theorem (Theorem 2) tells us that this can be achieved by taking them to be the facet vectors V_i of a regular simplex in this subspace, as long as the subspace has sufficiently high dimension. More precisely, the first condition from (26a) is always satisfied by the facet vectors V_i of general polytopes, and the second condition can be achieved by using the facet vectors of a regular simplex, scaled so that they have Euclidean norm equal to one. This follows because the cosine of the angle between any two facet vectors of an n-simplex is −1/n. So, as long as the space of vectors orthogonal to all the b^{(j)}_m is large enough to accommodate a (D_j − 1)-simplex, D_j suitable Bloch vectors of an orthonormal basis {|j, k>}_{k=1}^{D_j} ⊂ H_j that is unbiased with respect to all |ψ_m> can be found. This is the case as long as D_j² − 2 − M ≥ D_j − 1.
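The simplex construction at the heart of the proof is easy to visualize numerically: the D unit vectors below (centered and normalized vertex vectors of a regular simplex, which point along the same directions as its unit facet vectors) satisfy exactly the two conditions (26a), summing to zero with pairwise inner products −1/(D − 1).

```python
import numpy as np

D = 6
v = np.eye(D) - 1.0 / D                 # simplex vertices, shifted to the centroid
v /= np.linalg.norm(v, axis=1, keepdims=True)

print(np.allclose(v.sum(axis=0), 0))    # first condition in (26a): sum to zero
G = v @ v.T                             # Gram matrix of the D unit vectors
target = (D * np.eye(D) - 1) / (D - 1)  # 1 on the diagonal, -1/(D-1) off it
print(np.allclose(G, target))           # second condition in (26a)
```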
Section B: Examples
In this second Appendix, we give more details about how to apply Theorem 1 to the three examples given in the manuscript and how to derive the results. We use a one-dimensional spin-1/2 chain as an exemplary case to showcase our result. Moreover, we will always be interested in using the Hamiltonian eigenvectors as the set of vectors for our theorem. This means M = D and {|ψ_j>}_{j=1}^D = {|E_m>}_{m=1}^D. However, if for some reason one is interested in a limited portion of the energy spectrum, the results can be strengthened by limiting the set of eigenvectors to M < D.
Example 1: Local observables
As a first application of our theorem, we study the emergence of ETH in a local observable which has support on less than half of the whole chain. The total number of spins is N and the Hilbert space is split into a tensor product of k and N − k spins: H = H_k ⊗ H_{N−k}. Let A = \sum_j a_j P_j have support on k ≤ N − k sites. In this case all eigenvalues have degenerate subspaces with the same dimension: dim H_j = Tr P_j = D_j = 2^{N−k}. The condition that ensures the validity of the hypothesis of Theorem 1 is

2^{N-k} \left( 2^{N-k} - 1 \right) \geq 2^N + 1 .

Taking the logarithm of both sides and performing some algebraic manipulations, one finds that this is implied by the slightly stronger condition 2(N − k) ≥ N + 1, i.e., k ≤ (N − 1)/2. Therefore, local observables with support on less than half of the chain satisfy the assumptions of our theorem. For them we obtain that there is a basis |a_j, k> that diagonalizes the observable, such that

|\langle a_j, k | E_m \rangle|^2 = \frac{\langle E_m | P_j | E_m \rangle}{2^{N-k}} .

For small subsystems k ≪ N − k, if the Hamiltonian eigenstates are highly entangled, which is expected to be true for a non-integrable system in the bulk of the spectrum, the von Neumann entropy of the reduced state ρ_k(E_m) is close to its maximum value: k log 2 − S_vN(ρ_k(E_m)) ≤ ε_k(E_m) with ε_k(E_m) ≥ 0 small. Using Pinsker's inequality and the fact that the relative entropy with respect to the maximally mixed state is just the difference between the two entropies, we have

\left\| \rho_k(E_m) - \frac{\mathbb{1}}{2^k} \right\|_1 \leq \sqrt{2\, \varepsilon_k(E_m)} .

Whenever ε_k(E_m) ≪ 1, which is expected to be true in the bulk of the spectrum, we have

\langle E_m | P_j | E_m \rangle \approx \frac{\mathrm{Tr}\, P_j}{2^N} \quad \Rightarrow \quad |\langle a_j, k | E_m \rangle|^2 \approx \frac{1}{2^N} .

We can therefore conclude that entanglement in the energy eigenstates is the feature that makes local observables HUOs. Provided certain mild assumptions, which have been discussed in the paper, are satisfied, this guarantees that they satisfy ETH. We conclude that, if the energy eigenstates are highly entangled in a certain energy window, as is expected to happen in a non-integrable model, ETH will hold for all local observables in the same energy window.
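The step from high entanglement to the HUB property can be illustrated with a Haar-random state standing in for a bulk eigenstate: its reduced state on k ≪ N sites is nearly maximally mixed, so the overlaps <E_m|P_j|E_m> land close to Tr P_j/2^N. A minimal sketch, where the chosen sizes and the rank-one projector are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 12, 3                       # chain size and support of the local observable
dS, dE = 2 ** k, 2 ** (N - k)

# Gaussian-random pure state as a stand-in for a highly entangled eigenstate
psi = rng.normal(size=(dS, dE)) + 1j * rng.normal(size=(dS, dE))
psi /= np.linalg.norm(psi)

rho_k = psi @ psi.conj().T         # reduced state on the k sites
dist = np.linalg.norm(rho_k - np.eye(dS) / dS, ord="nuc")  # trace norm of difference
print("trace-norm distance to maximally mixed:", dist)

# a rank-one local projector: its overlap concentrates around Tr(P_j)/2^N = 1/2^k
P0 = np.zeros((dS, dS)); P0[0, 0] = 1
p_mj = np.trace(P0 @ rho_k).real   # <E_m| P_j (x) 1 |E_m>
print("p_mj =", p_mj, " vs  D_j/D = 1/2^k =", 1 / dS)
```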
Example 2: Extensive observable - Global magnetization

In this second example we study the consequences of our theorem for an observable which is the extensive sum of local observables: the global magnetization M_z = \sum_{n=1}^{N} \sigma^z_n. Its spectral decomposition is M_z = \sum_{j=-N}^{N} j P_j, so the Hilbert space is decomposed as the direct sum of the H_j, which are the images of the P_j: H = ⊕_{j=-N}^{N} H_j. Their dimension D_j = Tr P_j can be computed using combinatorial arguments:

D_j = \binom{N}{(N-j)/2} .

At fixed size N, the condition D_j ≥ 1 + (2^N + 1)/D_j selects a subset j ∈ [−j*, j*] of subspaces H_j for which the theorem will hold. Note that the interval [−j*, j*] is symmetric with respect to zero because D_j = D_{−j}. In order to find how j* scales with the system size, we numerically compute how many subspaces H_j meet the condition D_j ≥ 1 + (2^N + 1)/D_j. We call this number q(N). Since the eigenvalues are given by the relative number j ∈ Z ∩ [−N, N] and they are equally spaced, we have q(N) = 2j* + 1, which means j* = (q(N) − 1)/2. In Fig. 1 we can see that it scales linearly with the system size: q(N) ∼ 1.56N. This gives j*(N) ∼ 0.78N. The picture that we obtain is the following. States with "macroscopic magnetization", i.e. around the edges of the spectrum of M_z, have very small degeneracy and the theorem is not going to hold for them. In the bulk of the spectrum, however, there is a large window j ∈ [−j*(N), j*(N)] where the respective subspaces H_j meet the conditions for the validity of the theorem. In summary, if we apply the theorem to the global magnetization we obtain

|\langle j, s | E_m \rangle|^2 = \frac{\langle E_m | P_j | E_m \rangle}{D_j} \qquad \text{for all } |j| \leq j^*(N) .

We know that the relation we are interested in is the Hamiltonian Unbiasedness, which would be

|\langle j, s | E_m \rangle|^2 = \frac{1}{2^N} ,

which in turn means to study how D_j/2^N behaves. For this goal, in the large-N regime we can use Stirling's approximation. As is known, there is not a unique way of using it; rather, there are different ways, depending on the number of sub-leading terms that one is willing to keep. Here we focus on the leading term. Note that Stirling's approximation can be used throughout the whole window [−j*(N), j*(N)] as long as N ≫ 10. This is true because j*(N) ∼ 0.78N, so |j| ∈ [0, 0.78N] and (N − |j|)/2 ≳ 0.1N. Therefore, as long as 0.1N ≫ 1, we can use Stirling's formula for all the factorials involved in D_j. It can be shown that if n ≥ k ≫ 1, at the leading order we have log C(n, k) ≈ n H_2(k/n), where H_2(x) := −x log x − (1 − x) log(1 − x) is the binary entropy. Using this we get

D_j \approx e^{N H_2\left(\frac{1 - j/N}{2}\right)} , \quad (40)

which in turn gives

\frac{D_j}{2^N} \approx e^{-N\left[\log 2 - H_2\left(\frac{1 - j/N}{2}\right)\right]} .

We have the size of the system N which multiplies a function which is a binary relative entropy. If we call p_mix := (1/2, 1/2) and p(j) := (1/2 + j/2N, 1/2 − j/2N), this reads

\frac{D_j}{2^N} \approx e^{-N H_2(p(j) \| p_{mix})} . \quad (39)

Eq. (39) has a very interesting form. It is telling us that the statistics of the eigenvalues a_j, induced by the eigenstates |E_m>, satisfies a large deviation bound. The rate function is given by the binary Kullback-Leibler divergence H_2[p(j)‖p_mix]. Now we can formulate a clear statement. Choose a subspace H_j with |j| < j* where the hypotheses of our theorem hold. If there is a window of such subspaces in which <E_m|P_j|E_m> ≈ D_j/2^N, the global magnetization M_z will be an HUO and satisfy the ETH in the subspaces ⊕_{|j|<k} H_j. Concretely, this will happen if the measurement statistics generated by the energy eigenstates |E_m> on the eigenvalues a_j satisfies a large deviation bound.

To build our intuition on what this means we evaluate H_2 in the two regimes allowed by our theorem: |j|/N ≪ 1 and (j* − |j|)/N ≪ 1. In the first case, calling x = |j|/N, we can Taylor-expand H_2((1 − x)/2) around x ≪ 1 to obtain

H_2\left(\frac{1-x}{2}\right) \approx \log 2 - \frac{x^2}{2} \quad \Rightarrow \quad \frac{D_j}{2^N} \approx e^{-j^2/(2N)} .

In the regime |j| ≈ j* we have a better way to estimate D_j. Indeed in such a regime D_j ≈ D_{j*}, which saturates the condition D_{j*}(D_{j*} − 1) ≈ 2^N; solving and taking the leading order in N we obtain D_{j*} ≈ 2^{N/2}. Moreover, using the expression in Eq. (40) we can find how D_j deviates from D_{j*}: expanding H_2((1 − |j|/N)/2) around j* we get

D_j \approx 2^{N/2}\, e^{-\alpha\, (|j| - j^*)}

for an N-independent constant α > 0. In summary, when N ≫ 10, the degeneracies decay in a Gaussian fashion around the center of the spectrum and exponentially near j*. This means that when we approach the thermodynamic limit N → ∞, the eigenvalues with higher magnetization will be exponentially suppressed in the system size. This is indeed what we expect to be true at the macroscopic level.

Figure 1. Scaling of the number of subspaces H_j which meet the condition D_j ≥ 1 + (2^N + 1)/D_j.

Example 3: Macroscopic equilibrium - Normal typicality and von Neumann's Quantum H-theorem

In this last example we investigate the connection of our theorem with the notion of macro-observables proposed by von Neumann in his work on the Quantum H-theorem [6, 11]. This in turn is strictly related to the notion of Normal typicality developed in a series of more recent works by Goldstein et al. [9, 12, 13]. Again, we start by decomposing our Hilbert space H as a direct sum of subspaces H_j. The index j runs over a finite number of values that identify different macroscopic properties of the system. One could say that it identifies different "macrostates", characterized by the expectation values of commuting macroscopic observables. In the original idea by von Neumann, in a classical system we measure position and momentum, which commute. His point was that there are some coarse-grained approximations of the actual positions and momenta which can be "rounded" to obtain a set of commuting macro-observables. Such a set of commuting macro-observables provides a decomposition of the Hilbert space H = ⊕_{j=1}^{n} H_j, where the index j runs over all the possible different macrostates. Each one of these spaces H_j is hugely degenerate, and we assume here that we can use our theorem for all of them. Using the concentration of measure phenomenon it can be shown [9, 12, 13] that for most t,

\langle \psi(t) | P_j | \psi(t) \rangle \approx \frac{D_j}{D}

for all j, for most Hamiltonians in the sense of Haar and for all ψ(0). The unitary for which this "most" holds is the one connecting the Hamiltonian eigenbasis to the basis giving the decomposition of the Hilbert space into "commuting macro-observables". Concretely, this happens for all ψ(0) if and only if <E_m|P_j|E_m> ≈ D_j/D for all j and m. Such a relation can be proven to hold in the same sense as before: for most Hamiltonians in the sense of Haar,

\langle E_m | P_j | E_m \rangle \approx \frac{D_j}{D} \qquad \forall j, m .

We can now see that the connection of these ideas with ETH is unraveled by our Theorem 1 and by the notion of HUO. Indeed, using our theorem, we can write

|\langle j, s | E_m \rangle|^2 = \frac{\langle E_m | P_j | E_m \rangle}{D_j} .

Therefore

|\langle j, s | E_m \rangle|^2 \approx \frac{1}{D} .

From this we conclude, for most Hamiltonians in the sense of Haar, that the basis {|j, s>} which diagonalizes all the "commuting macro-observables" giving the decomposition H = ⊕_j H_j is a Hamiltonian Unbiased Basis (HUB). Moreover, thanks to the fact that each subspace H_j is highly degenerate and that the decomposition H = ⊕_j H_j is generated by macro-observables, this proves that all macro-observables built in this way are HUO. Again, provided certain mild assumptions, which have been discussed in the main text, are satisfied, this guarantees that they satisfy ETH.
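The normal-typicality input used above, <E_m|P_j|E_m> ≈ D_j/D for a Haar-random eigenbasis, is also easy to check numerically. The sketch below draws a Haar-random unitary (via the QR decomposition of a complex Ginibre matrix), treats its columns as the energy eigenstates, and computes the overlaps with a fixed macro-subspace; the dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
D, Dj = 256, 64                                  # Hilbert-space and macro-subspace dims

# Haar-random eigenbasis via QR of a complex Ginibre matrix
G = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
Q, R = np.linalg.qr(G)
Q = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))  # fix column phases -> Haar measure

# <E_m|P_j|E_m> for every m, with P_j projecting onto the first Dj basis states
overlaps = (np.abs(Q[:Dj, :]) ** 2).sum(axis=0)
print(f"mean = {overlaps.mean():.4f}  (D_j/D = {Dj / D:.4f}),  std = {overlaps.std():.4f}")
```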
Escherichia coli Culture Filtrate Enhances the Growth of Gemmata spp.
Background Planctomycetes bacteria are known to be difficult to isolate, we hypothesized this may be due to missing iron compounds known to be important for other bacteria. We tested the growth-enhancement effect of complementing two standard media with Escherichia coli culture filtrate on two cultured strains of Gemmata spp. Also, the acquisition of iron by Gemmata spp. was evaluated by measuring various molecules involved in iron metabolism. Materials and Methods Gemmata obscuriglobus and Gemmata massiliana were cultured in Caulobacter and Staley's medium supplemented or not with E. coli culture filtrate, likely containing siderophores and extracellular ferrireductases. We performed iron metabolism studies with FeSO4, FeCl3 and deferoxamine in the cultures with the E. coli filtrate and the controls. Results and Discussion The numbers of G. obscuriglobus and G. massiliana colonies on Caulobacter medium or Staley's medium supplemented with E. coli culture filtrate were significantly higher than those on the standard medium (p < 0.0001). Agar plate assays revealed that the Gemmata colonies near E. coli colonies were larger than the more distant colonies, suggesting the diffusion of unknown growth promoting molecules. The inclusion of 10^−4 to 10^−3 M FeSO4 resulted in rapid Gemmata spp. growth (4–5 days compared with 8–9 days for the controls), suggesting that both species can utilize FeSO4 to boost their growth. In contrast, deferoxamine slowed down and prevented Gemmata spp. growth. Further studies revealed that the complementation of Caulobacter medium with E. coli culture filtrate and 10^−4 M FeSO4 exerted a significant growth-enhancement effect compared with that obtained with Caulobacter medium supplemented with E. coli culture filtrate alone (p < 0.0122). Moreover, the intracellular iron concentrations in G. obscuriglobus and G. massiliana cultures in iron-depleted broth supplemented with the E. coli filtrate were 0.63 ± 0.16 and 0.78 ± 0.12 μmol/L, respectively, whereas concentrations of 1.72 ± 0.13 and 1.56 ± 0.11 μmol/L were found in the G. obscuriglobus and G. massiliana cultures grown in broth supplemented with the E. coli filtrate and FeSO4. The data reported here indicated that both E. coli culture filtrate and FeSO4 act as growth factors for Gemmata spp. via a potentiation mechanism.
BACKGROUND
Bacteria of the genus Gemmata belong to the superphylum Planctomycetes-Verrucomicrobia-Chlamydia (PVC) and the phylum Planctomycetes (Wagner and Horn, 2006). Similarly to other members of Planctomycetes, Gemmata bacteria constitute one of the phylogenetically distinct major groups with increasing relevance to research in microbial ecology, molecular evolution, cell biology, and most recently, clinical microbiology (Fuerst, 2004; Drancourt et al., 2014; Aghnatios and Drancourt, 2016; van Niftrik and Devos, 2017). Indeed, some biologists now claim that Gemmata bacteria are nucleus-bearing prokaryotes and consider them evolutionary intermediates in the transition from prokaryote to eukaryote, owing to their amazingly complex cellular architectures that are typical of eukaryotes, such as those associated with cytosolic compartmentalization (Santarella-Mellwig et al., 2013; Sagulenko et al., 2014), sterol synthesis (Pearson et al., 2003; Gudde et al., 2019) and endocytosis-like macromolecular uptake (Lonhienne et al., 2010; Boedeker et al., 2017). These species form a remarkable gram-negative-staining group of bacteria that exhibit characteristic bud production and a division process independent of FtsZ via budding-mediated polar fission, which is different from that of ordinary bacteria, where FtsZ is the main molecule involved in cell division (Fuerst, 2004; Bernander and Ettema, 2010). Both G. obscuriglobus and G. massiliana are slow-growing, fastidious organisms, and G. obscuriglobus exhibits a 13-h doubling time (Lee et al., 2009). Gemmata bacteria require highly specific culture media and long incubation times (Schlesner, 1994; Winkelmann and Harder, 2009; Lage and Bondoso, 2012; Mishek et al., 2018). We recently found some Gemmata-like sequences in blood collected from two patients with febrile aplastic neutropenia and leukemia, although we failed to isolate any Planctomycetes from these blood samples (Drancourt et al., 2014). Accordingly, conventional automated blood culture detection systems are not appropriate for detecting this type of bacteria and are less sensitive than the culture of mock-infected blood on Caulobacter agar (Christen et al., 2018). Nevertheless, the resistance of these bacteria to most of the routinely used antibiotics (Cayrou et al., 2010a; Godinho et al., 2019) and their recently demonstrated association with humans (Cayrou et al., 2013; Drancourt et al., 2014) support the potential behavior of Gemmata organisms as opportunistic pathogens, and this hypothesis warrants further investigation (Aghnatios and Drancourt, 2016).
The culture-based isolation of microbial pathogens remains the gold standard in diagnostic microbiological laboratories, but it has been reported that the lack of complex factors/conditions in these laboratories contributes to the inability to isolate some fastidious bacterial species. Accordingly, the provision of environmental and nutritional conditions similar to those existing in the natural habitat where yet-uncultured/refractory bacteria can be detected might be an option for their potential isolation and culture (Kaeberlein et al., 2002; Vartoukian et al., 2010). Some yet uncultured planctomycetes, such as Planctomyces bekefii, possess stalks encrusted with iron oxide deposits (Schmidt et al., 1981), but the associated mechanism (active oxidation or passive deposition) has not been determined, and these findings suggest an important role for iron in these organisms. Our preliminary genome analysis of Gemmata obscuriglobus [UQM 2246 (GenBank: NZ_ABGO00000000.1)] and G. massiliana [GenBank: CBXA000000000.1], which are the only cultured representatives of the Planctomycetes genus Gemmata that have been formally described (Franzmann and Skerman, 1984; Aghnatios et al., 2015), using the Rapid Annotation Subsystem Technology server (Meyer et al., 2008), revealed that these bacteria do not contain molecules involved in the iron acquisition pathway, which might partially explain their notable fastidiousness when grown on culture media. We thus hypothesized that supplementation of the standard culture media (Caulobacter and Staley) for Gemmata spp. with Escherichia coli culture filtrate and iron (an "ecological" medium) could enhance their growth and isolation in clinical microbiology laboratories.
Bacterial Strains
Gemmata obscuriglobus DSM 5831T and G. massiliana DSM 26013T (CSUR P189T) were obtained from the Collection de Souches de l'Unité des Rickettsies (Marseille, France) and the German Collection of Microorganisms and Cell Cultures (Braunschweig, Germany). Both species were subcultured on Caulobacter medium DSMZ 595 or Staley's maintenance medium DSMZ 629, prepared as described on the DSMZ website. The bacteria were grown through aerobic incubation on these solid media at 30 °C for 7 to 14 days. The colonies were identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) analysis as previously described (Cayrou et al., 2010b).
Escherichia coli Culture Filtrate Preparation
Escherichia coli strain CIP 7624 (Collection de l'Institut Pasteur, Paris, France) was initially cultured on blood agar (BioMérieux, Marcy-l'Étoile, France) for 24 h at 37 °C and identified by MALDI-TOF-MS as previously described (Seng et al., 2009). The bacterial cell counts were calibrated to 10^12 colony forming units (CFUs)/mL using Kovas slide 10 (Hycor Biomedical, Germany) and microscopic examination. One milliliter of this suspension was then subcultured in 75-cm² culture flasks containing 49 mL of autoclaved GLD medium (1 g of glucose, 1.4 g of peptone, 0.3 g of NaCl, 20 mL of Hutner's salts (DSMZ 590), 10 mL of Staley vitamins (DSMZ 600, added after being filter-sterilized) and 970 mL of distilled water) and incubated aerobically with shaking at 250 rpm for 2 days at 30 °C to elicit the release of E. coli siderophores in a low-iron environment (Miethke and Marahiel, 2007). Sonication was performed to increase the release of E. coli siderophores, as previously described (Kwon and Jewett, 2015). Briefly, the cells were transferred to 1.5-mL microtubes and sonicated in a water bath sonicator (Bransonic Ultrasonic Cleaner Model 5510R-MT, Branson Ultrasonic Corporation) at ∼20 °C, a frequency of 20 kHz and an amplitude of 50% for 1 × 2 h. Subsequently, the sonication broth was filtered through a 0.2-µm filter (Sigma-Aldrich, Saint-Quentin-Fallavier, France) to obtain the E. coli filtrate, named solution A. Solution B was prepared in the same manner as solution A, with the exception that the GLD medium was supplemented with 10^−4 M ferrous sulfate heptahydrate (Sigma-Aldrich) and the culture was incubated for 3 days and then filtered. Solution B was prepared with the aim of inducing the production of extracellular iron reductase by E. coli in an iron-rich environment. As a negative control, autoclaved noninoculated GLD medium was manipulated under the same conditions as the inoculated culture flasks. Finally, 10 µL of solution A, solution B and the control GLD medium were seeded on blood, Staley's and Caulobacter solid agar to ensure sterility.
Culture of Gemmata spp. on Caulobacter and Staley's Liquid Media With E. coli Filtrate

Gemmata obscuriglobus and G. massiliana were cultured independently in five replicates in a final volume of Caulobacter liquid medium of 15 mL. In detail, five tubes contained 9 mL of Caulobacter liquid medium supplemented with 5 mL of E. coli filtrate (2.5 mL of solution A + 2.5 mL of solution B), and five tubes contained 9 mL of Caulobacter liquid medium supplemented with 5 mL of GLD medium (negative controls). Each tube (five test tubes and five control tubes) was inoculated with 1 mL of 3 × 10^2 CFUs/mL suspended in sterile distilled water (Bio-Rad Laboratories, Hercules, CA, United States). Moreover, two test tubes and two control tubes were inoculated with 1 mL of sterile distilled water (noninoculated tubes) and manipulated in parallel to the negative control tubes. The preparations were then incubated at 30 °C in an aerobic atmosphere for 7 days. At days 1, 2, 3, 4, and 7 postinoculation, each tube was shaken, and 1 mL of the broth was removed to obtain serial dilutions of 1, 1/10, 1/100, 1/1000, and 1/10000 in sterile distilled water for culture-based microbial enumerations. The CFUs were enumerated on 100-mm Petri dishes containing Caulobacter solid agar, and the colonies were counted using scanning software (ImageJ, Interscience, Saint-Nom-la-Bretèche, France). The means and standard errors were calculated at each time point (five replicates, n = 5). All experiments were reproduced independently with G. obscuriglobus in Staley's liquid medium, and these were performed in parallel to those conducted with G. massiliana. Five experiments with Caulobacter liquid medium (Gemmata spp. grown in standard iron-free medium compared with Staley's medium, which contains FeSO4) were performed independently. The iron metabolism in assay tubes containing Caulobacter liquid medium in the presence of E. coli culture filtrate under iron-repleted, iron-depleted and control conditions was studied. Ferrous sulfate heptahydrate (FeSO4·7H2O, Sigma-Aldrich), ferric chloride (FeCl3, Sigma-Aldrich) and deferoxamine mesylate (Desferal, Novartis, Rueil-Malmaison, France) were used to probe iron assimilation. Each of these components was added to a final concentration of 10^−4 M in a final volume of 15 mL (appropriately low concentrations of E. coli filtrate (5%), FeSO4 (0.2 M), FeCl3 (0.2 M) and deferoxamine (100 mg/mL), as determined by serial dilution of 0.2 M to 10^−4 M; see Table 1 for the oxidation-reduction potential (ORP) and pH at 25 °C obtained with all initial solutions used). In detail, the first tube contained 10^−4 M FeSO4, the second tube contained 10^−4 M FeCl3, the third tube contained 10^−4 M deferoxamine, the fourth tube contained 10^−4 M FeSO4 + 10^−4 M deferoxamine, the fifth tube contained 10^−4 M FeCl3 + 10^−4 M deferoxamine dissolved in Caulobacter liquid medium, and the last tube contained only Caulobacter liquid medium. In parallel, six other tubes contained 9 mL of Caulobacter liquid medium supplemented with 5 mL of E. coli culture filtrate (2.5 mL of solution A + 2.5 mL of solution B), and each of these components was added to a final concentration of 10^−4 M in a final volume of 15 mL, as described above. Subsequently, the 12 tubes were inoculated with 1 mL of 3 × 10^2 CFUs/mL suspended in Caulobacter liquid medium and incubated aerobically at 30 °C for 7 days. One noninoculated (negative control) tube for each of the 12 tubes was manipulated in parallel.
At days 1, 2, 3, 4, and 7 postinoculation, each tube was shaken, and 1 mL was removed to obtain serial dilutions of 1, 1/10, 1/100, 1/1000, and 1/10000 in sterile distilled water for culture-based CFU enumerations on Caulobacter solid agar. In addition, daily measurements of the ORP and pH at 25 °C (accumet AE150, Fisher Scientific) of each liquid medium were performed in parallel. Moreover, 2 × 50 µL of each liquid medium was adsorbed on blotting paper and deposited on solid medium in parallel to observe the growth time around the blotting paper. Furthermore, for each tube, 100-mm Petri dishes containing solid agar were prepared in parallel to monitor the growth on solid media (colony features, color and growth time in the presence or absence of E. coli filtrate), and these contained all the above-mentioned components at the same final concentrations. The Petri dishes prepared to contain E. coli filtrate were supplemented with 500 µL of solution A and 500 µL of solution B and dried at room temperature for 30 min in a laminar flow cabinet. The noninoculated (negative control) tubes and Petri dishes were manipulated in parallel. The bacteria were then counted using scanning software. G. massiliana and G. obscuriglobus were cultured independently in the same manner. The amount of intracellular iron was quantified after incubation for 1 and 7 days. Ten microliters of each liquid culture were inoculated on Caulobacter solid medium and Caulobacter solid medium complemented with each component as described above to monitor the bacterial features, survival and contamination. After incubation for 7 days, the liquid medium was centrifuged at 1.1 g for 5 min, and the pellet was washed three times with 10^−4 M deferoxamine. The concentration of iron was measured using a colorimetric ferrozine method as previously described.

Live E. coli (soaked in sterile blotting paper, used as a helper strain) was cultured in close proximity to G. obscuriglobus and G. massiliana to assess its ability to promote the growth of these Gemmata bacteria. The growth of Gemmata spp. in the presence of E. coli filtrate (prepared in simple Caulobacter liquid medium) was then assessed through plate assays as previously described by D'Onofrio et al. (2010). Briefly, 2 × 50 µL of each solution containing various molecules involved in iron metabolism, namely FeSO4, FeCl3, FeSO4 and deferoxamine at concentrations of 0.2, 10^−1, 10^−2, 10^−3, and 10^−4 M, with and without E. coli filtrate, was triturated and adsorbed on blotting paper to study the influence of these components on Gemmata growth in Caulobacter medium through plate assays.
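As a small illustration of the enumeration arithmetic used throughout these experiments, the helper below back-calculates CFU/mL of the source broth from a single countable plate; the plated volume of 0.1 mL and the example colony count are illustrative assumptions, not values stated in the protocol above.

```python
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml=0.1):
    """Back-calculate CFU/mL of the source broth from one plate count.

    dilution_factor: e.g. 1e-3 for the 1/1000 dilution.
    plated_volume_ml: volume spread per plate; 0.1 mL is an assumed,
    illustrative value.
    """
    return colony_count / (dilution_factor * plated_volume_ml)

# e.g. 78 colonies on the 1/1000 plate -> 7.8e5 CFU/mL in the assay tube
print(cfu_per_ml(78, 1e-3))
```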
RESULTS AND DISCUSSION
To the best of our knowledge, no member of Planctomycetes has been isolated from clinical samples, even though Planctomycetes bacteria have recently been detected in aplastic patients by PCR (Drancourt et al., 2014). This study aimed to develop an optimal medium for the culture and recovery of fastidious Gemmata bacteria in our laboratory using an "ecological" medium. Hence, this study was performed from a translational perspective for environmental/clinical microbiologists, and the results should not be translated to mechanistic studies conducted in clinical microbiology laboratories aiming to describe the iron metabolism of fastidious Gemmata. Thus, we reasoned that the enhancement in the growth of Gemmata obscuriglobus and Gemmata massiliana obtained by supplementation with filtrates of E. coli cultures and iron at low concentrations (5% filtrates and 10^−4 M FeSO4) reduces the doubling time of these fastidious bacteria, potentially via a potentiation mechanism. Indeed, our observations revealed that although the noninoculated (negative) controls remained sterile throughout the experiments, the number of G. obscuriglobus colonies on Caulobacter medium supplemented with E. coli filtrate (126 ± 13 colonies on day 1 and 787 ± 38 colonies on day 7) was significantly higher than that on the standard medium (62 ± 10 colonies on day 1 and 261 ± 27 colonies on day 7) (p < 0.0001). Similarly, the number of G. obscuriglobus colonies on Staley's medium supplemented with E. coli filtrate (75 ± 11 colonies on day 1 and 247 ± 20 colonies on day 7) was significantly higher than that on the standard medium (32 ± 6 colonies on day 1 and 82 ± 18 colonies on day 7) (p < 0.0001) (Figure 1). For G. massiliana, the number of colonies on the medium supplemented with E. coli filtrate (Caulobacter medium, 170 ± 29 colonies on day 1 and 694 ± 35 colonies on day 7; Staley medium, 74 ± 12 colonies on day 1 and 246 ± 21 colonies on day 7) was significantly higher than that on the standard medium (Caulobacter medium, 89 ± 11 colonies on day 1 and 329 ± 37 colonies on day 7, p < 0.0001; Staley medium, 54 ± 8 colonies on day 1 and 148 ± 17 colonies on day 7, p < 0.0001) (Figure 2). Altogether, a significantly higher number of Gemmata spp. colonies was obtained after enrichment of the reference culture medium with E. coli filtrate (p < 0.0001). Surprisingly, the growth of Gemmata spp. on Caulobacter medium supplemented with E. coli filtrate was improved compared with that on Staley's medium supplemented with E. coli culture filtrate (Figures 1, 2), even though Staley's medium contains more components, such as Staley's vitamins (see medium DSMZ 600) and Hutner's salts (see medium DSMZ 590), which include 99 mg/L FeSO4. These observations are consistent with the fact that many planctomycetes grow better in nutrient-poor (oligotrophic) medium (Staley, 1973; Schlesner, 1994). In addition, not all Staley vitamins are needed for optimal growth, as noted in a previous study (Mishek et al., 2018). To better understand the mechanism associated with the improvement in growth obtained with the addition of E. coli culture filtrate, iron-free Caulobacter medium (which contains fewer nutrients than Staley's medium) was retained as the baseline for further study of iron acquisition by Gemmata spp.
Indeed, this study was suggested to us by the marked diversity of Planctomycetes lineages, including Gemmata-Isosphaera, Planctomyces, Phycisphaerae, Pirellula-Rhodopirellula-Blastopirellula and the "OM190" lineage, detected in iron-hydroxide deposits in association with other bacteria that synthesize bacterioferritin, which captures and stores ferric iron. The high diversity of Planctomycetes in these microbial-rich environments contrasts with the restricted diversity of Planctomycetes in some other environments, which suggests the existence of an iron-based cooperation between ordinary bacteria such as Proteobacteria (E. coli live in the human gut in association with Gemmata spp.; Cayrou et al., 2013) and members of Planctomycetes (van Niftrik and Jetten, 2012; Storesund and Øvreås, 2013). Consistent with this hypothesis, agar plate assays revealed that the Gemmata colonies near E. coli colonies are larger than those farther from E. coli colonies, which suggests the diffusion of unknown molecules that serve as potential growth factors for Gemmata spp. (Figure 3). In addition, the impregnation of FeSO₄ at concentrations ranging from 10⁻⁴ to 10⁻³ M in blotting paper or solid agar plates resulted in rapid Gemmata spp. growth around the nitrocellulose disks, which was detected on days 4 and 5 (Figure 4A), whereas small colonies did not begin to appear until days 8 and 9 in more distant areas of the disk (Figure 4B). This effect was observed with both Gemmata massiliana and Gemmata obscuriglobus, even though a more dramatic effect was obtained with Gemmata massiliana. This finding suggests that both species can use iron under aerobic conditions. FeSO₄ at concentrations ranging from 10⁻⁴ to 10⁻³ M promotes greater Gemmata spp. growth than FeCl₃ at the same concentration; however, 0.2 to 10⁻¹ M FeCl₃ and 0.2 to 10⁻¹ M iron are toxic for both species. The finding that deferoxamine slows down and prevents the growth of Gemmata spp. suggests that iron improves Gemmata spp. growth, as indicated in Figures 5, 6.
FIGURE 1 | G. obscuriglobus growth in standard Caulobacter medium (gray bar), standard Staley's medium (green bar), Caulobacter medium supplemented with E. coli filtrate (yellow bar) and Staley's medium supplemented with E. coli filtrate (red bar). The number of G. obscuriglobus colonies per milliliter (Y axis) on solid agar medium was monitored over a 7-day period (X axis).
FIGURE 2 | G. massiliana growth in standard Caulobacter medium (gray bar), standard Staley's medium (green bar), Caulobacter medium supplemented with E. coli filtrate (yellow bar) and Staley's medium supplemented with E. coli filtrate (red bar). The number of G. massiliana colonies per milliliter (Y axis) on solid agar medium was monitored over a 7-day period (X axis).
Iron is a trace metal involved in many crucial biological processes as a component of metalloproteins and serves as a cofactor or structural element for enzymes needed for bacterial survival and growth (Schalk et al., 2011). Iron found in soil, sediments and, more rarely, ocean water (Andrews et al., 2003) is extracted from the environment and transported into a bacterial cell by siderophores, which are repressed in an iron-rich environment. Additionally, environmental ferric iron must be reduced into ferrous iron by extracellular bacterial reductase for assimilation by bacteria (Vartivarian and Cowart, 1999; Guan et al., 2001; Miethke and Marahiel, 2007; D'Onofrio et al., 2010). The ferric uptake regulator protein controls iron acquisition through the ferrous iron-mediated repression of iron-regulated promoters because an excess of intracellular iron induces the production of reactive oxygen species via the Fenton reaction (Escolar et al., 1999). Therefore, several bacteria lacking siderophores depend on other bacteria to provide them with iron (Reeves et al., 1983; Posey and Gherardini, 2000; D'Onofrio et al., 2010), which partly explains the fastidiousness of these bacteria when grown on a synthetic medium (D'Onofrio et al., 2010). Accordingly, our observations revealed that the complementation of Caulobacter medium with E. coli culture filtrate and 10⁻⁴ M FeSO₄ exerted a high growth-enhancement effect (G. obscuriglobus, 189 ± 22 colonies on day 1 and 1,091 ± 53 colonies on day 7; G. massiliana, 248 ± 19 colonies on day 1 and 1,029 ± 32 colonies on day 7) compared with that obtained with Caulobacter medium supplemented with E. coli filtrate alone (G. obscuriglobus, 134 ± 17 colonies on day 1 and 783 ± 31 colonies on day 7, p < 0.0016; G. massiliana, 166 ± 18 colonies on day 1 and 713 ± 27 colonies on day 7, p < 0.0122) (Figures 5, 6). The intracellular iron concentrations in G. obscuriglobus and G. massiliana cultured in an iron-depleted broth supplemented with E. coli filtrate were 0.63 ± 0.16 µmol/L and 0.78 ± 0.12 µmol/L, respectively, whereas concentrations of 1.72 ± 0.13 and 1.56 ± 0.11 µmol/L were found in G. obscuriglobus and G. massiliana grown in broth supplemented with E. coli filtrate and FeSO₄. Under the other culture conditions, the iron concentrations in G. obscuriglobus and G. massiliana were 0.66 ± 0.17 and 0.52 ± 0.14 µmol/L, respectively. Hence, the addition of E. coli culture filtrate was found to act as a growth-promoting factor, and this finding raises questions regarding the nature of unknown growth-promoting factors in E. coli culture filtrate that improve the iron metabolism in microbial communities (D'Onofrio et al., 2010). In contrast, some siderophores produced by certain bacteria, such as deferoxamine by Streptomyces, could slow down and inhibit the growth of Gemmata and lead to the inability to isolate these bacteria via chelating iron. As indicated in Figures 5, 6, E. coli culture filtrate might contain siderophores that have a higher affinity for iron than deferoxamine, which suggests that E. coli siderophores are able to shift the balance between deferoxamine and iron and make iron more available for cell growth. Additionally, our experiments revealed the aerobic oxidation of ferrous iron (the color of the Caulobacter liquid medium turned from light yellow to a color similar to that of iron rust after the addition of 7.5 µL of iron (to obtain a concentration of 0.2 M) at neutral pH (7.24)), and it is possible that E. coli filtrate contains certain molecules, such as the ferrireductase enzyme, that can reduce ferric iron to promote iron uptake, as shown in Figure 7. The pH and ORP measured for all the media over the 7-day experiment ranged from 7 to 6, which suggests that the predominant form of iron is ferric iron (Supplementary Figures S1, S2). Both species can adapt to various culture conditions, including iron-replete and iron-depleted conditions, and regulate the pH under neutral conditions. The analysis of the features of the colonies on solid Caulobacter agar complemented or not with E. coli filtrate (500 µL of solution A and 500 µL of solution B), FeSO₄, FeCl₃ and deferoxamine showed that the colonies grown on iron-enriched Caulobacter broth were bigger and redder in color than the colonies grown under the other culture conditions, which were small and pale pink in color. Although this phenomenon was observed with both G. obscuriglobus and G. massiliana, the effect on G. obscuriglobus was more dramatic, and the growth times to achieve visible colony formation in the presence of E. coli filtrate and FeSO₄ (5-7 days for G. obscuriglobus and 6-7 days for G. massiliana) were shorter than those in media supplemented with FeSO₄ or FeCl₃ without E. coli filtrate (8-9 days for G. massiliana). Additionally, the bacteria showed moderate growth on Caulobacter solid agar after preincubation in broth containing deferoxamine (Figures 5, 6). Moreover, a slight growth-enhancement effect was observed in medium supplemented with E. coli filtrate and 10⁻⁴ M FeCl₃, which might suggest that the presence of Cl slowed Gemmata growth compared with the presence of sulfate in FeSO₄ because planctomycetes possess many sulfatases.
FIGURE 3 | Live E. coli promotes the growth of Gemmata massiliana. Ferric and ferrous iron at 0.2 M are toxic to Gemmata, and 10⁻⁴ M deferoxamine prevents bacterial growth.
FIGURE 4 | Gemmata massiliana showed improved growth on blotting paper impregnated with 10⁻³ M FeSO₄ (50 µL) nitrocellulose disks at day 4 (A), whereas small colonies did not begin to appear until days 8 and 9 in more distant areas of the disks (B).
FIGURE 5 | G. obscuriglobus growth in standard Caulobacter medium supplemented with E. coli filtrate, FeSO₄, FeCl₃, and deferoxamine and control medium. The number of G. obscuriglobus colonies per milliliter (Y axis) on solid agar medium was monitored over a 7-day period (X axis).
These data suggest that in the environment, as well as in human microbiota, Gemmata organisms might rely on neighboring bacteria to obtain the required amount of ferrous iron. In contrast, axenic media limit the ability of Gemmata bacteria to acquire iron because ferrous iron oxidizes into ferric iron at pH values higher than 5, which results in a very low amount of available ferrous iron in axenic media. However, the growth-enhancing effect of filtrate and iron supplementation on the two species might only be explained by a potentiation mechanism. These results are encouraging, but further studies are needed to identify the potential growth factors secreted by E. coli via their purification and freeze-drying and to thus define approaches for enriching planctomycetes culture media.
In conclusion, our results indicate that not only ferrous iron but also E. coli culture filtrate, as a source of unknown growth factors that promote the rapid growth of Gemmata species, enhances Gemmata growth and can thus be used to improve the empirical culture media for Planctomycetes, as illustrated for Gemmata species in this study. This strategy involving the design of specific culture media helps improve the culture of fastidious bacteria and allows researchers to design specialized media from an empirical medium. Similarly, future investigation of the nutrients required by Gemmata organisms might aid the design of new culture media for their recovery from both environmental samples and host microbiota (Drancourt et al., 2014).
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.
AUTHOR CONTRIBUTIONS
OK and RA performed the experiments and drafted the manuscript. SG and MD interpreted the data and drafted the manuscript.
FUNDING
This work was supported by the French Government under the Investissements d'Avenir (Investments for the Future) program managed by the Agence Nationale de la Recherche (ANR, fr: National Agency for Research) (reference: Méditerranée Infection 10-IAHU-03). This work was supported by Région Provence Alpes Côte d'Azur and the European fund FEDER PA 0000319 IHUBIOTK.
ACKNOWLEDGMENTS
We acknowledge Saber Khelaifia, Safiatou Fall, and Marion Bonnet for the technical help provided. OK benefits from a Ph.D. grant provided by IHU Méditerranée Infection, Marseille, France. We acknowledge the contribution provided by Magdalen LARDIERE, who reviewed the English language of the manuscript.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmicb.2019.02552/full#supplementary-material
FIGURE S1 | Daily oxidoreduction potential (ORP) of Gemmata obscuriglobus. The Y axis represents the ORP value, and the X axis represents the day of measurement.
FIGURE S2 | Daily oxidoreduction potential (ORP) of Gemmata massiliana. The Y axis represents the ORP value, and the X axis shows the day of measurement.
Creatine O'Clock: Does Timing of Ingestion Really Influence Muscle Mass and Performance?
It is well-established that creatine supplementation augments the gains in muscle mass and performance during periods of resistance training. However, whether the timing of creatine ingestion influences these physical and physiological adaptations is unclear. Muscle contractions increase blood flow and possibly creatine transport kinetics, which has led some to speculate that ingesting creatine in close proximity to resistance training sessions may lead to superior improvements in muscle mass and performance. Furthermore, creatine co-ingested with carbohydrates, or with a mixture of carbohydrates and protein that increases insulin, enhances creatine uptake. The purpose of this narrative review is to (i) discuss the purported mechanisms and variables that possibly justify creatine timing strategies, (ii) critically evaluate research examining the strategic ingestion of creatine during a resistance training program, and (iii) provide future research directions pertaining to creatine timing.
INTRODUCTION
Creatine (α-methyl guanidino-acetic acid) is endogenously synthesized, primarily in the kidneys and liver, in reactions involving the amino acids arginine, glycine, and methionine (Wyss and Kaddurah-Daouk, 2000; Ostojic and Forbes, 2022). Alternatively, creatine can be exogenously consumed through the ingestion of commercially manufactured creatine, with the most common type being creatine monohydrate (Kreider et al., 2017). Through the combination of endogenous synthesis and/or exogenous intake, creatine enters the systemic circulation and subsequently gains entry into energetically demanding tissues (e.g., skeletal muscle) through a creatine-specific transporter (Persky and Brazeau, 2001). Exercise-induced muscle contractions increase skeletal muscle blood flow (i.e., hyperaemia) (Tipton et al., 2001), which may augment creatine kinetics, leading to greater intramuscular creatine accumulation over time (Harris et al., 1992; Persky and Brazeau, 2001; Forbes and Candow, 2018; Ribeiro et al., 2021). Co-ingestion of creatine with carbohydrates and protein also appears to increase creatine accumulation in muscle (Steenge et al., 1998, 2000), possibly due to insulin-stimulated sodium-potassium (Na⁺-K⁺) pump activity (Ewart and Klip, 1995).
Creatine supplementation typically results in elevated intramuscular creatine stores (Harris et al., 1992); increased concentrations of intramuscular creatine lead to improvements in muscle mass and performance (i.e., strength) (Branch, 2003; Candow et al., 2014a; Devries and Phillips, 2014; Lanhers et al., 2015, 2017; Chilibeck et al., 2017; Paiva et al., 2020; Forbes et al., 2021a). Several reported mechanisms support these improvements, including increased high-energy phosphate metabolism, H⁺ ion buffering, calcium exchange across the sarcoplasmic reticulum, glycogen resynthesis, cell swelling, satellite cell and myogenic transcription factor activity, and decreases in muscle protein degradation, inflammation, and oxidative stress (Chilibeck et al., 2017; Candow et al., 2019a). These mechanisms may help explain why creatine supplementation during a resistance training program has been consistently shown to increase measures of muscle mass and performance compared to resistance training alone (Chilibeck et al., 2017; Forbes et al., 2021a). Based on the increase in skeletal muscle hyperemia and creatine transport kinetics in response to muscle contractions, speculation exists that ingesting creatine in close proximity to resistance training sessions may further augment muscle mass and performance over time. The purpose of this narrative review is to (i) discuss the purported mechanisms and variables that possibly justify creatine timing strategies, (ii) critically evaluate research examining the strategic ingestion of creatine during a resistance training program, and (iii) provide future research directions pertaining to creatine timing.
PURPORTED MECHANISMS TO JUSTIFY CREATINE TIMING STRATEGIES
The purported mechanisms underlying the potential effects of creatine timing to augment resistance training adaptations are currently only hypothetical (Figure 1). First, one could speculate that exercise-induced muscle hyperemia could favor creatine delivery to skeletal muscles, possibly affecting both uptake and retention (Forbes and Candow, 2018; Ribeiro et al., 2021). Therefore, pairing the exercise-mediated increase in blood flow with the rise in circulating creatine following supplementation could, theoretically, be beneficial. Despite being an interesting concept, coupling of these events must consider both the time taken for creatine to be digested, absorbed, and reach maximum concentration in circulation and the magnitude and duration of muscle hyperemia. Peak plasma concentration (Cmax) following creatine supplementation (∼5 g) typically occurs ≤2 h after ingestion, and creatine remains elevated in circulation for ∼4 h (area under the concentration-time curve) (Harris et al., 1992). Meanwhile, exercise has the potential to increase blood flow up to 100-fold from rest; however, the magnitude and duration of this effect are modulated by factors such as exercise type, volume, and intensity. Furthermore, blood flow is typically restored to resting values within 30 min after the cessation of exercise, although it can remain elevated for much longer periods of time (Joyner and Casey, 2015). In the context of enhancing resistance training adaptations, and considering a typical training session that lasts ∼70 min (Hackett et al., 2013), pre-exercise creatine ingestion would, in theory, be more conducive to matching exercise-induced increases in blood flow with the increase in blood creatine concentration, theoretically favoring uptake and retention, as compared to post-exercise creatine ingestion. In addition, it is possible that digestion and absorption of creatine would be reduced during exercise due to reduced splanchnic blood flow resulting from exercise hyperemia (Perko et al., 1998).
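To illustrate this timing argument numerically, the toy Python model below sketches plasma creatine after a single oral dose with a one-compartment absorption/elimination curve and overlays a post-exercise hyperemia window; the rate constants and window lengths are rough assumptions chosen only to reproduce the qualitative features cited above (peak ≤2 h, elevation ∼4 h, blood flow back near baseline ∼30 min after a ∼70 min session), not measured values.

import numpy as np

# Toy one-compartment (Bateman) curve for plasma creatine after a single oral dose.
# ka and ke are assumed rate constants chosen only so the peak falls before 2 h
# and the curve stays elevated for roughly 4 h, as described in the text.
ka, ke = 1.5, 0.6                 # absorption / elimination constants (1/h), assumed
t = np.linspace(0.0, 6.0, 601)    # hours after ingestion
conc = ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))  # arbitrary units

print(f"modelled peak at ~{t[np.argmax(conc)]:.1f} h after ingestion")

# If the dose is taken at the start of a ~70 min session and hyperemia persists
# ~30 min after exercise, the elevated-blood-flow window spans roughly 0-1.7 h.
in_window = (t >= 0.0) & (t <= 70.0 / 60.0 + 0.5)
print(f"share of creatine exposure inside that window: {conc[in_window].sum() / conc.sum():.0%}")

Under these assumed constants, the modelled peak falls near 1 h and slightly under half of the total exposure coincides with the hyperemia window, which is the intuition behind favoring pre-exercise ingestion in this hypothesis.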
Exercise can also modulate the Na⁺-K⁺ pump activity, which is another purported mechanism to justify the theoretical importance of creatine timing. Because creatine transport occurs against a Na⁺-dependent gradient (via a Na⁺-creatine co-transport system) (Odoom et al., 1996), exercise-mediated upregulation of skeletal muscle Na⁺-K⁺ pump activity may contribute to creatine transport and subsequent creatine accumulation in muscle. Similar to the effects on hyperemia, pre-exercise creatine supplementation and the resulting elevated concentration in circulation could coincide with maximal Na⁺-K⁺ pump activation during exercise, though the latter may last for much longer periods, meaning that post-exercise creatine may derive muscle benefits from the same mechanism (Holloszy, 2005). In addition, there is evidence to suggest that chronic exercise may upregulate Na⁺-K⁺ pump activity, suggesting that exercise per se may be key to optimizing increases in muscle creatine stores; nonetheless, in theory, creatine ingested in close proximity to training (immediately before and/or after sessions) may be ideal. These mechanisms are at least partially supported by seminal work conducted by Harris et al. examining the interaction of exercise and creatine uptake (Harris et al., 1992). Briefly, five participants performed 1 h of maximal unilateral cycling during a creatine loading phase (20-30 g·day⁻¹ in 4 to 6 separate doses for 3.5-7 days). The non-exercised leg acted as a control. Creatine supplementation significantly increased total creatine content following supplementation in both legs, but there was a 25.7% increase in the control leg and a 37.3% increase in the exercised leg (non-exercised leg: +30.4 mmol/kg of dry muscle; exercised leg: +44.1 mmol/kg of dry muscle). These initial findings provided evidence that exercise-induced muscle contractions enhance creatine uptake, at least over the short term. Subsequent work by Robinson et al. (1999) further corroborated these findings. Participants performed single-leg cycling exercise to exhaustion. Greater total creatine accumulation was achieved in the exercised limb (∼68% greater accumulation in total creatine content) following 5 days of creatine supplementation (5 g·day⁻¹). It is presently unknown whether this would impact creatine content over longer periods of time or once "saturation" has been achieved.
Creatine kinetics may be modified if creatine is co-administered with insulin (Steenge et al., 1998, 2000). Steenge et al. (1998) infused seven male participants with various doses of insulin after creatine supplementation (12.4 g). Insulin enhanced creatine accumulation but only at high physiological concentrations. They also monitored blood flow during the experiment and noted that the enhanced uptake of creatine was likely associated with the insulin-mediated effect on muscle creatine kinetics. Furthermore, insulin can also increase the Na⁺-K⁺ pump activity (Ewart and Klip, 1995), which could theoretically increase creatine transport, as discussed earlier. In a series of studies, Green A. L. et al. (1996) showed that carbohydrates co-ingested with creatine enhanced muscle creatine uptake (∼60%) compared to creatine alone, most likely due to the carbohydrate-induced secretion of insulin (Steenge et al., 1998, 2000). Steenge et al. (2000) compared a mixture of protein (50 g) and carbohydrates (47 g) co-ingested with creatine and found a significantly greater uptake compared to creatine alone. Greenwood et al. (2003) used a lower dose of carbohydrate (18 g) with creatine (5 g) and reported similar findings, that is, creatine uptake was significantly greater following co-ingestion compared to creatine alone. Similarly, Pittas et al. (2010) found that a lower dose of protein mixed with carbohydrates (14 g protein hydrolysate, 7 g leucine, 7 g phenylalanine, and 57 g dextrose) co-ingested with creatine (5 g) augmented whole-body creatine retention over a 24 h period compared to a higher dose of carbohydrate (95 g) ingested with creatine; however, the uptake into skeletal muscle was not determined. Collectively, it appears that creatine co-ingested with carbohydrates and/or a mixture of carbohydrates and protein can elevate creatine stores and whole-body retention over the short term, and therefore any additional benefits associated with the timing of creatine supplementation may also be partially dependent on the co-ingestion with other macronutrients.
FIGURE 1 | Exercise-induced muscle hyperaemia paired with creatine absorption may alter creatine uptake and retention. Creatine peaks <2 h after ingestion and remains elevated for ∼4 h, while blood flow may return to baseline within 30 min after exercise; based on this mechanism, creatine before exercise may be ideal. Exercise modulates Na⁺-K⁺ pump activity, therefore pairing creatine absorption with maximal pump activity may enhance creatine uptake. However, Na⁺-K⁺ activity is upregulated for much longer, therefore creatine in close proximity to exercise (either before or after) may be ideal compared to other times of the day. Creatine kinetics are modified by co-ingestion with carbohydrates alone or carbohydrates and protein. Therefore, timing of creatine may be partially dependent on the co-ingestion with macronutrients. Created with BioRender.com.
Furthermore, caffeine (1,3,7-trimethylxanthine) is a common ingredient in multi-ingredient compounds containing creatine (O'Bryan et al., 2020). However, there is a potential interference effect from the co-ingestion of caffeine and creatine (Trexler and Smith-Ryan, 2015) compared to creatine alone (Vandenberghe et al., 1996; Hespel et al., 2002; Harris et al., 2005), potentially due to gastrointestinal distress impacting creatine uptake (Harris et al., 2005; Quesada and Gillum, 2013) or via opposing effects on calcium kinetics at the sarcoplasmic reticulum (Hespel et al., 2002; Trexler and Smith-Ryan, 2015). Vandenberghe et al. (1996) examined the effects of 6 days of creatine loading (0.5 g/kg/day) with and without caffeine (5 mg/kg/day) on muscle PCr content and performance. Creatine enhanced isometric contraction performance, an effect that was completely abolished by the co-ingestion of caffeine. Interestingly, there were no differences with regard to muscle PCr increases (creatine: +4.3%; creatine and caffeine: +5.6%), suggesting that the interference effect is likely due to altered calcium kinetics. Furthermore, the co-ingestion of caffeine (3 mg/kg/day) and creatine (0.1 g/kg/day) during 6 weeks of resistance training resulted in similar gains in fat-free mass (air-displacement plethysmography), limb muscle thickness (ultrasound), and muscle strength and endurance compared to creatine and caffeine supplementation alone (Pakulak et al., 2021). However, creatine alone increased knee extensor muscle thickness, with no change over time when co-ingested with caffeine (Pakulak et al., 2021). In addition, a recent systematic review concluded that there is no ergogenic benefit or impairment when caffeine is co-ingested during a creatine loading period (Elosegui et al., 2022). Overall, it appears that caffeine co-ingested with creatine does not alter creatine uptake or creatine kinetics. However, there is some evidence that caffeine may blunt some of the ergogenic effects of creatine supplementation. Pragmatically, to limit a potential interference, caffeine may be ingested before and creatine after training.
In summary, there appear to be several factors that may influence the timing and uptake of creatine, including hyperemia, Na⁺-K⁺ pump activity, and insulin secretion. Future rigorously controlled experiments are required to substantiate these mechanisms and to better understand or predict the optimal time (if any) to ingest dietary creatine supplements.
RESEARCH INVESTIGATING THE TIMING OF CREATINE SUPPLEMENTATION
The first study to indirectly address whether the timing or strategic ingestion of creatine could influence the physiological adaptations to resistance training was performed by Cribb and Hayes (Cribb and Hayes, 2006). Trained recreational male bodybuilders who were consuming >1.8 g·kg⁻¹·day⁻¹ of dietary protein and had not taken ergogenic aids, including creatine monohydrate, for at least 12 weeks prior to the start of the study were enrolled. Using a single-blind strategy, participants were randomized to ingest 1 g·kg⁻¹·day⁻¹ of a multi-ingredient supplement (mixed in water) containing whey protein isolate (40 g), carbohydrate (glucose; 43 g), and creatine monohydrate (7 g) per 100 g serving either immediately before and immediately after each training session (PRE-POST group: n = 8; 21 ± 3 yrs, 82 ± 9 kg, 178 ± 5 cm) or in the morning (in the fasted state and prior to breakfast) and pre-sleep (in the postprandial state), provided >5 h before and after training (MOR-EVE group: n = 9; 24 ± 4 yrs, 78 ± 5 kg, 178 ± 2 cm), on training days (4 sessions per week for 10 weeks). On average, each participant consumed ∼12 g of creatine per day. All training sessions lasted ∼60 min and were performed between 3:00 and 6:00 p.m. After 10 weeks of supplementation and training, the PRE-POST group experienced a greater increase in intramuscular total creatine (+30.2 mmol/kg DM or 24.6% vs. +9.2 mmol/kg DM or 7.1%) and PCr concentrations (+13.1 mmol/kg DM or 16.8% vs. +1.9 mmol/kg DM or 2.4%) (assessed by muscle biopsies and histochemical analyses), whole-body lean tissue mass (assessed by dual-energy X-ray absorptiometry; DXA), muscle cross-sectional area of type IIa and IIx muscle fibers and total protein content (assessed by muscle biopsies and histochemical analyses), and muscle strength (assessed by 1-repetition maximum squat and bench press) compared to the MOR-EVE group. These results suggest that the ingestion of a creatine-containing supplement in close proximity to resistance training sessions has a greater effect on intramuscular creatine accumulation and measures of muscle morphology and strength compared to ingestion several hours before and after training sessions. Unfortunately, methodological issues with the study design preclude any direct conclusion about the efficacy of timed creatine ingestion. First, creatine supplementation alone was not assessed. There is evidence that the combination of creatine and whey protein increases measures of muscle mass and strength compared to whey protein or creatine alone in young and older adults after 6-10 weeks of resistance training (Burke et al., 2001; Candow et al., 2008). Further, it is well-established that protein supplementation and resistance training increase measures of muscle mass and strength in young adults (Morton et al., 2018). In addition, as previously mentioned, the combination of creatine and carbohydrate can result in greater intramuscular creatine accumulation compared to creatine supplementation alone. Second, no placebo (control) group was used, which negates a comparison between resistance training alone and the combination of the multi-ingredient compound containing creatine and resistance training. Third, while habitual dietary intake (total energy and macronutrient composition) was subjectively estimated through food recall records, no direct measure of dietary creatine intake was made.
The responsiveness to creatine supplementation is influenced by dietary sources of creatine (i.e., red meat and seafood) (Candow et al., 2019b). Finally, creatine was consumed twice daily, both before training (in the morning or immediately before training sessions) and after training (immediately after training sessions or pre-sleep), which further eliminates the ability to conclude when the optimal time is to consume a creatine-containing compound to increase muscle mass and performance. Importantly, no adverse effects were reported from consuming the creatine-containing compound.
In the most recent study examining creatine timing, 14 female athletes were randomized to supplement with creatine (0.3 g·kg⁻¹·day⁻¹ for 5 days followed by 0.03 g·kg⁻¹·day⁻¹ for 79 days) after performing resistance training sessions in the morning (n = 7; 26 ± 4 yrs, 65.3 ± 5.9 kg, 173.8 ± 6.5 cm; training occurred between 8:00 a.m. and 12:00 p.m.) or evening (n = 7; 23 ± 4 yrs, 63.2 ± 9.1 kg, 169.4 ± 7.5 cm; training occurred between 6:00 p.m. and 10:00 p.m.) on training days (3 days·week⁻¹ for 12 weeks). On non-training days, participants consumed the creatine at their leisure. After 12 weeks of supplementation and training, there was a significant increase in upper-body muscular power (assessed by medicine ball throw distance) and lower-body strength (assessed by 1-repetition maximum squat), with no differences between creatine ingestion strategies (Jurado-Castro et al., 2021). It is unknown how much dietary protein these participants were consuming per day prior to or during the intervention or whether they had consumed dietary products containing creatine prior to the start of the study. To date, only four studies have directly compared the effects of creatine immediately before (∼5 min) vs. immediately after (∼5 min) resistance training sessions on measures of muscle mass and performance (summarized in Table 1). Recently, Forbes et al. (2021b) used a within-participant design and randomized recreationally active participants [n = 10 (3 males, 7 females); 23 ± 5 yrs, 73.5 ± 10 kg, 174 ± 9 cm; who were consuming 1.5 g·kg⁻¹·day⁻¹ of protein and had not consumed dietary supplements containing creatine for 12 weeks before the start of the study] to ingest creatine (0.1 g·kg⁻¹·day⁻¹ or ∼7 g) immediately prior to performing unilateral elbow flexor and knee extensor resistance training (3-6 sets at 80% of baseline 1-repetition maximum) on one side of their body (2 days per week on alternating days) and creatine immediately after training the opposite side of their body (2 days per week on alternating days) for 8 weeks. Results showed that pre- and post-exercise creatine supplementation resulted in similar increases in elbow flexor and knee extensor muscle thickness (assessed by ultrasound) and strength (assessed by a 1-repetition maximum protocol) over time. Antonio and Ciccone (2013) compared the effects of 5 g of creatine immediately before resistance training sessions to 5 g of creatine immediately after training sessions in recreational male bodybuilders (n = 19; 23 ± 3 yrs, 80 ± 10 kg, 166 ± 23 cm) who were consuming ∼1.9 g·kg⁻¹·day⁻¹ of protein and had not consumed dietary supplements containing creatine monohydrate for 4 weeks prior to the start of the study. On non-training days, participants consumed creatine at their leisure. After 4 weeks of training, changes in fat-free mass (assessed by air-displacement plethysmography) and bench press strength (assessed by 1-repetition maximum) were similar between creatine ingestion strategies. Candow et al. (2014b) examined the effects of creatine (0.1 g·kg⁻¹·day⁻¹ or ∼8 g) immediately before vs. immediately after resistance training sessions (3 days·week⁻¹ for 12 weeks) in healthy, untrained older adults [creatine-before group: n = 11 (4 males, 7 females); 56 ± 4 yrs, 77 ± 19 kg, 167 ± 7 cm; creatine-after group: n = 11 (5 males, 6 females); 55 ± 2 yrs, 79 ± 14 kg, 170 ± 10 cm] who were consuming ∼1.4 g·kg⁻¹·day⁻¹ of protein and had not consumed products containing creatine for ≥6 weeks prior to the start of the study.
After 12 weeks of creatine supplementation and training, changes in fat-free mass (assessed by air-displacement plethysmography), regional (limb) muscle thickness (assessed by ultrasound), strength (leg press and chest press; assessed by 1-repetition maximum), and muscle protein catabolism (measured by urinary 3-methylhistidine excretion) were similar between creatine ingestion strategies. Collectively, results across these studies indicate that creatine supplementation immediately before and immediately following resistance training sessions (5-12 weeks) are both viable and safe strategies to augment the gains in muscle mass and performance over time. However, the respective study designs do not provide definitive clarification as to whether the timing of creatine supplementation is important. For example, no measures of intramuscular creatine content, habitual dietary intake of creatine, or assessment of muscle fiber morphology were made. These are major limitations in establishing the efficacy of creatine timing, as initial (pre-supplementation) intramuscular creatine stores, dietary intake of creatine, age, sex, and type II muscle fiber content and size play an important role in determining an individual's responsiveness to creatine supplementation (Syrotuik and Bell, 2004; Candow et al., 2019b). For example, a meta-analysis performed by Chilibeck et al. (2017) found greater muscle creatine content in the vastus medialis in younger compared to older adults. Furthermore, no sex sub-analysis was made in the Candow et al. (2014b) or Forbes et al. (2021b) studies, which may have influenced their findings (Dos Santos et al., 2021; Smith-Ryan et al., 2021). There is some evidence, albeit questionable, that females may not respond as favorably to creatine supplementation compared to males, possibly due to females having higher pre-supplementation intramuscular creatine levels (Dos Santos et al., 2021). The uncertainty may largely be impacted by the timing of measurements around the menstrual cycle, which has not been previously accounted for and may have influenced the results (Smith-Ryan et al., 2021). In addition, participants in the studies by Antonio and Ciccone (2013) and Jurado-Castro et al. (2021) consumed creatine on non-training days, which may have also influenced their findings. Perhaps the biggest limitation across the reported studies was that no placebo (control) groups were incorporated into the respective designs, which eliminates the ability to determine whether resistance training and/or the timing of creatine ingestion was the driving force behind the gains in measures of muscle mass and strength over time.
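Because the protocols above express doses relative to body mass, a one-line helper makes the arithmetic explicit; the body masses used below are simply the group means quoted in the text, taken as worked examples rather than individual prescriptions.

def daily_dose_g(body_mass_kg: float, dose_g_per_kg: float) -> float:
    """Absolute daily creatine dose (g) from a relative prescription."""
    return body_mass_kg * dose_g_per_kg

# Worked examples with group-mean body masses quoted above:
print(daily_dose_g(73.5, 0.1))    # ~7.4 g/day (0.1 g/kg/day, Forbes et al., 2021b)
print(daily_dose_g(65.3, 0.3))    # ~19.6 g/day loading phase (Jurado-Castro et al., 2021)
print(daily_dose_g(65.3, 0.03))   # ~2.0 g/day maintenance phase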
In the only study to include a placebo (control) group, Candow et al. (2015) determined whether the strategic ingestion of creatine (immediately before vs. immediately after resistance training sessions) influenced changes in muscle mass and strength from resistance training over time. Non-resistance-trained, healthy older adults who were consuming ∼0.9 g·kg⁻¹·day⁻¹ of protein and had not consumed dietary products containing creatine for at least 12 weeks prior to the start of the study were enrolled. Participants were randomized to ingest creatine (0.1 g·kg⁻¹·day⁻¹) immediately before and placebo (0.1 g·kg⁻¹·day⁻¹ of corn-starch maltodextrin) immediately after each session [n = 15 (7 males, 8 females); 53 ± 3 yrs, 77.2 ± 15.6 kg, 170.1 ± 9.9 cm; ∼8 g·day⁻¹ of creatine], placebo before and creatine immediately after training sessions [n = 12 (7 males, 5 females); 56 ± 4 yrs, 87.9 ± 20.1 kg, 173.4 ± 8.3 cm; ∼9 g·day⁻¹ of creatine], or placebo immediately before and after training sessions [n = 12 (3 males, 9 females); 57 ± 7 yrs, 77.9 ± 22.8 kg, 170.5 ± 10.8 cm]. Supervised resistance training sessions were performed 3 days per week for 32 weeks. After supplementation and training, individuals supplementing with creatine (independent of the timing of ingestion) experienced similar increases in lean mass (assessed by DXA) and muscle strength (leg press and chest press; assessed by a one-repetition maximum protocol). Furthermore, creatine supplementation (before and after training sessions) resulted in greater gains in upper- and lower-body strength compared to placebo. Interestingly, the increase in lean mass in the post-exercise creatine group (pre: 46.6 ± 10.8 kg; post: 49.6 ± 11.8 kg) was significantly greater than in the placebo group (pre: 41.7 ± 8.7 kg; post: 42.2 ± 9.1 kg). There were no differences between the pre-exercise creatine group (pre: 43.6 ± 10.5 kg; post: 45.3 ± 12.7 kg) and the placebo group. This is the only line of evidence to suggest that post-exercise creatine supplementation may offer a slight advantage in regard to muscle accretion compared to pre-exercise creatine supplementation in relation to resistance training alone. These results provide further evidence that the timed ingestion of creatine (immediately before or immediately after resistance training sessions) leads to similar gains in measures of muscle mass and strength across different age ranges. Extrapolation of findings from this study is also limited, as no measures of pre-supplementation intramuscular creatine content, habitual dietary intake of creatine, or assessment of muscle fiber morphology were made.
CONCLUSIONS AND FUTURE DIRECTIONS
It is becoming quite clear that creatine supplementation (∼5-9 g·day⁻¹ for up to 32 weeks) during a resistance training program is a well-tolerated (no adverse events reported) and effective strategy to augment measures of muscle mass and strength. To date, it appears that pre-exercise (several hours before or immediately prior to training sessions) and post-exercise (immediately following or several hours after training sessions) creatine ingestion produce similar muscle benefits in young and older adults. Unfortunately, the limited number of studies that have been performed have potential methodological limitations (primarily the lack of a placebo control), eliminating the ability to determine when the optimal time (if any) is to consume creatine to maximize muscle and performance gains. To truly determine whether there is an optimal time, in relation to training, to consume creatine, future research is required to directly compare the effects of creatine supplementation several hours before, immediately before, intra-workout, immediately after, and several hours after training sessions. It is currently unknown whether the strategic (timed) ingestion of creatine differs from consuming creatine sporadically throughout the day on resistance training days, or whether advantages exist in consuming creatine only on training days vs. daily (including rest days) during a resistance training program. Further, whether the timed co-ingestion of creatine with other compounds such as carbohydrates and protein, compared to creatine alone, influences muscle mass and performance remains to be determined. Finally, research is needed to directly determine the time course for accelerated creatine uptake (if any) during a resistance training program and whether sex differences exist regarding creatine ingestion strategies.
To conclude, the current body of research does not support prescribing timed creatine supplementation in relation to long(er)-term training or in combination with other ingredients.
AUTHOR CONTRIBUTIONS
DC and SF: conceptualization. All authors: original draft and revised preparation. All authors have read and agreed to the published version of the manuscript.
FUNDING
Brandon University Research Committee (BURC) provided a knowledge mobilization grant to fund this work.
Preventive Effect of Lactobacillus plantarum CQPC10 on Activated Carbon-Induced Constipation in Institute of Cancer Research (ICR) Mice
Chinese Paocai is a traditional fermented food containing an abundance of beneficial microorganisms. In this study, the microorganisms in Szechwan Paocai were isolated and identified, and a strain of lactic acid bacteria (Lactobacillus plantarum CQPC10, LP-CQPC10) was found to exert an inhibitory effect on constipation. Microorganisms were isolated and identified via 16S rDNA. Activated carbon was used to induce constipation in a mouse model, and the inhibitory effect of LP-CQPC10 on this induced constipation was investigated via both pathological sections and qPCR (quantitative polymerase chain reaction). A strain of Lactobacillus plantarum was identified and named LP-CQPC10. The obtained results showed that, compared to the control (model) group, LP-CQPC10 significantly counteracted the constipation-induced reduction in the amount, weight, and water content of faeces. The defecation time of the first tarry stool was significantly shorter in the LP-CQPC10 groups than in the control group. The activated carbon progradation rate was significantly higher than in the control group, and the effectiveness was improved. LP-CQPC10 increased the serum levels of MTL (motilin), Gas (gastrin), ET (endothelin), AchE (acetylcholinesterase), SP (substance P), and VIP (vasoactive intestinal peptide), while decreasing the SS (somatostatin) level. Furthermore, it improved the GSH (glutathione) level and decreased the MPO (myeloperoxidase), MDA (malondialdehyde), and NO (nitric oxide) levels. The results of qPCR indicated that LP-CQPC10 significantly up-regulated the mRNA expression levels of c-Kit, SCF (stem cell factor), GDNF (glial cell-derived neurotrophic factor), eNOS (endothelial nitric oxide synthase), nNOS (neuronal nitric oxide synthase), and AQP3 (aquaporin-3), while down-regulating the expression levels of TRPV1 (transient receptor potential cation channel subfamily V member 1), iNOS (inducible nitric oxide synthase), and AQP9 (aquaporin-9). LP-CQPC10 showed a good inhibitory effect on experimentally induced constipation, and its effectiveness is superior to that of Lactobacillus bulgaricus, indicating the good probiotic potential of this strain.
Introduction
The preparation of Sichuan Paocai (fermented Chinese cabbage) follows a number of steps: fresh vegetables are washed and sealed in a jar for anaerobic fermentation in salt water [1]. Salt water plays a very important role in the exudation of vegetable juice, and the soluble components (saccharides and nitrogenous substances) are metabolized by lactic acid bacteria. Acidic substances are generated and flavor components are metabolized, giving Paocai its unique crispy taste [2]. An abundance of natural lactic acid bacteria can be found in Paocai, and these play a central role in the formation of both flavor and quality [3]. Several lactic acid bacteria are also used as probiotics with a number of excellent health benefits for humans, including the prevention of constipation, colitis, liver injury, and diabetes [4][5][6][7]. The differences in the types of lactic acid bacteria are caused by many factors, such as region, climate, and the method of preparation. To better utilize these microorganisms, methods for isolation and identification should be further developed in order to accumulate these abundant bacterial resources for the development of probiotics. The microorganisms in Sichuan Paocai include Lactobacillus plantarum, Lactobacillus casei, Saccharomyces cerevisiae, Lactobacillus acidophilus, and Brevibacterium spp. [8][9][10]. Several Paocai variations are common in the East Asian region. The lactic acid bacteria in Paocai are good leavening agents; they can be used to ferment food, as well as to prepare functional foods, due to their bioactivity. The microorganisms used in this study (lactic acid bacteria) were isolated and identified from Sichuan Paocai.
In addition to probiotics, the intestinal tract contains several harmful bacterial species. Under normal conditions, these are in a state of equilibrium [11]. The probiotics in the intestinal tract participate in digestion, preventing dyspepsia and digestive tract dysfunction [12]. Mediated by lactic acid metabolism, lactic acid bacteria can effectively inhibit both the growth and reproduction of harmful bacteria in the gastrointestinal tract and maintain the intestinal ecological balance and normal function. Imbalance within the intestinal lactic acid bacteria is related to chronic diarrhea, constipation, abdominal distension, and dyspepsia [13]. Lactic acid bacteria can not only activate the phagocytosis of macrophages, but they also play an important role in intestinal colonization. Lactic acid bacteria stimulate peritoneal macrophages, induce interferon, promote cell division, generate antibodies, promote cellular immunity, improve both the non-specific and specific immune response, and improve the body's ability to repair tissue damage and malfunction [14,15]. Constipation is a problematic medical condition that manifests as difficulty in defecating and dryness of faeces [16]. Constipation leads to slow intestinal peristalsis and an increase in harmful bacteria, which cause additional intestinal tract diseases [17]. Lactic acid bacteria have been used to remedy constipation because they can generate organic acids within the intestinal tract, repair and promote intestinal function, reduce the pH in the enteric cavity, regulate neuromuscular activity, improve the peristalsis function of the intestinal tract, and promote both digestion and absorption. Moreover, they can effectively inhibit the proliferation of putrefying bacteria in the intestinal tract and improve the intestinal environment, softening the faeces and thus facilitating defecation [18].
Constipation is largely a condition of poor lifestyle. Intestinal regulation by probiotics can effectively prevent constipation, but few probiotics with a strong constipation-preventing effect are available, so finding more effective strains in traditional fermented foods is a focus of current research. In this study, activated carbon was used to disturb the normal physiology of the small intestine, leading to constipation in mice. An inhibitory effect of LP-CQPC10 on constipation was observed, and the mechanism of this effect was investigated with molecular biology experiments, which provides a theoretical basis for the application of this bacterial strain.
Isolation and Identification of Lactic Acid Bacteria
In this experiment, pickled vegetables were collected from naturally fermented pickles sold in the market of Nan'an District, Chongqing, China. Paocai water solution (1 mL) was serially 10-fold diluted with sterile physiological saline down to 10⁻⁶, and 100 µL of the 10⁻⁴, 10⁻⁵, and 10⁻⁶ dilutions was spread on plates and incubated at 37 °C for 24-48 h. The morphology of bacterial colonies was recorded. Colonies with different morphologies were picked and streaked. After 48 h at 37 °C, a single colony was picked and streaked again, and this step was repeated three times until a single colony with consistent morphology was obtained. The pure colony was seeded into MRS culture medium (5 mL) and cultured at 37 °C for 24 h. Then, 1 mL of the above culture medium containing bacteria was centrifuged at 4,000 rpm for 10 min. The supernatant was discarded, and the bacteria were resuspended in sterile physiological saline and then stained. The suspected purified target strain was seeded into MRS broth. After 18-24 h at 37 °C, DNA (Tiangen Biotech (Beijing) Co., Ltd., Beijing, China) was extracted. 16S rDNA was amplified via polymerase chain reaction (PCR) using 1 µL upstream primer 27F (5′-AGAGTTTGATCCTGGCTCAG-3′, SEQ ID No. 1), 1 µL downstream primer 1495R (5′-CTACGGCTACCTTGTTACGA-3′, SEQ ID No. 2) (Thermo Fisher Scientific, Inc., Waltham, MA, USA), 12.5 µL 2× Taq plus Buffer, and 1 µL template DNA. The system was brought to 25 µL with sterile ddH₂O. Sterile ultrapure water was used to replace the template DNA as a negative control. Amplification conditions were: 94 °C for 5 min. 5 µL of the amplified product was used for agarose gel electrophoresis (agarose concentration 1.5%, electrophoresis at 110 V for 45 min; SimpliAmp Thermal Cycler, Thermo Fisher Scientific, Inc., Waltham, MA, USA). Then, the 16S rDNA amplification products were sequenced [19]. Meanwhile, the abilities of the microorganisms to resist artificial gastric acid and bile salt were tested by Chen's method [6].
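Conceptually, the species assignment from a 16S sequence reduces to a percent-identity comparison against reference sequences; a toy version of that calculation is sketched below in Python, with made-up short fragments standing in for real reads (actual identification, as in this study, is done with BLAST against NCBI databases).

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over the aligned length (toy version; real pipelines use BLAST)."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / min(len(seq_a), len(seq_b))

# Made-up fragments standing in for a query 16S read and a reference sequence:
query = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGC"
reference = "AGAGTTTGATCCTGGCTCAGGATGAACGCTGGCGGC"
print(f"{percent_identity(query, reference):.1f}% identity")  # ~97.2% (one mismatch in 36 bases)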
Animal Experiment
Sixty SPF-grade Institute of Cancer Research (ICR) female mice (six weeks old) were purchased from Chongqing Medical University. The mice were housed at a room temperature of 25 ± 2 °C and a relative humidity of 50 ± 5% on a 12 h day/night cycle and acclimated for one week. After acclimation, 50 mice were divided into five groups (n = 10): normal group, model group, Lactobacillus bulgaricus (LB) group, Lactobacillus plantarum CQPC10 low-concentration (LP-CQPC10-L) group, and Lactobacillus plantarum CQPC10 high-concentration (LP-CQPC10-H) group. The experimental cycle was 18 days. Physiological saline was administered to mice in the normal and model groups daily via gavage. Mice in the LB group were administered LB at 1.0 × 10⁹ CFU/kg, and those in the LP-CQPC10-L and LP-CQPC10-H groups were administered LP-CQPC10 at 1.0 × 10⁸ CFU/kg and 1.0 × 10⁹ CFU/kg, respectively. From Day 15 to 17, all mice except those in the normal group were administered 10% activated carbon ice water (0.2 mL) via gavage every day. All mice were weighed every day. The faeces were collected, and the fecal moisture content was calculated. On Day 17 after gavage, all mice were fasted but had free access to water for 24 h. On Day 18, all mice were administered 0.2 mL of ice water containing 10% activated carbon via gavage. The mice in each group were divided into two subgroups: the defecation time of the first tarry stool was observed in five mice after the administration of activated carbon ice water, and the remaining five mice were sacrificed 30 min after gavage, and their plasma was collected. The small intestine from the pylorus to the ileocecal junction was taken, and the length of the small intestine and the forward distance of activated carbon in the small intestine were measured. The progradation rate was calculated following published procedures: progradation rate (%) = length of gastrointestinal (GI) transit (cm)/length of small intestine (cm) × 100% [20]. The protocol for these experiments was approved by the Animal Ethics Committee of Chongqing Medical University on 5 March 2015, and the animal permit number is SYXK (Yu) 2017-0001.
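A direct transcription of the progradation-rate formula into code may help make the calculation explicit; the distances below are made-up example values, not measurements from this study.

def progradation_rate(transit_cm: float, small_intestine_cm: float) -> float:
    """Progradation rate (%) = GI transit length (cm) / small-intestine length (cm) x 100."""
    return transit_cm / small_intestine_cm * 100.0

# Illustrative example: carbon front advanced 28 cm along a 45 cm small intestine
print(f"{progradation_rate(28.0, 45.0):.1f}%")  # 62.2%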
Detection of Serum MTL, Gas, ET, SS, AchE, SP, and VIP Levels
The plasma was left to settle for 1 h and then centrifuged at 4,500 rpm for 15 min. The serum levels of MTL, Gas, ET, SS, AchE, SP, and VIP were measured with the relevant kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).
Detection of MPO, NO, MDA, and GSH Levels in Small-Intestine Tissue
Small-intestine tissues were homogenized and the levels of MPO, NO, MDA, and GSH were measured with relevant kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).
Histopathology of the Small Intestine
Small intestine tissue with a size of 0.5 cm² was taken and immediately placed into 10% formalin for 48 h. Tissue samples were dehydrated, cleared with xylene, immersed in wax, embedded, sectioned, and stained with H&E. Their morphology was observed under an optical microscope (BX43, Olympus, Tokyo, Japan).
Quantitative PCR (qPCR) Assay
The total RNA from the small intestine was extracted following the instructions of the TRIzol kit (Thermo Fisher Scientific, Inc., Waltham, MA, USA). Both the purity and the concentration of total RNA were measured with an ultramicro spectrophotometer, and the RNA concentration was standardized to one level (1 µg/µL). An RNA sample (1 µg/µL) was mixed with 1 µL oligo(dT) primer and 10 µL ultrapure water (Thermo Fisher Scientific, Inc., Waltham, MA, USA) and then incubated at 65 °C for 5 min. Then, the system was mixed with 1 µL RiboLock RNase Inhibitor, 2 µL 100 mM dNTP mix, 4 µL 5× Reaction Buffer, and 1 µL RevertAid M-MuLV RT. The whole 20 µL mixture was used to synthesize cDNA at 42 °C for 60 min and 70 °C for 5 min. The target genes (Table 1; Thermo Fisher Scientific, Inc., Waltham, MA, USA) were reverse transcribed and amplified. The reaction conditions were: denaturation at 95 °C for 15 min, annealing at 60 °C for 1 h, and extension at 95 °C for 15 min, for a total of 40 cycles. GAPDH was used as a housekeeping gene, and the relative expression level of each target gene was calculated via the 2^−ΔΔCT method [21].
Table 1. Sequences of primers used in this study.
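For readers unfamiliar with the 2^−ΔΔCT calculation, a minimal sketch is given below; the Ct values are invented for illustration, with GAPDH as the housekeeping gene as in this study.

def relative_expression(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative mRNA expression via the 2^-ddCT method (Livak and Schmittgen)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to housekeeping gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # compare to the control group
    return 2 ** (-dd_ct)

# Illustrative Ct values only (target gene vs. GAPDH):
print(relative_expression(24.1, 18.0, 26.3, 18.1))      # ~4.3-fold up-regulation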
Statistical Analysis
The standard deviation was calculated for the data of each group. Significant differences (p < 0.05) between the groups were then analyzed by one-way ANOVA with Duncan's multiple range test, using SAS v9.1 statistical software (SAS Institute Inc., Cary, NC, USA).
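As a rough illustration of this analysis pipeline, the sketch below runs a one-way ANOVA in Python; SciPy does not ship Duncan's multiple range test, so the post-hoc step is only noted in a comment, and the three groups of values are invented for illustration.

from scipy import stats

# Hypothetical faecal water content (%) for three groups (illustrative only):
normal = [62.1, 60.8, 63.5, 61.9, 62.7]
model = [48.3, 47.1, 49.9, 46.8, 48.5]
lp_cqpc10_h = [58.2, 57.4, 59.9, 58.8, 57.1]

# One-way ANOVA across the three groups (the study used SAS; this mirrors the idea)
f_stat, p_value = stats.f_oneway(normal, model, lp_cqpc10_h)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
# A post-hoc multiple-comparison test (Duncan's in the study) would then be applied
# to identify which specific group pairs differ.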
Isolation and Identification of LP-CQPC10
The lane of the strain shows a band at about 1,500 bp (Figure 1), matching the expected amplification fragment length. BLAST (Basic Local Alignment Search Tool) from NCBI was used to compare sequences. The results showed that the strain was a lactic acid bacterium with 99% homology to Lactobacillus plantarum (GenBank, NC_010610.1), and it was named Lactobacillus plantarum CQPC10 (LP-CQPC10).
In this study, 11 kinds of lactic acid bacteria were found, including five strains of Lactobacillus plantarum and six strains of Lactobacillus fermentum. As shown in Table 2, LP-CQPC10 showed high survival rates in pH 3.0 artificial gastric juice and 0.3% bile salt. LP-CQPC10 therefore had good resistance to gastric acid and bile salt and was worthy of further probiotic potential experiments.
Defecation of Mice
As shown in Table 3, at Day 14, the weight, amount, and water content of faeces did not change significantly between the groups. After activated carbon induced constipation, all mice except those in the normal group showed significantly reduced defecation amount, weight, and water content. The reduction in the model group was the largest, while that in the LP-CQPC10-H group was the smallest. The indices in the LP-CQPC10-L group were significantly higher than those in the LB group.
Defecation Time of the First Black Stool
To evaluate the influence of LP-CQPC10 on the defecation of mice induced by activated carbon, the defecation time of the first black stool was recorded for all mice after they received activated carbon via gavage for the last time. As shown in Figure 2, the time in the control group was the longest, and significantly (p < 0.05) longer than that in the normal group. For the other three groups, although the time was longer than in the normal group, only the model group showed a significant difference (p < 0.05). The defecation time of the first black stool in the mice administered LP-CQPC10-H was significantly shorter than in the groups administered LP-CQPC10-L and LB.
Activated Carbon Progradation Rate in the Small Intestine
At 30 min after 10% activated carbon ice water was administered via gavage, the mice were sacrificed in order to observe the forward distance of the activated carbon in the small intestine and thus calculate the progradation rate. As shown in Table 4, the length of the small intestine did not differ significantly between groups, suggesting that activated carbon modeling does not influence small intestine length. The progradation rate in the control group was significantly lower than that in the normal group (p < 0.05). The activated carbon progradation rates in mice from the LB, LP-CQPC10-L, and LP-CQPC10-H groups were significantly improved as compared to those of the control group (p < 0.05), and the rate of the LP-CQPC10-H group was the closest to that of the normal group. This suggests that LP-CQPC10 could promote small intestine peristalsis, accelerate the forward speed of activated carbon in the small intestine, reduce its detention time in the small intestine, and improve constipation. The concentration was positively correlated with effectiveness.

Serum MTL, Gas, ET, AchE, SS, SP, and VIP Levels

As shown in Table 5, the serum MTL, Gas, ET, AchE, SP, and VIP levels in mice of the normal group were the highest, whereas the SS level was the lowest; those in the control group followed the opposite trends. Compared to the control group, the MTL, Gas, ET, AchE, SP, and VIP levels in the LB, LP-CQPC10-L, and LP-CQPC10-H groups were significantly improved (p < 0.05), and the SS level was significantly reduced (p < 0.05), especially in the LP-CQPC10-H group.
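For clarity, a small Python sketch of the progradation-rate computation described at the start of this subsection (advance distance of the carbon front over the total small-intestine length); the distances below are hypothetical, not measured values.

```python
# Sketch of the progradation-rate computation referred to above;
# both distances are hypothetical examples.
def progradation_rate(advance_cm: float, intestine_cm: float) -> float:
    """Percentage of the small intestine traversed by the carbon front."""
    return advance_cm / intestine_cm * 100.0

print(f"{progradation_rate(28.5, 52.0):.1f}%")  # 54.8%
```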
MPO, NO, MDA, and GSH Levels in the Small-Intestine Tissue
The GSH level in the control group was the lowest, while the levels of MPO, NO, and MDA were the highest (Table 6); the normal group showed the contrary trend. LB, LP-CQPC10-L, and LP-CQPC10-H significantly improved the GSH level while reducing the MPO, NO, and MDA levels in the constipated mice (p < 0.05). Compared to LB and LP-CQPC10-L, LP-CQPC10-H most greatly improved the GSH level and decreased the MPO, NO, and MDA levels.
Morphological Observation of Small-Intestine Tissue
As shown in Figure 3, the intestinal villi in the normal group were orderly arranged without rupture or shrinkage; the opposite result was found in the control group. Although the intestinal villi in the LB, LP-CQPC10-L, and LP-CQPC10-H groups showed rupture or shrinkage to a certain extent, they were still more complete than in the control group. Moreover, the morphology of the intestinal villi in the LP-CQPC10-H group was the most consistent with that of the normal group.
mRNA Expression Levels of Cu/Zn-SOD, Mn-SOD, and CAT in Small-Intestine Tissue
As shown in Figure 4, the expression of Cu/Zn-SOD, Mn-SOD, and CAT was the highest in the normal group. After constipation was induced by activated carbon, their expression was significantly reduced (p < 0.05). LB, LP-CQPC10-L, and LP-CQPC10-H significantly up-regulated the expression of Cu/Zn-SOD, Mn-SOD, and CAT (p < 0.05), particularly LP-CQPC10-H.
mRNA Expression Levels of c-Kit and SCF in Small-Intestine Tissue
As shown in Figure 5, c-Kit and SCF mRNA expression was the highest in the normal group. Treatment with lactic acid bacteria up-regulated this expression, and the regulatory capability of LP-CQPC10-H was stronger than that of LB and LP-CQPC10-L.
mRNA Expression Levels of TRPV1 and GDNF in Small-Intestine Tissue
As shown in Figure 6, GDNF expression in the normal group was the strongest, while TRPV1 expression was the weakest. After induction of constipation, GDNF expression decreased in the control group, while TRPV1 expression increased. LB, LP-CQPC10-L, and LP-CQPC10-H could all up-regulate the expression of GDNF and down-regulate the expression of TRPV1. Moreover, the up- and down-regulation capability of LP-CQPC10-H was higher than that of LB and LP-CQPC10-L.
mRNA Expression Levels of nNOS, eNOS, and iNOS in Small-Intestine Tissue
The expression of NOS1 (nNOS) and NOS3 (eNOS) was the highest in the normal group, while that of NOS2 (iNOS) was the lowest (Figure 7). After treatment with activated carbon, both nNOS and eNOS expression significantly decreased (p < 0.05), while iNOS expression significantly increased (p < 0.05). Lactic acid bacteria inhibited the influence of activated carbon on this expression. The influence of LP-CQPC10-H on nNOS and eNOS was stronger than that of LB and LP-CQPC10-L, while the iNOS expression after LP-CQPC10-H treatment was lower than after LB and LP-CQPC10-L.
mRNA Expression Levels of AQP3 and AQP9 in Small-Intestine Tissue
As shown in Figure 8, AQP3 expression was the lowest in the normal group, while AQP9 was the highest; the expression trends of AQP3 and AQP9 were the opposite in the control group. The effect of LP-CQPC10-H on AQP3 and AQP9 was the closest to the normal group: AQP3 expression was significantly lower than in mice treated with LB and LP-CQPC10-L (p < 0.05), while AQP9 expression was significantly higher than in mice treated with LB and LP-CQPC10-L (p < 0.05).
Discussion
Constipation can affect normal life, and long-term constipation can induce other diseases, eventually severely threatening health [4]. Probiotics must survive passage to the intestines in sufficient numbers to colonize and exert their effects, which requires good resistance to gastric acid and bile salt; the survival rates in pH 3.0 artificial gastric juice and 0.3% bile salt can therefore preliminarily determine whether lactic acid bacteria have probiotic potential [6]. In this study, LP-CQPC10 showed better in vitro resistance to gastric acid and bile salt than the commercially used LB, so LP-CQPC10 is a strain with probiotic potential. The level of harmful microorganisms increases in response to constipation and the intestinal wall tissue is injured; when this happens, peristalsis is negatively influenced. Slow intestinal peristalsis is one of the causes of constipation [6]. Thus, the integrity of the intestinal villi is very important for the evaluation of constipation, which can be preliminarily judged via pathological observation [17]. In this study, LP-CQPC10 was preliminarily shown to have an inhibitory effect on constipation.
Observation of the faeces status is the most direct method. Constipation leads to a decrease in the defecation amount, and the faeces are dry [4]. In this study, we found that LP-CQPC10 could significantly remit these effects, and its effectiveness was superior to that of the commonly used LB. Furthermore, the defecation time of the first black stool was used to evaluate the severity of the resulting constipation: peristalsis is slowed down and the detention time in the intestinal tract is elongated [17]. In this study, the time in the control group was longer than in the other groups, and the time in the LP-CQPC10 groups was significantly decreased, thus showing constipation remission.
It has been reported that neurotransmitter levels (such as MTL, Gas, ET, SS, Ach, SP, and VIP) change in some patients with constipation. MTL has been used to evaluate gastrointestinal tract peristalsis and is widely considered to promote the motility of the gastrointestinal tract, while decreased release will slow peristalsis [22]. Gas is an important gastrointestinal hormone, which has been shown to promote gastric secretion, improve peristalsis, accelerate gastric emptying, and promote pyloric sphincter relaxation [23]. AchE is currently considered one of two neurotransmitters that play a very important role in the motility of the intestinal tract; AchE promotes peristalsis by binding to its receptor [24]. SP is an excitatory transmitter in gastrointestinal motor neurons. It greatly promotes the contraction of the smooth muscle in the digestive tract, stimulates water and electrolyte secretion in the small intestine and colonic mucosa, and promotes peristalsis [25]. The obtained results indicated that the MTL, Gas, AchE, and SP levels in the control group were significantly lower than in the normal group; however, these neurotransmitter levels significantly increased in the LP-CQPC10 groups. This suggests that the decrease in these levels was related to the constipation, and their increase by LP-CQPC10 indicates that it could remit the constipation. ET is a multi-functional peptide that plays a very important role in cardiovascular and intestinal tract function. SS can inhibit the release of gastrointestinal hormones, slow down gastric emptying, and reduce smooth muscle contraction, which might cause constipation [26]. VIP is an inhibitory neurotransmitter that stimulates peristalsis and promotes gastrointestinal motility [27]. The SS level was the highest in the control group, while it was significantly decreased in the LP-CQPC10 groups (p < 0.05), suggesting that LP-CQPC10 could have a preventive effect on constipation.
In normal metabolism, superoxide anions (O2−) and oxygen radicals participate in physiological reactions in the body; an imbalance will greatly increase O2− and oxygen radicals, which will disorder the metabolism [28]. It has been reported that Cu/Zn-SOD activity in patients with constipation is lower than in healthy people; the likely reason is that long-term retention and stimulation by hard faeces causes inflammation in the intestinal tract [29]. Mn-SOD activity is also reduced under inflammation [30]. SOD can transform harmful superoxide radicals into hydrogen peroxide; although hydrogen peroxide is still harmful to the body, CAT can degrade it into water. SOD and CAT thus form an antioxidant chain that remits the damage to the intestinal tract caused by constipation [31]. The results of our study indicate that LP-CQPC10 could effectively remit the reduced activity of Cu/Zn-SOD, Mn-SOD, and CAT caused by constipation. Interstitial cells of Cajal (ICC) are a type of special mesenchymal cells. A reduction in colonic ICC number, morphological changes, and abnormality of the cellular network structure will slow the peristalsis rate, further leading to slow transit constipation [32]. c-Kit is one of the specific markers for ICC, and SCF is the natural ligand of the c-Kit receptor [33]. It has been reported that the ICC density in the small intestine of patients with constipation is decreased, and that the reduced ICC number is related to down-regulation of the c-Kit gene in the sigmoid colon and reduced c-Kit protein and mRNA expression [34]. Our results showed that the mRNA expression of both c-Kit and SCF in the LP-CQPC10 groups was significantly increased (p < 0.05), suggesting that LP-CQPC10 could increase the ICC number and thus remit constipation.
TRPV1 is closely related to defecation, and activation of TRPV1 can trigger the release of neurotransmitters, leading to intestinal motility dysfunction. An increase in TRPV1 expression is an important manifestation of intestinal injury, and the damage caused by gastrointestinal tract diseases can increase TRPV1 expression in patients with constipation [35]. GDNF regulates ganglion cells and aids the repair of the damaged intestinal tract, while also preventing constipation. Constipation is related to the intestinal nervous system, leading to muscular tension and weak gastrointestinal motility [36]. Regulation of TRPV1 and GDNF expression is therefore one of the important mechanisms to remit constipation, and LP-CQPC10 exerts such a constipation-remitting effect.
NOS participates in the regulation of gastrointestinal motility. An increase in NOS will lead to an increase in the NO content, influencing intestinal function and leading to constipation [37]. A continuous increase of NO can cause a more severe colonic motility disorder [38]. Endothelial dysfunction can cause constipation, and a decrease in NO bioavailability is an important factor in this dysfunction [39]. NO is synthesized via NOS catalysis. Three subtypes of NOS have been described: NOS1 (nNOS), NOS2 (iNOS), and NOS3 (eNOS) [40]. Under normal physiological conditions, NO in vascular endothelial cells mainly originates from eNOS, whose main effect is the regulation of normal physiological function [41]. The expression of nNOS is greatly decreased in the small intestine of animals with constipation [42]. iNOS is not expressed in the resting state, while high levels of iNOS and NO are generated when the body is injured or under other pathological conditions [43]. Decreasing the NO content by controlling NOS is a feasible method to control constipation [42]. Here, we found that LP-CQPC10 could significantly up-regulate the expression of eNOS and nNOS, down-regulate the expression of iNOS, and remit constipation.
Aquaporins (AQPs) have been reported to specifically transport water, and some AQPs may participate in constipation by influencing the over-absorption of water in the colon and/or reducing the secretion of intestinal juice [44]. AQP3 in constipated rats has been reported to be significantly increased when compared to normal animals, and AQP3 participates in the water absorption of the colon from the enteric cavity. This suggests that the overexpression of AQP3 aggravates the water absorption of the colonic mucosa, leading to constipation [45,46]. AQP9 in the intestinal tract of constipated rats was significantly reduced. AQP9 participates in the secretion of colonic mucus, protects the mucous membrane, and promotes defecation. This suggests that when AQP9 is expressed at a low level, the secretion of goblet cell mucus is reduced, leading to constipation [46]. LP-CQPC10 could remit the influence of constipation on AQPs, reducing the expression of AQP3 and increasing the expression of AQP9.
Conclusions
This study investigated the effect of LP-CQPC10, isolated from Sichuan Paocai, on experimentally generated constipation (induced with activated carbon). The results indicate that LP-CQPC10 could remit the influence of constipation on defecation. The serum indices and small-intestine results clarified the effect of LP-CQPC10 on constipation inhibition, and the mechanism was further elucidated via molecular biological tests. LP-CQPC10 improved intestinal motility, maintained intestinal health, inhibited the influence of constipation on the intestinal nerves, and preserved normal physiological capability. The effect of LP-CQPC10 is positively correlated with the bacterial dose, and its effect on constipation is superior to that of the commonly used LB. The results of this study can guide the application of probiotics.
Figure 7. mRNA expression levels of nNOS, eNOS, and iNOS in the small-intestine tissue of mice. Values presented are the mean ± standard deviation; mean values with different letters (a-e) differ significantly.
Table 2. Abilities of strains to resist artificial gastric acid and bile salt.
Table 3. Stool status of mice treated with soybean milk during the experiment.
Table 6. MPO, NO, MDA, and GSH levels in the small-intestine tissue of mice with activated carbon-induced constipation.
Issues of Implied Trust in Ethical Hacking
This paper discusses the issues of implied trust in ethical hacking. Ethical hackers are considered to be professionals and experts in their field. It is well documented that there is an implied trust toward professionals who are entrusted to undertake a task. As in many similar professions, such as ICT and computer forensics, there is no uniform or mandated code of ethics that an ethical hacker must adhere to. Given the nature of hacking and the potential for misuse and access to sensitive and confidential information, the need to ensure professionalism is maintained through ensuring competence and ethical behaviour is critical.
Introduction
According to the 2017 Verizon Data Breach Investigations Report (DBIR), 62% of breaches feature hacking (Verizon, 2017). Similarly, the 2017 Telstra Cyber Security report predicts that 59.6% of threats in Asia and 52.6% in Australia will be from external hackers (Telstra, 2017).
Furthermore, the Identity Theft Resource Center identified over 16 million records exposed as a result of over 850 breaches (Identity Theft Resource Center, 2017). Although this is a drop compared to 2016, when over 36 million records were exposed, it cannot be concluded that breaches are on the decline; a single breach can expose millions of records, and with the personal information of potentially hundreds of thousands, if not millions, of individuals involved, this remains a significant issue.
In addition to data breaches, there has been a series of ransomware attacks. A hacking group called Shadow Brokers is believed to have leaked an NSA exploit named EternalBlue (Goodin, 2017). This exploit was used as one of the mechanisms to spread two strains of ransomware in 2017: the ransomware known as WannaCry in May, and Petya in June. The effect on victims of the malware was largely devastating, with some organisations forced to shut down until systems could be restored, resulting in significant lost revenue and potential litigation. Some victims have permanently lost data and systems, impacting the business and their clients (Coyne, 2017).
Whether it is a data breach or an attack of some other kind, a vulnerability needs to be exploited in order for it to be successful. These vulnerabilities could lie in specific systems and applications or in people and processes. They are typically discovered by security researchers and hackers, and are then exploited either directly by the hacker or using malicious software (malware) that is designed to seek out and exploit the flaws.
In order to defend against these types of attacks, a multilayered approach is generally adopted. Traditionally, a defensive approach of implementing technical controls has been used (Thomas, 2017): the implementation of firewalls, anti-virus software, and other access control systems has been the status quo. Over the past few years the focus has shifted from just the technical aspects of information security to include the "human factor" or people-based aspects (Eminağaoğlu, Uçar & Eren 2009, p223). With the dramatic increase in phishing (using emails to trick users into divulging secret information, such as usernames and passwords), the traditional controls are less effective. According to Verizon (2017), phishing attacks were the most prevalent form of social engineering attack to take place.
The contribution of this paper is a better understanding of one of the human factors, namely, that of the ethical hacker.
What is Ethical Hacking?
The traditional approach of utilising multilayered technical defences has been augmented over the past decade through the development of a security culture within organisations, including the implementation of security awareness programs. These programs helped users to identify suspicious emails, to use good password practices, and to safeguard their information. These strategies are all known as 'defensive' strategies, because they seek to defend a network or systems from attack by a malicious attacker.
There is, however, also a set of offensive strategies that can be undertaken. Instead of trying to stop an attack, an offensive strategy launches an attack against a network, with the aim of identifying weaknesses that can then be remediated. These offensive engagements are known as penetration tests or red-teaming and are conducted by a specific type of hacker, called an ethical hacker. Ethical hackers use the same tools and techniques as the malicious hackers; however, they do this to test the security of the target network (Graves, 2010, p3). What sets an ethical hacker apart from other types of hackers is that an ethical hacker is given permission to conduct the hack by the owner of the network.
Types of hackers
When it comes to classifying hackers, there are generally three types. Hackers are classified into three categories based on their motives, each identified by a 'hat' colour.
Black Hats
Black hat hackers are the malicious hackers, also known as 'crackers' (Graves, 2010, p3). This type of hacker operates illegally, and their motives are usually personal gain or causing mischief (Thomas, 2017). Often black hat hackers obtain confidential information, such as credit card details or personal information, that can then be sold on channels like the dark web (a set of websites not accessible through traditional search engines and effectively hidden (Egan, 2017)) and used to commit fraudulent transactions or steal identities, to name a few uses.
White Hats
White hat hackers, also known as ethical hackers, are hired to hack into systems and networks for the purpose of identifying security weaknesses and vulnerabilities. After identifying these vulnerabilities, a white hat will report their findings back to the owner of the network they assessed, who can then work to remediate the findings.
Grey Hats
In between black and white hats is the grey hat hacker. The motives of grey hats aren't generally personal, nor are they provided permission by the system owner to hack the target system. Instead, a grey hat hacker may be motivated by a cause, known as hacktivism (Hargrave, 2012), or be sanctioned by a nation state to attack an adversary or gain intelligence.
Methodology
An analysis of the current literature on implied trust and professionalism issues in ethical hacking was undertaken. To perform this review, Google Scholar was used to identify the currently available literature. The following search queries were performed:
• "penetration testing" | "ethical hacking" | "red team"
• ("penetration testing" | "ethical hacking" | "red team") & ("implied trust" | professionalism)
• ("penetration testing" | "ethical hacking" | "red team") & ("implied trust")
The first query, designed to identify all indexed literature on penetration testing and related terms (using an "OR" operator), returned 17,300 records. To filter this further, the second query required either "implied trust" OR "professionalism" as part of the search, which reduced the results to 677 records. Finally, a third search was performed to only look at articles that include "implied trust" as a key word. This final search returned 18 results, which represents just 0.1% of the articles written on penetration testing. Papers that did not discuss both ethical hacking and implied trust issues, either directly or indirectly, were not included as part of this paper.
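The proportions quoted in the analysis below follow directly from these record counts; a trivial Python check using only the numbers reported above:

```python
# Quick check of the proportions derived from the Google Scholar counts.
total_articles  = 17_300   # first query: all penetration-testing literature
professionalism = 677      # second query: + "implied trust" | professionalism
implied_trust   = 18       # third query: + "implied trust" only

print(f"implied trust:   {implied_trust / total_articles:.1%}")    # ~0.1%
print(f"professionalism: {professionalism / total_articles:.1%}")  # ~3.9%
```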
Penetration testing and trust
By nature, and to be effective, ethical hacking involves trying to gain access to a system to access confidential and sensitive information. This means that a certain level of trust needs to be established between the ethical hacker and the party engaging them. Trust is conceptualised as the belief of a person that another party, upon whom the individual is dependent, will act in his/her interests (Tutzauer, n.d., p5). A professional has superior knowledge, requiring the other party to trust them (Al-Saggaf, Burmeister, and Schwartz, 2017). Li, Rong and Thatcher (2012) explain how one party has a willingness to be vulnerable to the other to carry out the task, irrespective of the ability to monitor or control them (Li, Rong, Thatcher, 2012, p20). There are also a number of ethical considerations, and various countries have laws regarding the safeguarding of privacy that need to be considered (Thomas, Duessel, Meier, 2017, p11), something that could be an issue for an ethical hacker who tests a multinational organisation.
Penetration testing is a highly technical and complex field. An ethical hacker requires deep knowledge across many areas, including, but not limited to, software, hardware, networking, and even human behaviour. The knowledge required by a highly effective ethical hacker includes detail of how these areas work at their most basic level, such as the OSI model (the reference model that shows the layers of how communication occurs on a network ("The OSI Model's Seven Layers Defined and Functions Explained", n.d.)), software code, and even electronic signals. Because of this, it can be very difficult to evaluate the effectiveness of an ethical hacker, especially if this knowledge isn't possessed by the evaluator. Fabian (2009) highlights that the ability to evaluate a professional's abilities from the outside can be difficult, if not impossible, and a certain level of belief is required (Fabian, 2009, p54).
To date, there has been little research on ethical issues in ethical hacking. However, there has been some research on ethical issues and issues of professionalism among ICT professionals. Whilst not solely an ICT profession, ethical hacking crosses into the ICT domain, as many of the systems involved in the hacking process are either ICT systems or leverage the use of ICT systems.
As ICT is a relatively new profession (Burmeister 2015), it can also be perceived as immature. There is currently neither a mandatory nor a unified code of ethics within ICT (Burmeister 2013; Capurro and Britz 2010; Whitehouse et al. 2016). The absence of a code of ethics with consequences for violations increases the risk of a variety of inappropriate behaviours, including misrepresentation, taking credit for others' work, privacy and confidentiality issues, and failure to comply with laws. Licensing is also not generally a requirement for ICT professionals (Fabian 2009). All of this is also true for information security professionals and ethical hackers. Although the mainstream certifications such as EC-Council's Certified Ethical Hacker (CEH), ISC2's Certified Information Systems Security Professional (CISSP), and ISACA's Certified Information Security Manager (CISM) all require acceptance of and adherence to their respective codes of ethics, these codes are not uniform and are only required of those who have achieved the certifications.
Although the title "Ethical Hacker" implies ethical behaviour, this may not always be the case. For instance, an ethical hacker needs to keep their knowledge of exploits up to date, and they will likely need to go "underground" to gain this knowledge (Conran 2014). Because ethical hackers may utilise questionable means to gain intelligence, their professional ethics may be called into question. Although such questionable activities are likely justified as being for the greater good, this raises the question: at what point may this justified ethical behaviour become blurred and the practices of the ethical hacker become unethical? Given the already identified need for a specialised skill set and experience to be an effective ethical hacker, it is not out of the question that an 'ethical hacker' may once have been a black hat/malicious hacker. A good example of this is Kevin Mitnick; Mitnick is now a 'white hat hacker' and security consultant, however, in the 1990s he was a notorious hacker who was arrested by the FBI and convicted of seven counts of wire and computer fraud (Gengler, 1999, p6). Many organisations perform online background checks and review the social networking accounts of applicants as standard practice (Stuart et al. 2015). But this background checking assumes that there's something to find and isn't by any means foolproof.
Current Literature Analysis
Of the articles written on ethical hacking, only 0.1% discuss implied trust and 3.9% discuss professionalism. As shown in Figure 1, prior to 2001, no literature discussing implied trust and ethical hacking was found. The largest spike was in 2013, when five articles were published. 2013 saw a few significant large breaches, including the Target breach and the Adobe breach, and over 822 million records exposed (Hawes, 2014), which could explain the spike during that year.
Figure 1 - Articles published per year on ethical hacking and implied trust.
As described previously, there is currently no uniform or mandatory code of conduct for ethical hacking. This same concern has been raised in regard to ICT professionals, where it has been recommended that the ACS code of ethics be mandated through national regulations (Bowern, Burmeister, Gotterbarn, Weckert, 2006, p175). More closely, Gay (2012) discusses how implied trust exists toward so-called experts even without any standard certification or code of conduct (Gay, 2012, p13). There is also generally no licensing requirement for ICT professionals (Fabian, 2009, p54), and this applies to ethical hackers too. It has been highlighted that there is a level of incompetence in the field of digital forensics, which can lead to issues with investigations, and that the lack of a standard code could contribute to the issue. Additionally, a survey of ICT professionals in the UK found that one third of IT personnel misused their privileges and searched the corporate network for confidential information, including salary information, personal information, board minutes and personal emails (Survey Reveals Scandal of Snooping IT Staff, 2008, p24). Uncovering access to some of these items may be part of an ethical hacker's engagement, but ensuring appropriate ethical behaviour in the handling of such confidential information could be a concern.
Ethical hacking, like digital forensics, falls into the "Information Security" field; they are simply different subsets, but are still prone to the same issues and vulnerabilities, such as misuse of information and the need to ensure competence of the professional. Much of the literature, although it discusses ethical hacking and implied trust, does not actually correlate the two. The implied trust discussions in the existing literature are focused on the context of implied trust toward systems and platforms, such as trust toward security platforms (e.g. authentication systems) and well-known websites (e.g. Facebook), or on how implied trust is taken advantage of by an attacker, such as spoofing an email as part of a phishing attempt (Cole, 2002, p51).
What is noteworthy is that the same implied trust manipulation a malicious attacker uses to trick a victim is how an ethical hacker manipulates a target as part of a test. Other literature simply discusses the ethics of teaching ethical hacking to students. Students may use the techniques they have learned irresponsibly, inappropriately or in an illegal manner, which some security educators consider to be unethical and socially irresponsible (Trabelsi, McCoey, 2016, p3-5). Teaching students to hack provides them with knowledge of how to cause damage to computer networks (globally) with the help of university lecturers, which could pose an unimaginable threat (Jamil, Khan, 2011, p3758). A study undertaken at a Canadian university noted that there are concerns about the compromise of personal information by the ethical hacker that may result from conducting a penetration test (Abu-Shaqra, Luppicini, 2016, p67).
The focus on education, however, leaves out one area completely, and it might prove fruitful ground for further research, namely: "Are these formally trained ethical hackers any match for the 'real' hackers?" This is an area that does not appear to be addressed in any of the literature reviewed, and yet would appear to be a logical extension of the 'educative' focus of several of the articles; that is, testing the efficacy of the ethical certifications currently being spruiked. Jamil et al. (2011) suggest that mandatory security background checks should be undertaken for people who are partaking in ethical hacking courses. Conducting these checks forms part of good due diligence activities, which many security frameworks, such as the International Organization for Standardization (ISO) ISO27001 framework, include (International Organization for Standardization, n.d.). The adoption of such a framework by an organisation, however, is not mandatory. Whilst some industries have regulatory bodies that mandate that background checks are completed, such as the Securities and Exchange Commission (SEC) and Financial Industry Regulatory Authority (FINRA) in the USA, and the Australian Securities and Investments Commission (ASIC) and the Australian Tax Office (ATO) in Australia, this requirement does not apply uniformly across all industries. Additionally, a background check is not likely to provide complete protection, but rather assists in lowering risk to an acceptable level.
Current Codes of Conduct
As described, there are currently a number of codes of conduct available from various certification bodies around the world (Burmeister, 2017).
Australian Computer Society (ACS) Code of Ethics
Founded in 1966, the Australian Computer Society is a professional association for the information, communications, and technology (ICT) industry. Although it has historically focused on ICT professionals, the ACS launched its cyber security certification for ICT professionals in September 2017 (Pollitt, 2017). All members of the ACS must adhere to the code of ethics.
CREST Code of Conduct
CREST is a not-for-profit organisation that originated in the United Kingdom, but has since launched chapters across Europe, the Middle East, Africa and India (EMEA), the Americas, Asia, and Australia and New Zealand. CREST's purpose is to provide a level of assurance that organisations and their security staff have a level of competence and qualification in conducting security work such as penetration testing, threat intelligence or incident response (CREST®, n.d.). CREST qualified professionals must abide by the CREST Code of Conduct, which is fairly detailed and covers requirements such as meeting regulatory obligations, adequate project management, competency, client interests, confidentiality, and ethics (CREST®, 2016).
EC-Council Code of Ethics
The International Council of E-Commerce Consultants, known as EC-Council, was formed after the September 11, 2001 attacks in the United States to address cyber-attack threats (EC-Council, n.d.). EC-Council is best known for its Certified Ethical Hacker (CEH) certification, which is recognised as a US Department of Defense (DoD) 8570 cyber security certification. The EC-Council Code of Ethics requires confidentiality of discovered information, ensuring that any process or software obtained is legal and ethical, ensuring proper authorisation, adequate project management, continuing professional development, ethical conduct, and not being convicted of any crimes (EC-Council, n.d.).
GIAC Code of Ethics
Global Information Assurance Certification (GIAC) provides some of the most well-known and highly regarded certifications in the security industry, including penetration testing, security management and digital forensics certifications. GIAC was established in 1999 to provide assurance of the skills of information security professionals (GIAC, n.d.). The GIAC Code of Ethics is broken into four sections: respect for the public, respect for the certification, respect for the employer, and respect for oneself. The code mandates that professionals will take responsibility and act in the public's best interests, ensure ethical and lawful conduct, maintain confidentiality and competency, accurately represent their skills and certifications, and avoid conflicts of interest (GIAC, n.d.).
ISACA Code of Professional Ethics
ISACA is a professional body established in 1969, with over 140,000 members worldwide, that focuses on IT governance (ISACA, n.d.). Formerly known as the Information Systems Audit and Control Association and focused on IT audit and assurance, ISACA now also includes training and certification for information security and cyber security professionals. The ISACA Code of Professional Ethics mandates compliance with standards and procedures, due diligence and professional care, legal conduct, confidentiality, competency, and continuing professional development (ISACA, n.d.).
ISC 2 Code of Ethics
ISC2 is an international, non-profit organisation with over 125,000 members in the information security profession (ISC2, n.d.). ISC2's Code of Ethics consists of four directives: protect society and the public interest; act honourably, honestly, justly, responsibly and legally; be competent; and advance and protect the profession (ISC2, n.d.).
As previously stated, all current codes of conduct are voluntary and only applicable to individuals who are members or certified individuals of the respective body. There are some certification bodies, however, that do not have a code of conduct requirement. An example is Offensive Security, which provides in-depth training and certification on ethical hacking; its examination, which requires successfully passing a hands-on lab test in order for a candidate to obtain the credential, is regarded as one of the most difficult and highly regarded certifications. For those codes that do exist, although they contain similar directives, they are all different and include different levels of detail.
Developing a Mandatory, Uniform Code of Conduct
As identified, there are codes of conduct and ethics available from numerous professional and certification bodies. These codes, however, are only mandatory for those who are members of or certified by the respective body. There are many similarities between the codes, but they are not completely in alignment. There is no identified direct conflict between codes, and there are certainly useful attributes from each code that could be used to form a uniform code of conduct for ethical hackers and cyber security professionals alike. In order for such a code to be effective, it would need to be mandatory and have adequate oversight. Examples of such oversight include GIAC's Ethics Council and ISACA's Ethics Committee, which review ethics matters that don't comply with their codes and take action accordingly.
In other professions, such as law, medicine, and accounting, we see such mandatory codes and the need for those codes to develop and adapt to economic changes, government influence, and changes within the profession (Backof, Martin, 1991). In Australia, legislation such as the Legal Profession Uniform Law is in force (New South Wales Government, 2015); it applies to all practicing lawyers and must be complied with. The purpose of the legislation is to ensure all lawyers act ethically and comply with the required provisions; ethical hackers, who can potentially access highly confidential and sensitive information and are entrusted to do so, should have similar requirements applied.
Unlike most doctors, lawyers and accountants, many cyber security professionals engage with organisations across borders, either locally or internationally. This is especially true when they are engaged by multi-national companies to review and test their security. This increases the importance of a unified code that is suitable on a global scale and applies to all cyber security professionals engaging in practices such as ethical hacking.
Conclusion
The value of using ethical hackers as part of a good security strategy is evident, and their use is likely to increase. There are many ethical implications that need to be considered. Because ethical hackers use the same techniques as malicious attackers, such as the email spoofing example, and often research and gain intelligence through the same questionable channels, there is a fine line between an ethical white hat hacker and a malicious black hat hacker; this further highlights the importance of appropriate professionalism and ethical behaviour.
Because of the implied trust relationship between an ethical hacker and the client, the ethical hacker is effectively given permission to access any information they can, much of which could be confidential or sensitive in nature. It has been identified that ICT professionals have snooped and misused their privileges, and there is no reason why an ethical hacker could not do the same; further research in this area is warranted.
It is clear that implied trust is an issue, and there is merit in further research in this area. This research could include identifying whether there is merit in developing a mandatory, unified code of conduct that applies to ethical hackers and helps ensure appropriate ethical behaviour and levels of competence before an ethical hacker can or should be engaged, or whether some form of licensing requirement is needed.
Cognate Identification for a French - Romanian Lexical Alignment System: Empirical Study
This paper describes a cognate identification method used by a lexical alignment system for French and Romanian. We combine statistical techniques and linguistic information to extract cognates from lemmatized, tagged and sentence-aligned parallel corpora. We evaluate the cognate identification model and compare it to other methods using purely statistical techniques. We show that the use of linguistic information in the cognate identification system significantly improves the results.
Introduction
We present a new cognate identification module required for a French-Romanian lexical alignment system. This system is used for French-Romanian law corpora. Cognates are translation equivalents presenting orthographic or phonetic similarities (common etymology, borrowings, and calques). They represent very important elements in a lexical alignment system for legal texts for two reasons: French and Romanian are two close Romance languages with a rich morphology, and Romanian borrowed and calqued legal terminology from French. So, cognates are very useful to identify bilingual legal terminology in parallel corpora, given that we do not use any external terminological resources for these languages. Cognate identification is one of the main steps applied in lexical alignment for MT systems. While several efficient tools exist for several European languages, few lexically aligned corpora or lexical alignment tools (Tufiş et al.) are available for Romanian-English or Romanian-German (Vertan and Gavrilă). In general, few linguistic resources and tools for Romanian (dictionaries, parallel corpora, terminological databases, MT systems) are currently available. Some MT systems use resources for the English-Romanian language pair (Marcu and Munteanu; Irimia; Ceauşu). Other MT systems develop resources for German-Romanian (Vertan and Gavrilă) or for French-Romanian (Navlea and Todiraşcu, 2010). Most of the cognate identification modules used by these systems were purely statistical, and no cognate identification method is available for the studied language pair. Cognate identification is a difficult problem, especially the detection of false friends. Inkpen et al. (2005) classify bilingual word pairs into several categories, such as:
- cognates (reconnaissance (FR) - recognition (EN));
- false friends (blesser 'to injure' (FR) - bless (EN));
- partial cognates (facteur (FR) - factor or mailman (EN));
- genetic cognates (chef (FR) - head (EN));
- unrelated pairs of words (glace (FR) - ice (EN) and glace (FR) - chair (EN)).
In our method, we rather identify cognates and partial cognates to improve lexical alignment. Thus, we aim to obtain a high precision for our method and to eliminate some false friends using statistical techniques and linguistic information. To identify cognates from parallel corpora, several approaches exploit the orthographic similarity between the two words of a bilingual pair. A simple method is the 4-gram method (Simard et al., 1992), which considers that two words are cognates if they each contain at least 4 characters and at least their first 4 characters are identical. Other methods exploit association scores such as Dice's coefficient (Adamson and Boreham, 1974) or a variant of this coefficient (Brew and McKelvie, 1996); this measure computes the ratio between the number of common character bigrams of the two words and the total number of bigrams of the two words. Also, two words are considered as cognates if the ratio between the length of the maximum common substring of ordered (and not necessarily contiguous) characters and the length of the longest word is greater than or equal to a certain empirically determined threshold (Melamed, 1999; Kraif, 1999). Similarly, other methods compute the edit distance between two words, that is, the minimum number of substitutions, insertions and deletions used to transform one word into the other (Wagner and Fischer, 1974). On the other hand, other methods compute the phonetic distance between the two words of a bilingual pair (Oakes, 2000).
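To make the surveyed measures concrete, the following Python sketch implements three of them: the 4-gram prefix test, a character-bigram Dice coefficient, and the longest-common-subsequence ratio. The definitions follow the citations above, but the example words and rounded values are ours, not results from the cited work.

```python
# Sketch of three orthographic-similarity measures surveyed above;
# example words are illustrative only.

def four_gram_cognates(w1: str, w2: str) -> bool:
    """Simard et al. (1992): >= 4 characters and identical first 4."""
    return len(w1) >= 4 and len(w2) >= 4 and w1[:4] == w2[:4]

def bigram_dice(w1: str, w2: str) -> float:
    """Shared character bigrams over all bigrams of the two words."""
    b1 = {w1[i:i + 2] for i in range(len(w1) - 1)}
    b2 = {w2[i:i + 2] for i in range(len(w2) - 1)}
    return 2 * len(b1 & b2) / (len(b1) + len(b2))

def lcs_ratio(w1: str, w2: str) -> float:
    """Melamed (1999): longest common subsequence (ordered, not
    necessarily contiguous) over the length of the longer word."""
    prev = [0] * (len(w2) + 1)
    for c1 in w1:
        cur = [0]
        for j, c2 in enumerate(w2, 1):
            cur.append(prev[j - 1] + 1 if c1 == c2
                       else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1] / max(len(w1), len(w2))

print(four_gram_cognates("publication", "publicarea"))        # True
print(round(bigram_dice("reconnaissance", "recognition"), 2)) # 0.35
print(round(lcs_ratio("facteur", "factor"), 2))               # 0.71
```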
Kondrak (2009) proposes methods identifying three characteristics of cognates: recurrent sound correspondences, phonetic similarity and semantic affinity. We present a French-Romanian cognate identification module. We combine statistical techniques and linguistic information (lemmas, POS tags) to improve the results of the cognate identification method, and we compare it with other methods using exclusively statistical techniques. The cognate identification system is integrated into a lexical alignment system. In the next section, we present our lexical alignment method. We present our parallel corpora and the tools used to preprocess them in section 3. In section 4, we describe our cognate identification method. We present the results' evaluation in section 5. Our conclusions and further work figure in section 6.
The Lexical Alignment Module
The output of the cognate identification module is exploited by a French-Romanian lexical alignment system. Our lexical alignment system combines statistical methods and linguistic heuristics. We use GIZA++ (Och and Ney, 2000, 2003), implementing the IBM models (Brown et al., 1993). These models realize word-based alignments: each source word has zero, one or more translation equivalents in the target language, computed from aligned sentences. Because these models do not provide many-to-many alignments, we also use some heuristics (Koehn et al., 2003; Tufiş et al., 2005) in order to detect phrase-based alignments such as chunks: nominal, adjectival, verbal, adverbial or prepositional phrases. We use lemmatized, tagged and chunk-annotated parallel corpora; these corpora are described in detail in the next section. To improve the lexical alignment, we use lemmas and morpho-syntactic properties. We prepare the corpus in the input format required by GIZA++, providing the lemma followed by the first two characters of the morpho-syntactic tag. This operation morphologically disambiguates the lemmas (Tufiş et al., 2005). For example, the same French lemma traité (=treaty, treated) can be a common noun or a participial adjective: traité_Nc vs. traité_Af. This disambiguation procedure improves the GIZA++ system's performance. In order to obtain a high accuracy of the lexical alignment, we realize bidirectional alignments (FR-RO and RO-FR) with GIZA++, and then we intersect them (Koehn et al., 2003). This heuristic only selects sure links, because these alignments are detected in both directions of the lexical alignment process. To obtain sure word alignments, we also use a set of automatically identified cognates: we filter the list of translation equivalents obtained by alignment intersection using a list of cognates. To extract cognates from parallel corpora, we developed a method adapted to the studied languages, combining statistical techniques and linguistic information; this method is presented in section 4. We also obtain sure word alignments using multiword expressions such as collocations. They represent polylexical expressions whose words are related by lexico-syntactic relations (Todiraşcu et al., 2008). We use a multilingual dictionary of verbo-nominal collocations (Todiraşcu et al.) to align them. This dictionary is available for French, Romanian and German; it is completed by data extracted from legal texts and contains the most frequent verbo-nominal collocations of this domain. The external resource is used to align this class of collocations (for legal corpora), but it does not resolve the alignment problems of other classes (noun + noun, adverb + adjective, etc.). Finally, we apply a set of linguistically motivated heuristic rules (Tufiş et al.) in order to augment the recall of the lexical alignment method:
i. we define some POS affinity classes (a noun can be translated by a noun, a verb or an adjective);
ii. we align content words such as nouns, adjectives, verbs, and adverbs, according to the POS affinity classes;
iii. we align chunks containing translation equivalents aligned in a previous step;
iv. we align elements belonging to chunks by linguistic heuristics.
At this level, we developed a supplementary module depending on the two studied languages. This module uses a set of 27 morpho-syntactic contextual heuristic rules, defined according to the morpho-syntactic differences between French and Romanian (Navlea and Todiraşcu, 2010).
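As a minimal illustration of the intersection heuristic, the following Python sketch keeps only the links found by both directional GIZA++ runs; the alignment sets are hypothetical toy data, with links given as (source index, target index) pairs.

```python
# Toy illustration of the alignment-intersection heuristic
# (Koehn et al., 2003): keep only links found by both directional runs.
fr_ro = {(0, 0), (1, 1), (2, 3), (3, 2)}   # FR->RO run (made up)
ro_fr = {(0, 0), (1, 1), (2, 3), (4, 2)}   # RO->FR run, reindexed FR->RO

sure_links = fr_ro & ro_fr                 # intersection = sure links
print(sorted(sure_links))                  # [(0, 0), (1, 1), (2, 3)]
```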
For example, in the Romanian relative clause, the direct object is simultaneously realized by the relative pronoun care 'that' (preceded by the preposition pe) and by a personal pronoun (îl, -l, o, îi, -i, le). In French, it is expressed by the relative pronoun que. Thus, we define a morpho-syntactic heuristic rule to align the supplementary elements from the source and the target language (see Figure 1). The rule aligns que with the sequence pe care (accusative) le.

Figure 1: Example of lexical alignment using morpho-syntactic heuristic rules (the case of the relative clause); the aligned pair is French les problèmes que créerait la publication and Romanian problemele pe care le-ar genera publicarea.
We focus here on the development of the cognate identification method used by our French -Romanian lexical alignment system. In the next section, we present the parallel corpora used for our experiments.
The Parallel Corpora

The parallel corpora used in our experiments are based on the Acquis Communautaire multilingual corpus, available in 22 official languages of the EU. It is composed of laws adopted by EU member states and EU candidates since 1950. For our project, we use a subset of 228,174 pairs of 1-1 aligned sentences from the JRC-Acquis, selected from the common documents available in French and in Romanian. We also use a subset of 490,962 pairs of 1-1 aligned sentences extracted from the DGT-TM. As the JRC-Acquis and the DGT-TM are legal corpora, we built other multilingual corpora for other domains (politics, aviation). Thus, we manually selected available French-Romanian texts from several websites according to several criteria: availability of the bilingual texts, reliability of the sources, translation quality, and domain.

We preprocess our corpora by applying the TTL tagger (Ion, 2007). This tagger is available for French and for Romanian as a Web service. Thus, the parallel corpora are tokenized, lemmatized, POS-tagged and annotated at chunk level. TTL uses the set of morpho-syntactic descriptors (MSD) proposed by the Multext project for French (Ide and Véronis, 1994) and for Romanian (Tufiş and Barbu). TTL's results are available in XCES format (see Figure 2).

<seg lang="fr"><s id="ttlfr.3">
<w lemma="voir" ana="Vmps-s">vu</w>
<w lemma="le" ana="Da-fs" chunk="Np#1">la</w>
<w lemma="proposition" ana="Ncfs" chunk="Np#1">proposition</w>
<w lemma="de" ana="Spd" chunk="Pp#1">de</w>
<w lemma="le" ana="Da-fs" chunk="Pp#1,Np#2">la</w>
<w lemma="commission" ana="Ncfs" chunk="Pp#1,Np#2">Commission</w>
<c>;</c>
</s></seg>
Figure 2: TTL's output for French
In the example in Figure 2, the lemma attribute represents the lemmas of lexical units, the ana attribute provides morpho-syntactic information and the chunk attribute marks nominal and prepositional phrases. We exploit this linguistic information in order to adapt the lexical alignment algorithm to French and Romanian. Thus, we study the influence of linguistic information on the quality of the lexical alignment.
Cognate Identification
We conducted our lexical alignment and cognate identification experiments on a legal parallel corpus extracted from the Acquis Communautaire. We automatically selected 1,000 1:1-aligned complete sentences (starting with a capital letter and ending with a punctuation sign). Each selected sentence has no more than 80 words. This corpus contains 33,036 tokens in French and 28,645 tokens in Romanian. We tokenized, lemmatized and tagged the corpus as described in the previous section. To extract French-Romanian cognates from this lemmatized, tagged and sentence-aligned parallel corpus, we exploit linguistic information: lemmas and POS tags. In addition, we use orthographic and phonetic similarities between cognates. To detect such similarities, we focus on the beginnings of words and ignore their endings. First, we use n-gram methods (Simard et al., 1992), where n = 4 or n = 3. Second, we compare ordered sequences of bigrams (ordered pairs of characters). Then, we apply some input-data disambiguation strategies: we iteratively extract sure cognates, such as invariant strings (abbreviations, numbers, etc.) or similar strings (3- and 4-grams), deleting them from the input data at each iteration; and we use cognate pair frequencies in the studied corpus. We consider as cognates the words of a bilingual pair simultaneously respecting the following linguistic conditions: 1) their lemmas are translation equivalents in two parallel sentences; 2) their lemmas are identical or show orthographic or phonetic similarities; 3) they are content words (nouns, verbs, adverbs, etc.) having the same POS tag or showing POS affinities. We filter out short words such as prepositions and conjunctions to limit noisy output, but we do not otherwise restrict lemma length.
We also detect short cognates such as il 'he' vs. el (personal pronouns), or cas 'case' vs. caz (nouns). We avoid ambiguous pairs such as lui 'him' (personal pronoun) (FR) vs. lui (possessive determiner) (RO), or ce 'this' (demonstrative determiner) (FR) vs. ce 'that' (relative pronoun) (RO). We classify the French-Romanian cognates detected in the studied parallel corpus into several categories:
1) cross-lingual invariants (numbers, certain acronyms and abbreviations); in this category, we also consider punctuation signs;
2) identical cognates (civil vs. civil);
3) similar cognates (at the orthographic or phonetic level):
a) 4-grams (Simard et al., 1992): the first 4 characters of the lemmas are identical, and the length of the lemmas is greater than or equal to 4 (autorité 'authority' vs. autoritate);
b) 3-grams: the first 3 characters of the lemmas are identical, and the length of the lemmas is greater than or equal to 3 (mars 'March' vs. martie);
c) 8-bigrams: the lemmas have a common, possibly discontinuous, sequence of characters among the first 8 bigrams; at least one character of each bigram is common to the two words, which allows the jump of one non-identical character (fonctionnement vs. funcţionare); this applies only to long lemmas, with a length greater than 7;
d) 4-bigrams: the lemmas have a common, possibly discontinuous, sequence of characters among the first 4 bigrams (rembourser 'refund' vs. rambursa; objet 'object' vs. obiect); this applies to long lemmas (length > 7) but also to short lemmas (length less than or equal to 7).
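The similarity categories above can be sketched in a few lines of Python (an illustrative approximation, not the authors' implementation; the exact boundary conditions in the paper may differ, and the function names are hypothetical):

def same_prefix(a, b, n):
    # n-gram test: the first n characters are identical and both lemmas
    # are at least n characters long
    return len(a) >= n and len(b) >= n and a[:n] == b[:n]

def bigrams(word, limit):
    # ordered bigrams of a word, restricted to the first `limit` bigrams
    return [word[i:i + 2] for i in range(min(limit, len(word) - 1))]

def bigram_match(a, b, limit):
    # each compared bigram position shares at least one character,
    # which tolerates the jump of a single non-identical character
    return all(set(x) & set(y) for x, y in zip(bigrams(a, limit), bigrams(b, limit)))

def cognate_category(a, b):
    # a, b: diacritic-free, orthographically adjusted lemmas
    if a == b:
        return "identical"
    if same_prefix(a, b, 4):
        return "4-gram"
    if same_prefix(a, b, 3):
        return "3-gram"
    if len(a) > 7 and len(b) > 7 and bigram_match(a, b, 8):
        return "8-bigram"
    if bigram_match(a, b, 4):
        return "4-bigram"
    return None

print(cognate_category("autorite", "autoritate"))  # 4-gram
print(cognate_category("mars", "martie"))          # 3-gram
print(cognate_category("objet", "obiect"))         # 4-bigram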
Our method mainly follows three stages. First, we apply a set of empirically established orthographic adjustments between French and Romanian lemmas, such as removing diacritics and detecting phonetic mappings (see Table 2). As French uses an etymological writing system while Romanian writing is phonetic, we identify phonetic correspondences between lemmas and make some orthographic adjustments from French to Romanian; for example, for cognates such as phase 'phase', we map French graphemes onto their Romanian counterparts.

Table 2: French-Romanian cognate orthographic adjustments
qu -> c: cinq - cinci; équilibre - echilibru; marquer - marca; qualité - calitate; pratique - practic
intervocalic s (v + s + v) -> z (v + z + v): présent - prezent
w -> v: wagon - vagon
y -> i: yaourt - iaurt

Secondly, we apply seven cognate extraction steps (see Table 3), extracting cognates according to categories 1-3 (a-d) above. In order to decrease the noise of the cognate identification method, we apply two supplementary strategies. First, we filter out ambiguous cognate candidates (a single source lemma occurring with several target candidates) by computing their frequencies in the corpus and keeping the most frequent candidate pair. This strategy is very effective at increasing precision, but it may decrease recall in certain cases: there are French-Romanian cognates with one form in French but two forms in Romanian (information 'information' vs. informaţie or informare; manifestation 'manifestation' vs. manifestaţie or manifestare). We recover these pairs by using regular expressions based on specific lemma endings (-ion (FR) vs. -ie|-re (RO)). Second, we delete the reliable cognate pairs (those extracted with high precision) from the input data at the end of each extraction step, thereby disambiguating the input. For example, the identical cognates transport vs. transport, obtained in a previous extraction step and deleted from the input data, eliminate the occurrence of the candidate transport vs. tranzit as a 4-gram cognate in a later extraction step.
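The orthographic adjustment stage can likewise be sketched (again an illustrative approximation, not the authors' code; the rule set below covers only the mappings listed in Table 2, and the function names are hypothetical):

import re
import unicodedata

def strip_diacritics(word):
    # e.g. 'équilibre' -> 'equilibre'
    nfkd = unicodedata.normalize("NFKD", word)
    return "".join(c for c in nfkd if not unicodedata.combining(c))

def adjust_french(lemma):
    # normalize a French lemma towards Romanian spelling before comparison
    lemma = strip_diacritics(lemma)
    lemma = lemma.replace("qu", "c")                          # qualite -> calite
    lemma = re.sub(r"(?<=[aeiou])s(?=[aeiou])", "z", lemma)   # present -> prezent
    lemma = lemma.replace("w", "v")                           # wagon -> vagon
    lemma = lemma.replace("y", "i")                           # yaourt -> iaourt
    return lemma

print(adjust_french("qualité"))  # 'calite', sharing a 4-character prefix with 'calitate'
print(adjust_french("présent"))  # 'prezent', identical to the Romanian lemma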
These strategies allow us to increase the precision of our method. Some examples of correctly extracted cognates: autorité 'authority' (FR) - autoritate (RO); disposition 'layout' (FR) - dispoziţie (RO); directive 'directive' (FR) - directiv (RO). We also eliminate some false friends. We apply the same method to cognates having POS affinity (N-V; N-ADJ), keeping only 4-gram cognates, owing to a significant decrease in precision for the other categories (3-grams, 8-bigrams and 4-bigrams). Finally, we recover the initial cognate lemmas for both languages.
Evaluation
We evaluate our method on the parallel corpus of 1,000 sentences described in the previous section. We compare the results with two other methods (see Table 4): a) a method based exclusively on 4-grams; b) a combination of the 4-gram approach and the orthographic adjustments. We manually built a reference list of 2,034 cognate pairs from the studied parallel sentences and compared the extracted cognate list against it. Our method extracted 1,814 correct cognate pairs out of a total of 1,914 extracted pairs, which represents a precision of 94.78%. The 4-gram method has good precision (90.85%) but low recall (47.84%). The orthographic adjustment step significantly improves the recall of the 4-gram method. The various extraction steps using statistical techniques and linguistic filters, applied after the orthographic adjustment step, improve both recall (from 72.42% to 89.18%) and precision (from 91.55% to 94.78%). These results show that the use of linguistic information provides better results than purely statistical methods.
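For concreteness, the reported precision and recall follow directly from the counts above; a minimal sketch (not the authors' evaluation script):

def precision_recall(extracted, reference):
    # extracted, reference: collections of (fr_lemma, ro_lemma) pairs
    extracted, reference = set(extracted), set(reference)
    correct = extracted & reference
    return len(correct) / len(extracted), len(correct) / len(reference)

# With the counts reported above:
print(1814 / 1914)  # precision, about 0.9478
print(1814 / 2034)  # recall, about 0.8918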
Conclusions and Further Work
We have presented a cognate identification module for two morphologically rich languages, French and Romanian. Cognates are very important elements used by a lexical alignment system. Thus, we aim to obtain high precision and recall for our cognate identification method by combining statistical techniques and linguistic information. We show that an orthographic adjustment step between French-Romanian bilingual lemma pairs, together with linguistic filters, significantly improves the module's performance. The cognate identification method is integrated into a French-Romanian lexical alignment module. The alignment module is part of a larger project aiming to develop a French-Romanian factored phrase-based statistical machine translation system.
Angiotensin Receptor Blockers and the Risk of Suspected Drug-Induced Liver Injury: A Retrospective Cohort Study Using Electronic Health Record-Based Common Data Model in South Korea
Introduction: Angiotensin receptor blockers are widely used antihypertensive drugs in South Korea. In 2021, the Korea Ministry of Food and Drug Safety acknowledged the need for national compensation for a drug-induced liver injury (DILI) after azilsartan use. However, little is known regarding the association between angiotensin receptor blockers and DILI.
Objective: We conducted a retrospective cohort study in incident users of angiotensin receptor blockers from a common data model database (1 January, 2017 to 31 December, 2021) to compare the risk of DILI among specific angiotensin receptor blockers against valsartan.
Methods: Patients were assigned to treatment groups at cohort entry based on the prescribed angiotensin receptor blocker. Drug-induced liver injury was operationally defined using the International DILI Expert Working Group criteria. Cox regression analyses were conducted to derive hazard ratios, and the inverse probability of treatment weighting method was applied. All analyses were performed using R.
Results: In total, 229,881 angiotensin receptor blocker users from 20 university hospitals were included. Crude DILI incidence ranged from 15.6 to 82.8 per 1000 person-years across treatment groups; most cases were cholestatic and of mild severity. Overall, the risk of DILI was significantly lower in olmesartan users than in valsartan users (hazard ratio: 0.73 [95% confidence interval 0.55-0.96]). In monotherapy patients, the risk was significantly higher in azilsartan users than in valsartan users (hazard ratio: 6.55 [95% confidence interval 5.28-8.12]).
Conclusions: We found a significantly higher risk of suspected DILI in patients receiving azilsartan monotherapy compared with valsartan monotherapy. Our findings emphasize the utility of real-world evidence in advancing our understanding of adverse drug reactions in clinical practice.
Supplementary Information: The online version contains supplementary material available at 10.1007/s40264-024-01418-4.
Introduction
Angiotensin receptor blockers (ARBs) are first-line treatments for hypertension, and they are among the most used antihypertensive drugs in South Korea [1-3]. They lower blood pressure by antagonizing the effect of angiotensin II on AT1 receptors, effectively blocking the downstream renin-angiotensin-aldosterone system [4, 5]. Common adverse drug reactions associated with ARBs include hyperkalemia, hypotension, and renal dysfunction [5].
In 2020, a request for national compensation for an azilsartan-induced liver injury case was submitted to the Korea Institute of Drug Safety and Risk Management (KIDS). The case was investigated according to the national regulations on the adverse drug reaction relief system [6], and the need for compensation was acknowledged in 2021. Azilsartan is the newest among ARBs, approved by the US Food and Drug Administration on 25 February, 2011, and by the Korea Ministry of Food and Drug Safety on 26 May, 2017 [7, 8]. Moreover, this drug has been approved by the European Medicines Agency [9] and the Japanese Pharmaceuticals and Medical Devices Agency [10] and is used worldwide. As of 2022, azilsartan is the only drug in its class in South Korea with no warning against possible liver dysfunction on its product label [8]. The situation is similar in other countries, including the USA and several European countries [9, 11].
Drug-induced liver injury (DILI) is rare, with an incidence of 2.4-13.9 per 100,000 globally [12]. It is a major concern in the pharmaceutical industry, as it is the most frequent cause of post-market safety-related drug withdrawals, yet it is rarely detected in randomized controlled trials [12, 13]. Some forms of DILI can be life threatening, and these are often unpredictable and independent of the dose, route, or duration of exposure [12, 13]. DILI mimics a wide spectrum of liver diseases, and its biological mechanisms are poorly understood [12, 13].
Little is known about ARB-induced liver injuries, and only a handful of individual case reports are available [14-16]. Motivated by the authority's decision to compensate for the azilsartan-induced liver injury, we aimed to compare the risk of DILI among specific ARBs against valsartan by analyzing an electronic health record-based common data model (CDM) database. The findings of this study can provide methodological insights into comparing the risk of DILI using big data and real-world evidence for enhancing patient safety.
Data Source
This study utilized the medical record observation and assessment of drug safety (MOA) CDM, a standardized and distributed data network that allows multicenter analysis of de-identified electronic health record data collected from university hospitals (MOA CDM data partners) in South Korea [17]. The MOA CDM is coordinated by KIDS, a national organization affiliated with the Korea Ministry of Food and Drug Safety, and contains medical records of over 37 million patients from 30 data partners as of 2023 [17]. Patients are registered in the database on the day of their first visit to a hospital. For this study, we collaborated with 20 data partners, of which 10 are in the capital city (Seoul) and 7 in the nearby province (Gyeonggi-do).
Study Design and Population
We performed a retrospective cohort study of incident ARB users who were at least 18 years of age and had initiated ARB treatment between 1 January, 2018 and 30 June, 2021. The first prescription date of the ARB was defined as the index date. Patients were censored when they experienced the study outcome, switched to another ARB, or at 6 months from treatment initiation, as the azilsartan-induced liver injury case submitted to KIDS occurred approximately 6 months after the first use of the drug [18].
We excluded patients who were younger than 18 years of age or were prescribed multiple ARBs at the index date. Additionally, patients were excluded if they were prescribed any ARB within 3 months before the index date; had a suspected DILI (the study outcome) or clinically significant conditions that could interfere with the interpretation of the study results within 1 year before the index date; or had a serious hepatobiliary condition or were pregnant, which is a contraindication for ARB use, within 1 year before the index date or during the study period (study figure available in the Electronic Supplementary Material [ESM]). The decision to exclude pregnant patients was based on the significant contraindication of ARBs during pregnancy, as discontinuation of ARBs is common practice when planning pregnancy. This exclusion was applied to mitigate potentially serious mis-estimation of follow-up time in these cases. To further validate this approach, we examined the number of pregnant patients during the entire data period, which was 0.028% of ARB users and would not have a significant impact on the overall study results.
Exposure Variable (ARBs)
Patients were assigned to treatment groups according to the ARB prescribed at the index date. Nine ARBs were marketed in South Korea during the study period: azilsartan, eprosartan, telmisartan, fimasartan, valsartan, olmesartan, losartan, irbesartan, and candesartan.
Outcome Variable (Suspected DILI)
We operationally defined DILI by adapting the clinical chemistry criteria provided by the International DILI Expert Working Group: alanine aminotransferase (ALT) ≥ 5× the upper limit of normal (ULN), alkaline phosphatase (ALP) ≥ 2× ULN, or ALT ≥ 3× ULN with total bilirubin (TBL) > 2× ULN. After investigating the ULN standards of the data partners, the following values were selected: ALT, 40 U/L; ALP, 117 U/L; and TBL, 1.2 mg/dL. Aspartate aminotransferase levels were not assessed as they may not specifically indicate liver injury [19].
The type of DILI was classified using the R ratio ([ALT/ALT ULN]/[ALP/ALP ULN]) as follows: hepatocellular (R ratio ≥ 5), mixed (R ratio 2-5), or cholestatic (R ratio ≤ 2) [19]. The severity of DILI was classified as follows: mild (ALT ≥ 5× ULN or ALP ≥ 2× ULN, and TBL < 2× ULN); moderate-severe (ALT ≥ 5× ULN or ALP ≥ 2× ULN, and TBL ≥ 2× ULN); and fatal (any all-cause death within 1 year after the incident DILI) [19]. The operational definitions for DILI were further reviewed by clinical experts and researchers with expertise in liver injury. As patient identification information is pseudonymized in the CDM database, medical chart reviews were not feasible. Therefore, we note that the cases detected in our study are all suspected cases for which no additional validation was conducted.
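As an illustrative sketch only (the study's analyses were performed in R, and this is not the study's code), the operational definition and the R-ratio classification above can be expressed as follows, using the ULN values selected in the study (ALT 40 U/L, ALP 117 U/L, TBL 1.2 mg/dL):

ALT_ULN, ALP_ULN, TBL_ULN = 40.0, 117.0, 1.2

def is_suspected_dili(alt, alp, tbl):
    # International DILI Expert Working Group clinical chemistry criteria
    return (alt >= 5 * ALT_ULN
            or alp >= 2 * ALP_ULN
            or (alt >= 3 * ALT_ULN and tbl > 2 * TBL_ULN))

def dili_type(alt, alp):
    r = (alt / ALT_ULN) / (alp / ALP_ULN)  # R ratio
    if r >= 5:
        return "hepatocellular"
    return "mixed" if r > 2 else "cholestatic"

def dili_severity(alt, alp, tbl):
    # "fatal" (all-cause death within 1 year) requires follow-up data, omitted here
    if (alt >= 5 * ALT_ULN or alp >= 2 * ALP_ULN) and tbl >= 2 * TBL_ULN:
        return "moderate-severe"
    return "mild"

print(is_suspected_dili(alt=250, alp=100, tbl=1.0))  # True (ALT >= 5x ULN)
print(dili_type(alt=250, alp=100))                   # hepatocellular (R is about 7.3)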
Alternative Causes of Liver Injury
As the diagnosis of DILI largely depends on the exclusion of alternative causes of liver injury, we listed clinical conditions that could be potential alternative causes of liver injury based on the previous literature [13]. Following review by clinical experts, the conditions were categorized as follows: conditions to be excluded at baseline and adjusted for during the follow-up (clinically significant conditions); conditions to be completely excluded at baseline and during the follow-up (serious hepatobiliary conditions); and other conditions to be adjusted for at baseline and during the follow-up. In addition, we adjusted for hepatotoxic drugs by class at baseline and during the follow-up, defined as drugs with a LiverTox DILI-likelihood score of A (well known) or B (known or highly likely) (ESM).
Covariates
Patient demographics (sex, age, and enrollment year), encounter records (hospitalizations, outpatient visit days, and emergency room visits), the Charlson Comorbidity Index, comorbidities, and prescription histories were included as baseline covariates. Additionally, the predetermined potential alternative causes of liver injury and the anti-hypertensive drug classes prescribed during the follow-up were also included as covariates.
Statistical Analysis
Descriptive analyses were conducted to summarize patient characteristics, treatment patterns, and the characteristics of DILI. The number of patients by status over time was collected from each data partner to calculate pooled incidence rates. Cox proportional hazards models were used to derive hazard ratios (HRs) of DILI and to compare the risk of specific ARBs against valsartan. Subgroup analyses were conducted by treatment pattern and by data partner to compare study populations. Valsartan was selected as the reference drug as it is the most used ARB in Korea [21]. To minimize selection bias, we applied propensity score-based inverse probability of treatment weighting (IPTW) and derived the average treatment effect [22, 23]. Additionally, covariates collected during the follow-up were included in the regression model to reduce confounding. From each data partner, coefficients, standard errors, and confidence intervals (CIs) from the Cox proportional hazards models were collected to conduct a meta-analysis. We evaluated the average treatment effect with a random-effects model in the meta-analysis to account for the population variance of each data source.
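A rough Python illustration of the pooling step follows (the study itself used R; this sketch is not the study's code, and the inputs are hypothetical). Each site fits its own IPTW-weighted Cox model locally and shares only coefficients and standard errors, which can then be pooled with a DerSimonian-Laird random-effects model:

import math

def random_effects_pool(log_hrs, ses):
    # DerSimonian-Laird random-effects pooling of per-site log hazard ratios
    w = [1 / se ** 2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * b for wi, b in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, log_hrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_hrs) - 1)) / c)         # between-site variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * b for wi, b in zip(w_re, log_hrs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

# Hypothetical log(HR) estimates and standard errors from three sites:
hr, ci = random_effects_pool([0.25, 0.10, 0.40], [0.12, 0.20, 0.15])
print(hr, ci)  # pooled HR and its 95% CI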
Characteristics of the Study Population
A total of 229,881 ARB users were included in this study. Of these, valsartan (21.9%) was the most used, followed by telmisartan (17.9%), olmesartan (14.6%), and losartan (14.6%); azilsartan (1.1%) and eprosartan (0.4%) were the least common (Fig. 1). The mean age of the study population was 64.8 years, ranging from 61.3 years (azilsartan) to 68.7 years (eprosartan). There were slightly more men than women (53.2%). More than half of the patients had a history of hospitalization (59.2%) and fewer than five outpatient visits (53.7%), whereas the majority had no history of emergency room visits (89.4%). The mean Charlson Comorbidity Index of the total population was 1.5, which is within the range of mild severity [38]. Overall, diabetes without chronic complications (16.9%) was the most common comorbidity, followed by diabetes with chronic complications (15.7%), malignant tumors (14.4%), and cerebrovascular diseases (11.2%). Across treatment groups, there were significant differences in the prevalence of comorbidities, likely reflecting the indications approved for each ARB (p < 0.001). Overall, the patients took a mean of 17 prescribed medications (Table 1).
After IPTW, standardized mean differences between treatment groups were well within the recommended ranges, indicating that the baseline characteristics were well balanced, except in the eprosartan group, likely owing to its small sample size, which was less than 1% of the study population (Table 1). Across data partners, the median follow-up duration varied from 61.5 days to 96 days (details of the study population comparison by data partner are available in the ESM).
Anti-Hypertensive Treatment Patterns During the Follow-Up
In total, 37.1% of patients had less than 1 defined daily dose (DDD), 35.5% had 1-2 DDDs, and 27.4% had more than 2 DDDs of ARBs. Notably, the proportion of high-dose prescriptions (> 2 DDDs/day) was significantly higher in the azilsartan and candesartan groups, and the proportion of long-term prescriptions (≥ 60 days) was highest in the azilsartan group. The overall proportion of monotherapy was 27.5%, and this proportion was significantly higher in the irbesartan and azilsartan groups than in the other groups. In total, 41,270 patients (18.0%) were lost to follow-up owing to switching within the ARB class; this proportion was highest in the eprosartan group (38.5%) (all p < 0.001) (Table 2).
Safety of ARBs Versus Valsartan
Overall, the crude incidence of DILI was 48.4 per 1000 person-years in ARB users. Most cases were cholestatic and of mild severity (Table 3). Notably, DILI frequency was highest within the first 4 weeks in all groups except the azilsartan group (ESM). In the IPTW Cox proportional hazards model analysis with additional covariate adjustment at the follow-up, the risk was significantly lower in olmesartan users than in valsartan users (HR: 0.73 [95% CI 0.55-0.96]). No significant differences were observed among the other treatment groups (Table 4).
By anti-hypertensive treatment pattern, the risk in patients receiving azilsartan monotherapy was significantly higher than in those receiving valsartan monotherapy (HR: 6.55 [95% CI 5.28-8.12]). A significantly higher risk was also found in patients who received azilsartan and diuretics compared with those who received valsartan and diuretics (HR: 1.63 [95% CI 1.27-2.09]). No significant dose-dependent trend was observed across the treatment groups (Table 5).
Discussion
In this large-scale observational study of 229,881 ARB users from 20 university hospitals, the most common type of liver injury was cholestatic. Importantly, we found that the risk of DILI was significantly higher in patients receiving azilsartan monotherapy compared with valsartan monotherapy. Our findings add new value to current anti-hypertensive therapies, as post-market data on ARB-induced liver injury are scarce. Although ARBs are generally considered safe, with a low risk of liver injury, our findings can help in understanding ARB-induced liver injury. The significantly higher comparative risk of DILI in patients receiving azilsartan monotherapy may be associated with the higher proportion of long-term and high-dose prescriptions found in this group. Azilsartan is known to have greater antihypertensive effects than other ARBs, owing to its unique binding behavior at the AT1 receptor: its 5-oxo-1,2,4-oxadiazole moiety induces stronger inverse agonism [39, 40]. The moiety also makes it more lipophilic than other ARBs, and it requires metabolism via cytochrome P450 2C9 [7, 41]. Despite its hepatic metabolism, no notable hepatic adverse events were detected in randomized clinical trials, and the safety profiles obtained were similar to those of other ARBs in this class [41, 42]. It is possible that the low incidence of DILI made its detection difficult in the trials.
We also found that the comparative risk of DILI was significantly lower in olmesartan users than in valsartan users. In a previous study, olmesartan prevented hepatic steatosis and fibrosis in diabetic mice via inhibition of apoptosis signaling. In another pre-clinical study, the administration of olmesartan significantly improved liver function and decreased hepatic oxidative stress and inflammatory cytokines [43]. Because of the current lack of real-world evidence in patients, further interpretation of these findings is limited.
In the assessment of the characteristics of suspected DILI by ARB, most cases were cholestatic and of mild severity. Importantly, we found no differences in the type or severity of DILI by ARB. Cholestatic liver injuries are more common in older adults and require longer recovery than other types. Our findings highlight that close monitoring after ARB use may help prevent unnecessary progression to chronic liver disease [39]. In the subgroup analysis based on the prescribed dose, no significant dose-dependent trend was observed across the treatment groups, suggesting that ARB-induced liver injuries are most likely idiosyncratic, unlike DILI secondary to drug overdose [19]. The pathophysiology of idiosyncratic DILI is still poorly understood, is unexpected given a drug's pharmacological action, and largely depends on patient-specific factors that increase susceptibility to liver injury [12, 19]. We also assessed the temporal pattern of DILI occurrence and found that the largest number of DILI cases occurred within the first 4 weeks in all groups except the azilsartan group, suggesting that the pathophysiology of liver injury from azilsartan may differ from that of other ARBs.
This study has some limitations. First, the results of our observational study may have been affected by residual confounding factors and biases. For example, information on over-the-counter drugs, traditional medicine, and alcohol use, which was unavailable in the hospital database, could have affected the results. To minimize such risks, we applied IPTW, conducted balance diagnostics, and adjusted for patient characteristics during the follow-up. Second, our study objective was to compare the risk of DILI among specific ARBs against valsartan, which served as the control ARB in our analyses; therefore, the results provide only a relative comparison. We acknowledge that the inclusion of negative controls could have enhanced the interpretability of the results, and their inclusion can be considered in future investigations. Third, because no further adjudication was conducted for the detected DILI, our statistics were based on the number of suspected DILI cases. The specificity of the detected DILI could have been improved by further adjudication using medical chart reviews. Furthermore, given the rarity of the outcome, formal phenotyping reflecting the local setting would have added value. To overcome such limitations, we applied the stringent criteria provided by the International DILI Expert Working Group, whose superior performance was previously demonstrated in comparison with other algorithms such as those of the Council for International Organizations of Medical Sciences and the Drug-Induced Liver Injury Network [20]. In addition, because patient identifiers cannot be shared across data partners, the same patient visiting more than two hospitals may be counted more than once. Finally, the results should be interpreted with caution owing to the limited sample size, especially for eprosartan and azilsartan users, and because our study population consisted of patients who visited tertiary hospitals, which could limit generalizability. Despite these limitations, we have successfully assessed the characteristics of suspected DILI in ARB users and derived the relative risks among ARBs in a real-world clinical setting in Korea.
Conclusions
We found a significantly higher risk of suspected DILI in patients receiving azilsartan monotherapy compared with valsartan monotherapy. Our findings underscore the valuable role of real-world evidence in regulatory decision making.
Table 1: Baseline characteristics by angiotensin receptor blocker treatment groups (values are N unless otherwise indicated; p-values from the chi-square test for categorical variables and the t-test for continuous variables; missing values not indicated; b: prescribed at least once within 3 months before the index date).
Table 2: Anti-hypertensive treatment patterns during follow-up by ARB treatment groups.
Table 3: Drug-induced liver injury incidence per 1000 person-years by ARB treatment groups ("fatal" denotes all-cause death within a year after incident drug-induced liver injury; CI, confidence interval).
Table 4: Relative risks of drug-induced liver injuries by ARB treatment groups (CI, confidence interval; HR, hazard ratio; bold indicates statistical significance, p < 0.05; Model 1: crude HR; Model 2: inverse probability of treatment weighting with adjustment for hepatic, biliary, and pancreatic conditions, use of hepatotoxic drugs, and antihypertensive drugs collected during the follow-up).
Table 5: Comparison of risk of drug-induced liver injuries by ARB treatment patterns (model: inverse probability of treatment weighting with adjustment for hepatic, biliary, and pancreatic conditions, use of hepatotoxic drugs, and antihypertensive drugs collected during the follow-up).
What did the digital age mean for privacy in the United States?
Over the course of the last three decades, the world has seen monumental shifts in how information is collected, transmitted, and disseminated. Every aspect of our personalities that lives on the internet, including our browser history, the photos we post to social media, our shopping decisions and our selection of online friends, has been collated, quantified, and assimilated into a digital profile, which has skyrocketing value to an increasing number of businesses. With these developments in technology come the inevitable questions of ownership of such data, its use, misuse and even possible theft. This paper takes a comprehensive and comparative look at data privacy legislation in the two largest data hubs in the world, namely the United States and the European Union. The paper also seeks to address the shortcomings of certain past legislative decisions and makes a recommendation for the future. To do this, we analyze the events of the past, using the 2016 Facebook/Cambridge Analytica data scandal as a focal point. On analyzing the major differences between American privacy law and the preeminent document on data privacy at the time, namely the General Data Protection Regulation (GDPR), we conclude that data privacy in the United States is in its nascent stages and in dire need of an overhaul. The California Consumer Privacy Act is the legislation that comes closest to mimicking the function of the GDPR, albeit at a much smaller scale. Other remedies include the American Data Privacy and Protection Act (ADPPA), which is already under consideration by Congress, and a state-by-state approach.
Introduction
With legally challenging privacy questions arising frequently in major news headlines and in the judiciary system of the United States, it has become routine for the public to find subjectively easy answers to objectively hard cases. However, there is nothing simple about drawing lines around privacy. In the volatile age of ransomware, government surveillance, and big data, two things remain true. First, Americans feel unprotected and out of control when it comes to their personal data and online privacy. Research conducted by Pew found that 81% of U.S. adults say that "they have very little or no control over the data that… companies collect about them," yet the results do not end here (Auxier et al. 2019). Pew also found that a similar percentage of Americans are concerned with the risk of their data being collected, the lack of control over their data, and the tracking of their actions online (Auxier et al. 2019). While Pew research has also found that 75% of U.S. adults believe there should be more government regulation protecting consumer data, our second truth highlights that legal privacy is in constant flux (Auxier et al. 2019).
Throughout history, it has been held that the magnitude of rights expands and contracts according to the will of the people and those in power. In one of the earliest writings on American privacy, future Supreme Court Justice Louis Brandeis stated that the "development of law was inevitable" (Warren et al. 1890). In accordance with Brandeis's statement, the law is never static; instead, it is constantly in a tug of war between parties with different agendas. The question with privacy is not whether it is in a state of contention, but rather whether the policy change necessary to pull privacy in favor of the individual will be implemented in time. Before a discussion of the future of privacy in the United States can occur, however, it is necessary to understand the current judicial and legislative position on privacy and the critical cases that established the principle.
Background: Data Privacy in the Courts
Griswold v. Connecticut, 381 U.S. 479 (1965) was the landmark decision in which the Supreme Court found that the right to privacy is established from penumbras found in the Bill of Rights. The Court ruled that there existed a "zone of privacy" created by the inferred right to privacy, and that an individual could not be forced by the government to relinquish it. Katz v. United States, 389 U.S. 347 (1967) furthered the right to privacy by extending the interpretation of the Fourth Amendment to "protect people, not places." A bound on these privacy rights is found in Whalen v. Roe, 429 U.S. 589 (1977). It was here that the Supreme Court found that collecting and storing sensitive patient information is not a violation of privacy covered by the Fourteenth Amendment. It was also found that the doctor-patient relationship is not within the zone of privacy.
The following cases document a relatively new extension of privacy litigation that focuses on the unconstitutional procurement of data. The concept of the "third-party doctrine" was established by United States v. Miller, 425 U.S. 435 (1976). Under this reasoning, an individual should not "reasonably expect privacy in information they willingly disclose to third parties." Kyllo v. United States, 533 U.S. 27 (2001) found that technological searches of a home by the government are unconstitutional under the Fourth Amendment when the device is not in "general public use." This finding was intended to protect individuals from being "at the mercy of advancing technology." In Carpenter v. United States, 585 U.S. ____ (2018), it was held that the warrantless seizure of Timothy Carpenter's cell-site evidence violated his Fourth Amendment right against search and seizure. Carpenter simultaneously restricts the power of the "third-party doctrine" by deciding that simply because data is "held by a third party does not by itself overcome the user's claim to" protections under the Fourth Amendment; instead, these protections must be voluntarily reduced.
This brief history is meant to prepare the reader for the complex and contrasting nature of privacy within the federal courts. As the examples above show, the courts rely on longstanding legal precedents that did not envision the technological privacy battles currently making front pages. This forces the courts to derive creative rulings from outdated provisions and tests that do not always put the protection of the people at the center of the decision.
Data Privacy in the Legislatures
Legislatures across the globe have voiced growing concerns over citizens' rights to control their own personal data. These concerns give rise to a multitude of questions: Is an individual's personal data considered that individual's property? If so, should individuals be compensated when their data is used for the economic gain of a third party? Do individuals maintain ownership of their data when personal information is used without their knowledge? Are individuals allowed to demand their data be erased from databanks or archives at their discretion? (What is personal data?, 2022) It is not easy to answer these questions under the purview of existing legislation, or to map what a future legal framework protecting citizens' privacy on the internet must enshrine. Indeed, this is a complex question rooted in technology that evolves many times faster than any law passed to protect those individuals. However, there are foundational principles that can guide the discussion.
In a seminal article on the right to privacy written in 1890, future Supreme Court Justice Louis D. Brandeis put it this way: "The common law secures, to each individual, the right of determining, ordinarily, to what extent his thoughts, sentiments and emotions shall be communicated to others." Over the next 100 years, the concept grew to include "[t]he right to informational privacy," succinctly defined as "the right of the individual to maintain control over personal information concerning one's 'physical and individual characteristics, knowledge, capabilities, beliefs and opinions.'" From that principle, it is a natural extension to say that an individual may also claim certain rights. Specifically, privacy is the "claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." Perhaps the most fundamental question to ask remains one of property rights: is data property at all? While there is no comprehensive federal law related to data privacy in the United States, we can look to the European Union's General Data Protection Regulation (GDPR) for potential guidance. Article 4, Clause 1 of the GDPR defines personal data as: "any information relating to an identified or identifiable natural person ('data subject'); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person." Classifying an individual's personal data as personal property has multiple positive ramifications. Personal data includes data in which an individual has a reasonable expectation of privacy, such as preferences on various aspects of life like religion and sexuality, information regarding ethnicity on online employment applications, and conversations that may be had over the internet (for example, via a text messaging service or a social media network like Instagram). This may even extend to more confidential information like credit card numbers. Perhaps the most significant ramification is the well-established right of an individual over personal property, an institution of thought that began with jurists like Bartolus of Sassoferrato, who wrote in the fourteenth century. Bartolus defined property (dominium) as the "right of complete control over a physical object, to the extent not prohibited by law" (ius de re corporali perfecte disponendi nisi lege prohibeatur). This very definition was later expanded by Bartolus himself into one that has widespread implications in today's world. Property, he said, "may be used to refer in the broadest sense to every incorporeal right, as in 'I have property in an obligation, for example in a usufruct'" (potest appellari largissime pro omni iure incorporali, ut habeo dominium obligationis, utputa usufructus). This establishes a natural right of privacy over an individual's personal property (in this case, personal data). Most notably, this was expanded in "The Right to Privacy" by Warren and Brandeis in the Harvard Law Review in 1890, as the "right to be let alone," arguing that "the principle which protects personal writings and any other productions of the intellect or the emotions, is the right to privacy."
What modern technology has created is a situation in which the law is forever behind the bounds of technology. In practice, this means that the definitions of privacy and property are not matched with the way we actually articulate and use them. The challenge has been expanding existing definitions of property to accommodate data and what it includes. All it takes is one data breach to remind us how important these protections are, and how much their absence impacts each of us.
Is Data Property?
Before we can fully address whether there is a privacy right in an individual's data, we should first examine whether data is property. That requires a clear understanding of what we mean by data.
Data takes multiple forms, some classified as general facts or information. Data generally has no restrictions imposed that defend individuals regarding its collection and publication. However, it is the data concerning one's "physical and individual characteristics, knowledge, capabilities, beliefs and opinions," as in Downing v. Municipal Court of San Francisco, that is of note here. The word 'property' has been the subject of innumerable definitions, and in Downing, the court took the position that "the word property is all embracing, so as to include every intangible benefit and prerogative susceptible of possession or disposition". This interpretation of property was expanded in Kremen v. Cohen, where the Ninth Circuit applied a three-part test for the existence of property rights: "First, there must be an interest capable of precise definition; second, it must be capable of exclusive possession or control; and third, the putative owner must have established a legitimate claim to exclusivity." Data meets this test because it has a precise definition (see the GDPR definition above). It is also exclusively controlled by the owner, with a license to those the data is given, sold, or shared with. And finally, personal data is personal by its very nature: it is owned by the person the data describes unless an alternative agreement is reached. As property I own, I can sell it to someone if I decide it is valuable. The corollary is that I continue to own that data unless I choose to sell or license it to someone.
Because data is property, the rights that define property are then naturally extended to data, including the right to "use it as one wishes, to sell it, give it away, leave it idle or destroy it". These rights tend to entail the following:
The Right to Use as One Wishes
When personal data (i.e., the data used to identify an individual on the internet, also called a digital fingerprint) is communicated to a third party, the user/owner has a reasonable expectation that the third party will keep that data confidential. It is more interesting to look at the expectation that the law has of the third party. "The common law secures to each individual the right of determining, ordinarily, to what extent his thoughts, sentiments, and emotions shall be communicated to others." The law has also upheld that the provision of personal data to a third party does not transfer the ownership of the data from the user to the third party, as evidenced by the Ninth Circuit's ruling in hiQ Labs v. LinkedIn, wherein the appellate court held that the members had a privacy interest in their data that LinkedIn had to protect. The court stated that "LinkedIn has no protected property interest in the data contributed by its users, as the users retain ownership of their profiles." While the United States does not have a broad data privacy law, we can look to California and European Union law for some guidance on how the federal government could structure a law clarifying the rights individuals have to control the use of their personal data. First, the CCPA's right to opt out gives consumers in California limited rights against data-selling businesses. Specifically, it affords consumers the "right to, at any time, direct a business that sells personal information about the consumer to third parties not to sell the consumer's personal information. This right may be referred to as the right to opt-out." Under the GDPR, consumers have expanded rights that include the right to be forgotten. This regulation provides that "the data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay…"
The Right to Sell
If we can reasonably assume that an individual has the right to sell property, and that this right extends to personal data, this raises the question of assigning value to an individual's personal data. It also brings into consideration who decides that value and the parameters used to arrive at it. Another notable question is how prospective damages are to be calculated in the case of theft of an individual's personal data.
The Right to Give Away
A corollary to the right to sell is the right to give away, or transfer, one's personal data. This may be the area of property law where arguably the largest differences exist between tangible physical property and intangible personal data. The concept of transferability states that property can be "assigned, sold, transferred, conveyed or pledged", leading to the natural conclusion that that which cannot be transferred is not property. In the matter of intangibles, the concept of 'transferability' has been interpreted to apply to a relinquishment of the owner's rights to said property, which could allow others to use data whose use may have been restricted by a right to privacy. This principle runs into obstacles when taken in the context of recent data privacy and regulation legislation, such as the CCPA and the GDPR. The wording of these laws and regulations indicates an intent that a data subject cannot relinquish their rights over personally identifiable information (PII) in their entirety. Instead, they may license to a business the right to use that data for certain purposes, such as when Netflix uses your geographic location to recommend a movie that is trending in your country, or Amazon uses your location to alert you that a product may not reach your address within a stipulated timeframe.

The Facebook/Cambridge Analytica Scandal

The Cambridge Analytica scandal created global shockwaves for a multitude of reasons. Some are well known, such as the stark lack of privacy that a user of the popular social media network Facebook could expect, or the sheer volume of users (approximately 50-65 million according to Ingram, 2018) who had their data mined, processed, and misappropriated. One reason is less well known: the nascent state of American federal data protection legislation when compared with that of the European Union or the United Kingdom.
Why does this matter? The Facebook/Cambridge Analytica data breach has been alleged to have contributed to Trump winning the presidency in 2016. Beginning in 2013, Aleksandr Kogan, a professor and data researcher at Cambridge University, developed an application for a personality quiz named "This is Your Digital Life". The application appeared on the social media network Facebook in 2014 and claimed to its users that the "results of the quiz would be used for academic purposes". Approximately 270,000 people consented to divulging personal data, and data about their Facebook friends, which was permitted at the time under Facebook's policies (subject to users' individual privacy settings). Most did not realize they were divulging access to their personal data stored on the app as well as giving access to their friends' data, although that was included in the terms and conditions of using the quiz.
An article in The Guardian in December 2015 alleged that Kogan had sold confidential information mined through the "This is Your Digital Life" app to Cambridge Analytica through his company Global Science Research (GSR). This was a clear violation of Facebook's policies. In the weeks and months that followed, Cambridge Analytica developed psychological profiles of tens of millions of US voters to support Ted Cruz's presidential campaign, using the data sold to it by Kogan. Following the publication of the article, Facebook removed the application from its site and privately asked GSR and Cambridge Analytica to delete the data stored about the users, and it was assured that the pertinent information had been deleted. However, Facebook did not take steps to confirm this. Three years later, stories published by the New York Times and The Guardian alleged that Cambridge Analytica had lied when it said the data had been deleted and had instead used it in connection with President Donald Trump's campaign. Cambridge Analytica, its parent company, and the relevant employees were suspended from the Facebook platform.
These revelations resulted in a series of major legal actions, each offering a different perspective on the current state of data privacy legislation within the United States. The first was David Carroll's formal legal claim against Cambridge Analytica's parent company, SCL, brought through a UK-based human rights lawyer on the advice of Paul-Olivier Dehaye, a Swiss research specialist and the founder of a digital rights non-profit. Carroll's claim was made pursuant to the provisions of the UK's Data Protection Act of 2018, which states that a data subject has the right to access personal data being processed or stored by the government or a company. Approximately a month later, Carroll was served with a response to his claim, which consisted of information including his opinions on issues like the national debt, immigration, and gun rights; however, the information was nowhere near the 5,000 data points that Cambridge Analytica claimed to have on every American voter through the data it had collected. Although the movement gained international attention and Carroll pursued legal action against the company, his data was never turned over to him. Over the course of the next two years, however, the involvement of the UK Information Commissioner's Office and the FBI proved to be a significant catalyst in expediting the process. By providing Carroll with a portion of his information, SCL had conceded that the UK's Data Protection Act applied to non-British citizens if the data was processed within the UK, as Cambridge Analytica had done. In addition, by refusing to provide Carroll with the data in its entirety, SCL violated the Act and was liable.
As of January 2020, Carroll was quoted as saying, "I haven't had my data back yet. We are awaiting the report from the UK Information Commissioner's Office, the organization responsible for regulating these matters. It is a process in which we may have to wait for notifications from the FBI and the British parliament" (Fischer, 2019). The GDPR has made considerable strides on the data protection front and "applies to the processing of personal data of data subjects who are in the Union by a controller, or a processor not established in the Union". The CCPA is the first step in the right direction for a US law but applies only in California. The lack of a federal law regarding data privacy leads to several gray areas, with little to no consistency in the rules with which organizations must comply.
A second lawsuit involved the Facebook Inc. Securities Litigation. The lawsuit was filed by purchasers of Facebook common stock between 3 February 2017 and 25 July 2018, alleging that Mark Zuckerberg, Sheryl Sandberg, and David M. Wehner deliberately misled investors about the course of dealings with Cambridge Analytica, in violation of Sections 10(b), 20(a) and 20A of the Securities Exchange Act. The suit further argued that investors were led to believe that omissions "concerning Facebook's privacy and data privacy practices" would not negatively affect Facebook's stock price during the period between March and July 2018. A third lawsuit complemented the securities lawsuit. In the Facebook Inc. Consumer Privacy User Profile Litigation, an action by social media users against Facebook, the plaintiffs alleged that Facebook shared users' personal information with third parties when it had no right to do so. Facebook filed a barrage of motions to dismiss, some of which were accepted by the court.
In both lawsuits, the court held that Facebook had no obligation to confirm the deletion of data by Cambridge Analytica and SCL, since nowhere in Facebook's data policy was there a representation that Facebook would confirm deletion. Instead, the policy represented only that Facebook would "require data to be deleted", with no guarantees about how Facebook would enforce that requirement.
This judicial ruling highlights the need for federal legislation regarding data privacy, storage, processing, and related matters. This is especially relevant when drawing parallels between the offenses Facebook was found guilty of in the US and the policies it would have been found to contravene under the GDPR. A fairly well-established provision of the GDPR under Article 17 is the Right to Erasure, or the Right to be Forgotten. Clauses (1) and (2) state that when a data subject has requested the erasure of personal data concerning them, the controller shall have the obligation to erase said personal data without undue delay, taking account of available technology and the cost of implementation, and taking reasonable steps to accomplish the same. On the surface, this seems fairly in line with Facebook's data policy concerning deletion. What sets the GDPR and the UK Data Protection Act of 2018 apart as a higher standard is the clear definition of penalties to be imposed if requests are not reasonably complied with. Article 77 grants a data subject the right to lodge a complaint with a supervisory authority, in the member state of their habitual residence, detailing the alleged infringement, following which they are to be kept updated on the progress or the outcome of the complaint, which may result in fines (to be established by the relevant member state) (Articles 83 & 84), judicial action against the defendant (Articles 78 & 79), and the right to compensation for the plaintiff (Article 82). In the case of Cambridge Analytica, the UK's Information Commissioner's Office issued an order directing the firm to supply Carroll with his data within thirty days, failure to comply with which would result in criminal charges.
The fourth action occurred at the Federal Trade Commission (FTC). The FTC found Cambridge Analytica liable on multiple counts, including its practices concerning the collection of Personally Identifiable Information (PII), its claims regarding its participation in Privacy Shield (a framework designed by the US Department of Commerce, the European Commission, and the Swiss Administration to provide companies on both sides of the Atlantic with a mechanism to comply with data protection requirements when transferring personal data from the EU and Switzerland to the United States in support of transatlantic commerce), and its subsequent adherence to the provisions of that framework. The first count, that of misrepresentation, arose out of a statement shown to anyone who downloaded the Cambridge Analytica survey on Facebook, which read: "In this part, we would like to download some of your Facebook data using our Facebook app. We want you to know that we will NOT download your name or any other identifiable information-we are interested in your demographics and likes." The court found the statement misleading, following evidence that the company had in fact harvested, downloaded, and misappropriated users' PII. The counts regarding the Privacy Shield framework and compliance with its principles stemmed from the fact that Cambridge Analytica did not renew its certification of compliance with Privacy Shield and was therefore in contravention of the policies codified by the framework. The lawsuit, although monumental and likely to set precedent for the future, was born out of the lack of a federally regulated data privacy and protection statute. It is one of the most indicative signs that the United States has fallen behind the EU and the United Kingdom in this respect; a federally regulated statute would serve as the broadest possible authority with regard to data privacy, as opposed to the currently used ad hoc patchwork system.
Why isn't everything Data Misappropriation?
In many large cases with claims of data misuse and misappropriation, it is easy to find the accused party guilty at first examination, but quick glances are not always accurate. In 2009, the American Recovery and Reinvestment Act gave the Department of Energy the ability to provide funds to cities through the Smart Grid Investment Grant program with the goal of modernizing the nation's energy grid. Naperville, Illinois was one of the cities selected under this grant program, receiving $11 million to modernize its own grid (Naperville Smart Meter Awareness v. City of Naperville 2018). In this modernization, Naperville replaced its old energy meters with "smart meters." The traditional meters would measure monthly "energy consumption in a single lump figure once per month," but the new smart meters recorded energy consumption data in "fifteen-minute intervals." Because of the distinct "load signatures" exhibited by appliances in these data measures, it can be predicted with great accuracy what appliances are in each home and at what times they are being used. Upon learning about this perceived breach of privacy, a group of citizens whose homes were now using the new smart meters created Naperville Smart Meter Awareness to bring suit against the program. Their argument alleged that the smart meter system implemented by the City of Naperville was a direct breach of the Fourth Amendment and was therefore an unlawful search and seizure of data. The United States Seventh Circuit Court looked at the following two questions to measure the validity of the plaintiffs' claim. First, was the data collection in this case truly a search? Second, was the search unreasonable as stated in the Fourth Amendment?
For the first question, the court looked specifically to the previously mentioned case Kyllo v. United States [2001] 533 U.S. 27. In Kyllo, the Supreme Court ruled that when sophisticated technology provides information that would be "unknowable without physical intrusion, the surveillance is a 'search.'" As mentioned by Smart Meter Awareness, the collection of data through the smart meters provides extremely personal data and routines that would not be accessible without a physical search. The court also noted that in Kyllo the 'search' was via thermal imaging tools and provided cruder data than the constant stream of 15-minute datapoints collected by Naperville. From these arguments the court found that the non-voluntary implementation of smart meters was indeed a 'search' of the residents' homes. However, the court still had to decide whether the collection of this data met the Fourth Amendment requirement of being unreasonable.
For this second question, the court mainly pointed to the precedent of Camara v. Municipal Court [1967] 387 U.S. 523 to examine the reasonableness of the search. While the court found the smart meters' collection of data to be a warrantless search, the court also had to consider that the City of Naperville had "no prosecutorial intent" when committing the search. In Camara, the Supreme Court took note of this intent and stated that such a search "is a less hostile intrusion" since it is not conducted to find criminal evidence, which allowed the court to apply fewer protections and focus only on the "right to be secure from intrusion into personal privacy." While the situation in Naperville resembles Camara, the court found that the threat posed by smart meters is lower than the threats at issue in Camara, given the lack of physical entry into the homes and the diminished chance of prosecution. These distinct differences in the relative chance of prosecution separated the two cases from receiving the same outcome. The court also explained the need to weigh privacy concerns against the "government interest in data collection." In this situation the court held that the role smart meters play in the modernization of the electrical grid is significant enough to warrant the collection of data from the public. For these two reasons, the court ruled that the warrantless 'search' of property through the smart meters was not unreasonable, because it served a genuine government interest without being unreasonably intrusive. However, the court noted that this ruling is a narrow one and that if minor details of the case were changed, the ruling would change with it. Nevertheless, this case shows that there are many situations in which warrantless searches through innovative technologies look unreasonable at first glance yet are found reasonable upon review by the courts. Narrow rulings, such as Naperville Smart Meter Awareness v. City of Naperville, play a large role in the general, undefined, and murky world of tech privacy in the United States.
Discussion: What does the future hold?
Data privacy is a complicated and cutting-edge issue that has been thrust further into focus by recent cases like Cambridge Analytica. The United States is currently tasked with developing a rigorous legislative backbone that defends individuals' data across the nation. Building on the regulatory successes of the GDPR in Europe and the CCPA in California, two logical regulatory approaches arise.
Congressional Legislation
The United States could create federal legislative policy that promotes and protects data privacy in a top-down approach. This style of regulation is already underway in Congress under the title of the American Data Privacy and Protection Act (ADPPA). Following very closely the groundwork set by the GDPR, the ADPPA outlines consumer data rights and corporate accountability measures to create regulation protecting consumer data under the authority of the Federal Trade Commission (FTC). The ADPPA would remedy the lack of unified federal data protection legislation, which is undoubtedly data privacy's biggest weakness in the United States. That gap is what leads to the current sectoral approach, which allows independent industries to draft and enforce data privacy rules with little to no uniformity, leading to contradictory and overlapping protection for citizens. However, there is another opportunity for privacy reform in the United States.
Code Regulation
If the ADPPA becomes stalled and does not pass via Congress, another opportunity to create nationwide regulation comes from enacting a code at the state legislative level. Like the regulatory code of the UCC, the United States could hire independent experts from institutions, such as the American Law Institute, to develop a set of regulatory codes for data privacy. This set of regulatory code would then be given to every state legislature to make individual revisions to and ultimately vote into law. While there is always the risk that multiple states could reject the code created by this body, there are many strengths in this model. Under this system, every state would be able to implement regulatory laws that protect its citizens in a broad and definitive manner, while retaining the freedom to individually expand the regulations as data becomes more complex. This adaptability allows data regulation to change continually with new problems, instead of remaining dormant because of a gridlocked Congress or other pressing federal matters. The individual changes of the states would also create a regulatory umbrella. California is perhaps the closest replicable example for American legislators, as the CCPA enshrines some of the strictest data privacy laws ever seen in the United States. It is significant not only for requiring controller/processor obligations identical to those of the GDPR, but for the way it views an individual's right to data privacy. Just as the EU regards data privacy as a fundamental human right and seeks to build the provisions of the GDPR around that central right, the California Constitution views "privacy" as an inalienable right, not to be limited by other rights.
Whether the United States' data privacy regulation is formed through congressional legislation or regulatory codes is of minor importance when compared to the necessity of any form of regulation.
Multi-Bolted Connection for Pultruded Glass Fiber Reinforced Polymer’s Structure: A Study on Strengthening by Multiaxial Glass Fiber Sheets
Pultruded Glass Fiber Reinforced Polymers (PGFRPs) are becoming a new mainstream material in civil construction because of their advantageous properties. One of their two main constituents, the glass fibers, is formed from unidirectional glass roving during pultrusion. PGFRPs do not have high shear strength, which is determined by the other constituent, the matrix. In the future, demand for enhanced serviceability of existing PGFRP structures can be seen as unavoidable. Since the multi-bolted connection is the most typical way of connecting members, strengthening the connection performance of PGFRPs is therefore necessary. Previous researchers have studied several methods for improving connection capacity, including pasting glass fiber sheets (GFS). However, experimental research is lacking for multi-bolted connections. This study investigated several specimen configurations, including the quantity of bolts (two bolts, four bolts, and five bolts); the end distance/diameter ratio (e = 2d; e = 3d) under tensile load; and three types of glass fiber sheets (GFS) (0°/90°, ±45° and chopped strand mat (CSM)). The experimental results showed the strengthening effects and the failure modes of the specimens. These findings could address the gap in knowledge concerning PGFRP composite design, through evaluation and discussion of their behavior.
Introduction
Pultruded glass fiber reinforced polymers (PGFRPs) have become the most popular FRP material, widely used in industry and construction. The advantageous properties that motivate the change from conventional materials to PGFRPs include light weight, high strength, stiffness, etc. [1,2]. Pultrusion techniques were reviewed by Bank [3]. Manufacturers use glass fiber constituents to improve the stiffness and strength of plastics.
Several advanced properties of PGFRPs, such as their resistance to chemicals, their nonmagnetic nature, their isothermal properties, their electrical conductivity, their fatigue resistance, and their easy installation, make PGFRPs an exciting alternative to traditional construction materials [3,4]. One of the largest markets for PGFRPs in the construction field is pedestrian bridges [5,6]. PGFRPs have shown long durability in various situations when subjected to long-term environmental effects [7], which allows a reduction in expenditure on maintenance work. Recently, pultruded GFRP reinforcement was used to manufacture railway sleepers [8] and concrete slabs [9], and in other general applications [10]. Other typical applications of PGFRPs include building structures and elements [11,12] and marine construction/wastewater treatment plants, overcoming the corrosion problem in severe marine or chemical environments [13,14].
Several standards exist for the design of PGFRP materials: "the Pre-Standard for Load & Resistance Factor Design (LRFD) of Pultruded Fiber Reinforced Polymer (FRP) Structures" (ASCE 2010, submitted to the American Composites Manufacturers Association (ACMA)); "Prospect for new guidance in FRP design" in 2016, which reviewed the previous guidebook; and "Structural Design of Polymer Composites" (the EUROCOMP Design Code and Handbook, 1989) [15][16][17].
The application of PGFRPs is convenient and economical, and the bolted connection is the most popular joint type for PGFRPs. During the development of PGFRP applications, more issues appeared in the structural design of bolted connections. Some studies investigated the connection problem, and their results have been highlighted [18,19]. To identify aspects of PGFRPs' bolted connection failure modes, several authors used experimental methods, and some used theoretical methods [20][21][22][23].
Ascione et al. [24] investigated the effect of fiber direction on the bearing failure strength of GFRPs with pin-bearing bolts. Three kinds of laminate were studied, with several values of the angle between the fiber direction and the external force. There were sixteen angle values for type 1 laminate and seven values for types 2 and 3. The results showed a linear decrease in the ultimate load depending on the bolt diameter. The authors proposed a formula for predicting the ultimate bearing load for different fiber angles and bolt diameters. Prabhakaran et al. [25] also conducted an experiment to study the effect of the pultrusion direction under multiple load directions. Despite differences in the types of PGFRP (vacuum bonded and pultruded) and in the off-axis angles (different values), the results of these two studies were similar.
Other authors have investigated other parameter inputs. Chao Wu et al. [26] and Persson and Eriksson [27] researched static and fatigue performance on steel and blind bolts. Cooper and Turvey [28] investigated clamping force. Wang [29] studied bolt-hole size and clearance aspects.
Bearing failure is preferred because of its progressive failure process [25,26,29]. The other failure modes are brittle and catastrophic [25]. However, experimental results also showed that a pseudo-ductile shear failure became possible by increasing the end distance (Abd-El-Naby and Hollaway 1993a) [32]. Mottram and Turvey (2003) [33] demonstrated that failure modes could be changed by varying the geometric parameters, such as the end distance to bolt diameter ratio and the edge distance to bolt diameter ratio.
The major material issues, such as bolted connection or mechanical properties, were also summarized in several papers [18,34,35]. Some authors have reviewed the recent research and development trend regarding general issues of PGFRPs in civil and structure applications [35][36][37].
The joint strength in PGFRPs is commonly governed by the bolted connection rather than the profile member; the capacity of the connections, in turn, is determined by the shear or bearing strength of the material. In this study, strengthening by advanced material was investigated as a potential method for increasing the strength of PGFRP connections, in addition to end distance and bolt quantity. Some authors have increased structural strength by pasting strengthening layers onto ordinary materials, sometimes combined with an increased bolt number or end distance. Nhut et al. (2021) [38,39] conducted an experiment on strengthening a single-bolt connection with a glass fiber sheet (GFS). The result showed a noticeable increase in connection strength. GFS, which is made from glass fiber and epoxy resin, as explained in Section 3, was considered a cost-effective material for upgrading the strength of PGFRPs by Uddin (2004) [40]. Other authors have investigated other materials, including carbon nanotubes, nano clay, or metal inserts, to improve the performance of bolted connections in composite structures [41][42][43][44].
In summary, the review of the research literature clearly shows the advantageous properties and applicability of PGFRPs. Many studies have tried to improve the performance of the material in various respects, including the important factor of bolted-connection strength. However, no article has investigated strengthening PGFRPs by GFSs for multi-bolted connections. Therefore, it was necessary to implement testing and evaluation of the effectiveness of the GFS strengthening method for bolted-connection structures.
In addition, the parameters of the specimens were chosen by referring to previous studies. Some studies concluded that the bearing load of a connection is greatest when the angle between the load direction and the fiber direction is small [24,25]. This study therefore focuses on connection tests with a load-fiber angle of zero. Moreover, many authors demonstrated that failure modes could be changed by varying the geometric parameters, such as the end distance to bolt diameter ratio and the edge distance to bolt diameter ratio. In this study, we applied the GFS as a potential strengthening material under several conditions, including the two most crucial aspects: the number of bolts and the end distance of the connection area. The input parameters of the specimens included the GFS type and the end distance. Based on the testing results, this article evaluates and proposes an effective GFS strengthening method for the multi-bolted connection of PGFRP structures.
Connection System
A bearing-type connection is one where the transfer of the connection force is entirely by the bearing between the shaft(s) of the bolting and the connecting components [15]. In this study, a 21 N·m torque was applied when setting up the bolt connections for the specimens (ISO 6789-1:2017). Nevertheless, for the design of bearing-type connections, it was assumed that no force is transferred through friction between the connected elements in the connection.
Bolts and Bolt Holes
ASCE standards [15] instruct that bolts shall be of carbon or stainless steel, with specifications in accordance with ASTM standards A307, A325, or F593. Bolts shall be in the range of diameters, d, from 3/8 of an inch (9.53 mm) up to, and including, 1 inch (25.4 mm). The bolt length shall be such that the end of the bolt extends beyond, or is at least flush with, the outer face of the nut when properly installed. The length of the bolt shank with thread that is in bearing with the FRP material should not exceed one-third of the thickness of the plate component. Bolts shall be torqued to the snug-tightened condition. The slope of parts in contact with the washer, the bolt head, and the nut shall be equal to or less than 1:20 with respect to a plane perpendicular to the bolt axis.
The nominal hole diameter, d_n, shall be 1/16 of an inch (1.6 mm) larger than the nominal bolt diameter, d. Holes must be drilled or reamed. Oversized holes more than 1/16 of an inch (1.6 mm) larger than the bolt shall not be permitted, and slotted holes shall not be aligned in the primary direction of the connection force.
Bolts, bolt holes, and connection geometries were determined based on the minimum requirements of the ASCE standard [15], as shown in Figure 1 and Table 1, where d is the nominal diameter of the bolt; the minimum e_min may be reduced to 2d when the connected member has a perpendicular element attached to the end towards which the connection force is acting. In this study, the bolt is M12 and the bolt hole size is 13.5 mm. Figure 2 shows the primary in-plane failures of plate-to-plate connections, with (a) to (e) showing the different failure modes of single-bolted connections [23,28] or multi-bolted connections [25,31]. The other failure modes illustrated in Figure 2 are not desirable because their failure mechanisms are sudden. Under most geometrical arrangements it is found that bolted connections with two and three rows of bolts will have faster failure modes, either net-tension (Hassan et al., 1997) [30] or a form of block shear (Prabhakaran et al., 1996) [45].
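The hole-size and end-distance rules quoted above are simple enough to encode as a checklist. The following is a minimal sketch (ours, not from the paper or the ASCE document itself): the function name check_connection_geometry is hypothetical, the 1/16 inch clearance is rounded to 1.6 mm as in the text, and a default minimum end distance of 3d, reducible to 2d per the quoted rule, is assumed.

```python
def check_connection_geometry(d_mm, hole_mm, e_mm, has_perpendicular_end=False):
    """Check a bolt hole and end distance against the ASCE minimums quoted above."""
    clearance = 1.6  # nominal hole = bolt diameter + 1/16 in (~1.6 mm), and no larger
    # assumption: e_min defaults to 3d, reducible to 2d per the quoted rule
    e_min = (2.0 if has_perpendicular_end else 3.0) * d_mm
    return {
        "hole_within_limit": d_mm < hole_mm <= d_mm + clearance,
        "end_distance_ok": e_mm >= e_min,
    }

# The paper's M12 bolt with a 13.5 mm hole and e = 2d = 24 mm:
print(check_connection_geometry(12.0, 13.5, 24.0, has_perpendicular_end=True))
# -> {'hole_within_limit': True, 'end_distance_ok': True}
```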
PGFRP Material
A commercial product of Fukui Fibertech Co., Ltd. (Toyohashi, Aichi, Japan), named FS1005, comprising three constituent phases, continuous direction glass roving (CD), glass fiber mat (GFM), and unsaturated polyester resin, was used to make the specimens. The manufacturer uses a special bond to combine those parts into a PGFRP profile sheet.
The original plate, shown in Figure 3, has an average thickness of 5 mm. The 3D model shown in Figure 4 also describes the detail of a PGFRP, which includes a 0.5 mm thickness of the outside GFM part and a 4 mm thickness of the inside CD part. The dimensions of the specimens were determined to meet the minimum criteria corresponding to the bolt diameters and bolt rows based on the ASCE pre-standard [15]. The center part of the PGFRP sheet was cut to 84 mm in width to make specimens for the tensile test. Then, the GFSs were bonded onto both sides of the PGFRP plate using E250 adhesive (a product of Konishi, Osaka, Japan) to finish creating the specimens.
Strengthening by Fiber Sheet
The study used three types of glass fiber sheet (GFS), represented by the green sheet in Figure 4, to investigate the strengthening effect and the failure modes of the specimens. The two types of original glass fiber sheets used were 0°/90° woven roving (ERW580-554A) and CSM (ECM450-501) (products of the Central Glass Co., Ltd., Tokyo, Japan, with weights of 580 g/m² and 450 g/m², respectively). From the first type, three layers of 0°/90° were stacked, then cut to make [0/90] lamination or rotated to ±45° to make [±45] lamination. [CSM] was made by a similar method from CSM. These three layers were bonded by the VaRTM molding method, as shown in Figure 3. The VaRTM method can reduce the thickness of the various layers for a given fiber content. In a previous study, Nhut (2021) [39] described the GFS fabrication procedure in detail.
Setup and Instrumentation for Connection Tests
In this study, a tensile test was conducted to investigate the strength of the bolted connection. Table 2 shows the test program for the PGFRP connections, listing 24 specimen types combined from three parameters: bolt quantity, GFS material, and end distance. Each type comprised three samples, meaning a total of 72 samples were used in the test. The thicknesses of the GFSs were measured after molding and before bonding them to the PGFRP surfaces.
Expanding the Strengthening Area for the Connection Tests
An additional test determined the failure modes occurring in [±45] and [0/90] GFS specimens when the GFS area was extended. The distance from the edge at the loaded end to the nearest bolt row was equivalent to four and five times the bolt diameter (denoted by 4d and 5d). Table 3 provides a list of the details of the testing specimens with an expanded GFS area. The experiment used a 1000 kN Maekawa tensile testing machine, as shown in Figure 5.
Failure Modes of the Specimens in the PGFRP Connections
Five main types of failure modes occurred in the connection strength experiment. The typical failure modes are simulated as 3D views in Figure 6. Pictures resulting from the experiment are provided in Figure 7 with perspective and front views, which were observed for each typical specimen.
Before explaining the reasons, the failure modes are briefly defined as follows: MODE 1 was a shear-out failure in both the GFM and CD layers in two-bolt and four-bolt non-strengthened (NS) specimens. MODE 2 was a two-element failure mode: shear-out inside (CD layer) and block shear failure outside (GFM layer), which occurred in five-bolt NS specimens. MODE 3 was a combined failure mode with shear-out in the CD layer as the GFM and GFS de-bonded together. This failure mode occurred in four- and five-bolt specimens with [0/90] and [±45] GFS. MODE 4 was obtained in all CSM-strengthened specimens (two, four, and five bolts). It consists of net-tension in the GFS and GFM parts and shear-out in the GFM part. The MODE 5 failure type was bearing in the GFS/CD part and shear-out in the CD part. This mode occurred in [0/90] and [±45] GFS specimens with two bolts.
x" indicates the type of failure mode that occurred in each component. The failure modes were combined from two or three elements' details, as shown in Table 4. Before explaining the reason, the definition of failure modes is briefly described, as follows: • MODE 1 was a shear-out failure in both the GFM and CD layers in two bolts and four bolts with non-strengthened specimens (NS). • MODE 2 is a two-element failure mode: shear-out inside (CD layer) and block shear failure outside (GFM layer), which occurred in five-bolt NS specimens. The failure mechanism was evaluated based on the two components of the strengthening specimens, CD on the inside and GFM/GFS combined layer on the outside. The failure tended to happen at the weakest component strength. The ACSE standard [15] proposed measure was used to calculate the nominal strength of the bolted connections with two or three rows of bolts. The nominal connection strength, Rn, was taken as the minimum of R bt , Rtt, R br , R nt , f , R sh , and R bs , where: The estimated values of component strength are shown in Appendix A and the results of load-cross head displacement are shown in Figure 8. The failure mechanism was evaluated based on the two components of the strengthening specimens, CD on the inside and GFM/GFS combined layer on the outside. The failure tended to happen at the weakest component strength. The ACSE standard [15] proposed measure was used to calculate the nominal strength of the bolted connections with two or three rows of bolts. The nominal connection strength, Rn, was taken as the minimum of Rbt, Rtt, Rbr, Rnt,f, Rsh, and Rbs, where:
The tendencies of the failure modes are explained as follows:
• MODE 1 occurred in all thicknesses of the NS two- and four-bolt specimens. The results agree with previous studies that investigated the failure mode of the base PGFRP plate. The shear-out strength of the CD layer is much lower than the bearing or tensile strength; therefore, shear-out failure appears in the CD layer and leads to shear-out of the GFM layer as the loading increases.
• From MODE 2 to MODE 5, based on observation, debonding failure occurred in whole specimens. As the loading developed, each component failed in a mode depending on the order of its component strength, as indicated in Appendix A.
• The other mode in the NS specimens is MODE 2, the block shear failure mode, which occurred with three bolt rows in the five-bolt specimens. As shown in Appendix A, the block shear strength was the weakest. After block shear failure occurred, the second component failure came with shear-out of the inside layer (CD), corresponding with the order of the strengths.
• The debonding failure witnessed in MODE 3 occurred over the whole GFS strengthening area. According to the ASCE [15] principle, the bonding strength tended to rise toward the combined bearing or shear-out strength of GFM/GFS before debonding. However, because debonding occurred over the whole surface of the GFM/GFS area, failure consequently occurred only in the CD layer, which was weakest in shear-out strength.
• MODE 4 failure in the [CSM] specimens can be explained by a similar method. After the loading reached the lowest combined strength (the tensile strength), net-tension failure occurred; consequently, the inside CD layer also exhibited shear-out. In the e = 2d two-bolt specimens, the tensile and shear strengths of GFM/GFS were equivalent, leading to a "hybrid mode" in which shear-out and net-tension failure co-occurred.
• With reference to Appendix A, the combined bearing strength of GFM/GFS was lower than the others; therefore, MODE 5 (bearing in the GFS/CD part and shear-out in the CD part) occurred in the [0/90] and [±45] two-bolt specimens.
Maximum Load
Figure 8 shows the crosshead load-displacement diagrams of all specimens in the PGFRP connections. All types of GFS and non-strengthened specimens were divided into groups in which the specimens had the same end distance/bolt diameter ratio (e/d) and number of bolts. There were six groups: 1. Two bolts and e = 2d; 2. Two bolts and e = 3d; 3. Four bolts and e = 2d; 4. Four bolts and e = 3d; 5. Five bolts and e = 2d; 6. Five bolts and e = 3d.
The average values of displacement were obtained from the crosshead, as shown in Figure 5. The numbers 1, 2, and 3 at the end of the name code represent the three samples of each specimen type. The initial points of the lines were shifted in the graph to provide a better overall view of all the load-relative displacement relationships. Figure 8a,b shows the load-displacement relations of the two-bolt specimens. After reaching the maximum load, the loading in the [0/90] and [±45] GFS specimens with two bolts was maintained for a period before dropping, because bearing failure occurred in the GFSs (MODE 5). In the other failure modes, the bearing load decreased rapidly after reaching the ultimate load. The maximum load corresponding to the reduction point of stiffness was called the damage load [10]. In the case of the four-bolt and five-bolt specimens, illustrated by Figure 8c, it can be concluded that the bonding strength was smaller than the bearing strength. A quantitative investigation to clarify the bond strength will be conducted in the next study.
Table 5 shows the ultimate loads obtained in the connection strength test. The average results of the three samples of each designed specimen are illustrated by the line graphs in Figure 10. The maximum load of the GFS specimens was higher than that of the NS specimens for all types of GFS (with the other parameters, the number of bolts and the end distance, fixed). The effectiveness of the strengthened specimens is also demonstrated by the [P_st/P_NS] ratio, which varied from 1.4 to 2.1. As shown in Table 6, the [CSM] effectiveness ratio was lower than that of the other GFSs, at 40% for the five-bolt specimens. The increased ultimate load of the strengthened specimens proves the effectiveness of this solution for enhancing the serviceability of PGFRP connection structures. Instead of increasing the volume of the material (length, width, or thickness), the use of GFS can be considered an advantageous method, especially for existing PGFRP structures.
Evaluating the Strengthening Effect by Number of Bolts
There was a significant increase in connection strength when changing the bolt quantity from two bolts to four bolts. The effect was also noticeable in the NS specimens when changing from four bolts to five bolts. However, the strengthening effect was trivial in the GFS specimens when changing from four to five bolts. In the [0/90] and [±45] GFS types, the ultimate load of the four-bolt connection specimens was higher than that of the five-bolt specimens, because the bonding area was decreased by one more bolt hole. In the [CSM] specimens, the tensile strength of the GFS did not change significantly when adding one more bolt, from four bolts to five bolts. Because the cross-sectional area of the failure section, the main factor causing net-tension failure, did not change, the ultimate load of [CSM] was unchanged in these cases. On the other hand, the NS specimens exhibited a failure mode change from MODE 1 (two and four bolts) to MODE 2 (five bolts, block shear). The length along the shear area was increased in the case of five bolts; consequently, this made for better strength in comparison with the two- or four-bolt specimens.
Figure 10. Average ultimate load of specimens.
Strengthening Effect Related to End Distance
In addition to the effects of the number of bolts and the type of GFS, the end distance e was also investigated in this study. Table 7 provides the percentage increases in strength when changing the end distance from e = 2d to e = 3d. In the case of the two-bolt specimens, all of the specimens showed a high strengthening effect, with an increasing ratio ranging from 10.9% to 30.9%. The added end distance meant that the shear-out failure section of the CD layer was longer; thus, the maximum load was higher in the e = 3d specimens.
In the four- or five-bolt specimens, only the [±45] specimens with four bolts showed an increase in connection strength (an increase of around 12%).
In addition, the relative increases in the ultimate load trended lower in the four- or five-bolt specimens in comparison with the two-bolt specimens. This was because the absolute value of the ultimate load in the two-bolt specimens was much lower than in the others. Therefore, extending the end distance was more effective in the two-bolt specimens than in the four- or five-bolt specimens.
The bonding strength of the CD and the GFS layer was a major element when evaluating MODE 2 and MODE 5. These represented failure modes that occurred in the four- or five-bolt specimens (except for the [CSM] specimens). The distribution and the area of effective bonding will be further investigated as a supplement to this study, for an increased understanding of this issue.
Strengthening Effect of Expanded GFS Areas
To investigate the effect of the bonding area, the GFS [±45] and [0/90] specimens were tested, as described in Section 4.2. The maximum loads in the connection testing are shown in Table 8. Although the failure modes changed, the values of the maximum loads remained steady.
Based on the values of the ultimate loads and the failure modes, it can be concluded that the tensile strength and the bonding strength before expansion of the GFS area were approximately equal. The tensile strength depends only on the cross-section of the GFS, while the bonding strength depends on the length of the GFS in the specimens. Unlike the bonding strength, which is distributed over the whole GFM and CD layer surface, the tensile strength depends on the minimum cross-section. Therefore, when the unloaded end was 3d, the debonding failure came first and net-tension did not occur. Then, when the length of the GFS at the unloaded end was increased to 4d or 5d, the failure mode changed from debonding to net-tension in the [±45] specimens. This was because the bond strength became higher than the tensile strength.
Among the failure modes, bearing failure is the safest for connections, because deformation develops gradually over a long period of increasing load. After the ultimate load is reached and failure has occurred, the connection continues to displace but is not damaged immediately. The dimensions of the GFS can be adjusted to the design requirements: increasing the thickness of the strengthening GFS sheet can prevent net-tension. Nevertheless, the debonding strength depends only on the properties of the PGFRP product. These criteria need to be calculated when strengthening a PGFRP connection.
This study has explained the failure modes only by reference to the maximum loads, owing to the complex interaction between the GFS and PGFRP components in the specimens. The bonding strength of PGFRPs will be quantitatively investigated in the future to demonstrate the failure mode tendencies completely.
Conclusions
This study investigated the effectiveness of strengthening multi-bolted PGFRP connections with three kinds of GFSs. In the experiment, specimens were divided into groups according to the number of bolts, the end distance (e/d ratio), and the type of GFS. Based on the results and the observed failure modes, the major conclusions are as follows:
• The effectiveness of strengthening by GFSs was demonstrated by the test results. The maximum loads of all GFS specimens were 1.4 to 2.1 times higher than those of the NS specimens. Therefore, in application, the number of bolts could be reduced by GFS strengthening (from four or five bolts to two bolts), and the end distance (connection area) could likewise be reduced (from e = 3d to e = 2d).
• The effect of increasing the number of bolts was also investigated. Increasing from two to four bolts was effective for both the NS and GFS specimens, but increasing from four to five bolts gave an unremarkable result for the GFS specimens. This means that increasing the number of bolts can be considered a strengthening method mainly for NS specimens.
• Increasing the end distance was shown to be an effective improvement method in the case of two bolts for all NS and GFS specimens.
• The failure mode is one of the safety factors for connections. Debonding failure depends on bond strength, which is a property of the PGFRP products. Therefore, it is necessary to investigate bond strength when designing the strengthening of bolted connections in PGFRPs.
• The observed failure modes in the multi-bolt specimens were quite complicated, with five types of failure. Further investigation is necessary to analyze and fully explain the failure tendencies.
Data Availability Statement:
The data required to reproduce these findings cannot be shared at this time, as they also form part of an ongoing study.
Acknowledgments:
The authors wish to acknowledge all members of Structural Engineering Laboratory, Department of Architecture and Civil Engineering, Toyohashi University of Technology, for supporting the experimental work.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
The component strength can be obtained from the principal equation R_i = τ_i · A_i, where τ_i is the characteristic component strength given in Table A1 (material properties referred from Nhut [44,46] and from the material testing) and A_i is the net area subject to each component strength:
F_sh = characteristic in-plane shear strength of the FRP material appropriate to the shear-out failure;
F_t^L = characteristic tensile strength of the FRP material in the longitudinal direction;
A_ns = net area subjected to shear;
A_nt = net area subjected to tension. Where the bolts are staggered, the total deduction in determining A_nt shall be the greater of (a) the maximum of the sectional area in any cross-section perpendicular to the member axis, or (b) t(nd_n − ∑bs).
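As a worked illustration of these Appendix A relations, the sketch below evaluates a net tension area with the staggered-bolt deduction quoted above and multiplies a characteristic strength by a net area. The helper names and all numerical inputs, including the assumed characteristic strength of 0.30 kN/mm², are hypothetical placeholders rather than Table A1 values.

```python
def net_tension_area(t, w, n, d_n, stagger_bs=()):
    """Net area subjected to tension, A_nt, for a plate of thickness t and
    width w with n holes of nominal diameter d_n (lengths in mm)."""
    # (a) straight section through all holes, used here as the maximum
    #     perpendicular cross-section deduction for this simple layout
    deduction_a = t * n * d_n
    # (b) staggered path: t * (n*d_n - sum of the b*s terms)
    deduction_b = t * (n * d_n - sum(stagger_bs))
    return t * w - max(deduction_a, deduction_b)

def component_strength(tau_i, A_i):
    """R_i = tau_i * A_i: characteristic strength times net area."""
    return tau_i * A_i

A_nt = net_tension_area(t=5.0, w=84.0, n=2, d_n=13.5)  # the paper's plate geometry
R_nt = component_strength(0.30, A_nt)                  # assumed F_t = 0.30 kN/mm^2
print(f"A_nt = {A_nt} mm^2, R_nt = {R_nt:.1f} kN")
```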
The influence of the distribution function of ferroelectric nanoparticles sizes on their electrocaloric and pyroelectric properties
We consider a model of a nanocomposite based on non-interacting spherical single-domain ferroelectric nanoparticles of various sizes embedded in a dielectric matrix. The size distribution function of these nanoparticles is selected as a part of the Gaussian distribution from a minimum to a maximum radius (truncated normal distribution). For such nanocomposites, we calculate the dependences of the reversible part of the electric polarization, the electrocaloric temperature change, and the dielectric permittivity on the external electric field, which have the characteristic form of hysteresis loops. We then analyze the change in the shape of the hysteresis loops relative to the particle size distribution parameters. We demonstrate that, for the same mean-square dispersion, the remanent polarization, the coercive field, the dielectric permittivity maxima, and the maxima and minima of the electrocaloric temperature change depend most strongly on the most probable radius, depend moderately on the dispersion, and depend most weakly on the nanoparticle maximum radius. We calculated and analyzed the dependences of the pyroelectric figures of merit on the average radius of the nanoparticles in the composite. The dependences confirm the presence of a phase transition induced by the size of the nanoparticles, which is characterized by the presence of a maximum near the critical average radius of the particles, the value of which increases with increasing dispersion of the distribution function.
I. Introduction
From the second half of the 20th century to the present, ferroelectric (FE) materials have been the object of intense experimental and theoretical studies due to their use as active media in a number of converting devices, in particular in pyroelectric (PE) [1,2,3] and electrocaloric (EC) [1,2,4,5] converters. For many years, pyroelectric converters have been used in many applications, from gas detectors to thermal imaging [6]; however, only the recently discovered "giant" EC effect in thin films [7] opened up the prospect of using the EC effect in solid-state microcoolers. The PE and EC properties of thin ferroelectric films, multilayers, and other low-dimensional materials can differ greatly from those of bulk materials. In particular, the prospects of using FE nanocomposites for EC converters [8,9,10] and PE sensors [11] are especially compelling. Therefore, studies of low-dimensional FE materials, such as thin films and nanocomposites, are very relevant [3,5,11,12,13]. The study of EC cooling is of great importance for solving environmental problems [5,12] and for improving the energy efficiency [14] of currently available cooling technologies.
Further progress in this direction is hindered by a number of technological and theoretical difficulties [15,16]. These difficulties relate to the appearance of a practically unremovable electric field of depolarization, which is not taken into account when considering EC and PE effects [17].
Modern methods allow precise selection of nanoparticles by size and shape; however, nanocomposites made on their basis, as a rule, contain nanoparticles with a more or less symmetric size distribution within certain limits around the average size [18,19,20].
As indicated in [21], it is still unclear what effect the size distribution of ferroelectric nanoparticles has on the EC properties of nanocomposites based on them. In this case, the properties of the composite depend on the predominance of the contribution of particles of one size or another. The numerical and analytical models developed to date are mainly aimed at the description of composites with nanoparticles of the same size and a certain shape [8,22,23,24].
This article is essentially a semi-analytical and semi-numerical description of the EC and PE properties of nanocomposites based on ferroelectric nanoparticles with the most realistic Gaussian size distribution function.
II. Problem Statement
We consider a nanocomposite consisting of an isotropic dielectric matrix with permittivity ε_e and immersed ferroelectric nanoparticles with permittivity ε_f. Each ferroelectric nanoparticle is surrounded by a semiconductor shell with a dielectric constant ε_S, which acts as a layer screening the ferroelectric polarization of the particle, with a thickness equal to the "effective" screening length Λ [25].
The spread of the nanoparticle radii is in a range from the minimum R_min to the maximum R_max.
A schematic representation of the model of the nanocomposite under consideration is shown in Fig. 1.
Due to the screening, the interaction between the particles in a nanocomposite can be neglected if the relative volume fraction of the nanoparticles is small (less than 10%). However, we note that if the degree of screening is very high, the interaction between the nanoparticles disappears, and the interaction of the nanoparticles with an external electric field is weakened. It is believed that the degree of screening is independent of the particle concentration, which is true up to very high concentrations.
Ferroelectric nanoparticles were previously poled by a strong electric field while the polymer was in the liquid phase and the particles could rotate almost freely in it. For this, the Curie temperature of the ferroelectric nanoparticles should be significantly higher than the polymer melting temperature, and the poling field should be significantly smaller than the breakdown field of the liquid polymer. After polymer solidification, it can be assumed that all nanoparticles are single-domain, with the only component of spontaneous polarization P₃(r) directed along axis 3 of the perovskite unit cell.
The model structure of the core-shell BaTiO₃ nanoparticle under consideration is in accordance with X-ray synchrotron radiation analysis [26] and scanning transmission electron microscopy observation [27] data, indicating the presence of an inner tetragonal core, a gradient lattice strain layer, and a surface cubic layer [28], which was used earlier [8,28] to evaluate the efficiency of the EC conversion of these nanoparticles.
For calculations, we assume that the radii of the nanoparticles follow a distribution function f(R), expressed by the normal Gaussian distribution

f(R) = ρ_R exp[−(R − R_m)² / (2σ²)],

where σ² is the dispersion characterizing the spread of R around the most probable radius R_m, and ρ_R is the normalizing coefficient. Given that the particle radii vary from R_min to R_max, the normalization condition

∫_{R_min}^{R_max} f(R) dR = 1

is satisfied, which determines the normalizing coefficient ρ_R. In mathematics, the parameter σ > 0 represents the normal deviation; however, in the physics literature both quantities, σ² and σ, often represent the dispersion, despite their different dimensions. Below, we will call σ > 0 the dispersion for simplicity.
The average radius is calculated by the formula $\langle R \rangle = \int_{R_{\min}}^{R_{\max}} R\, f(R)\,\mathrm{d}R$ and differs from $R_m$, since the Gaussian is "cut off" in the range from $R_{\min}$ to $R_{\max}$.
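To make the size averaging concrete, the following minimal numerical sketch evaluates the normalizing coefficient and the average radius for a Gaussian truncated to $[R_{\min}, R_{\max}]$; the parameter values are illustrative assumptions, not the values used in this work.

```python
import numpy as np

# Illustrative (assumed) parameters of the truncated Gaussian size distribution, in nm
R_min, R_max = 1.0, 40.0    # minimum and maximum particle radii
R_m, sigma = 5.0, 3.0       # most probable radius and dispersion (standard deviation)

R, dR = np.linspace(R_min, R_max, 20001, retstep=True)
gauss = np.exp(-(R - R_m) ** 2 / (2.0 * sigma ** 2))

# Normalizing coefficient n_R fixed by the condition  integral of f(R) over [R_min, R_max] = 1
n_R = 1.0 / (gauss.sum() * dR)
f = n_R * gauss

# Average radius <R>; it differs from R_m because the Gaussian is cut off at R_min and R_max
R_avg = (R * f).sum() * dR
print(f"most probable radius R_m = {R_m} nm, average radius <R> = {R_avg:.3f} nm")
```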
Here the first term contains the Curie temperature $T_C^*$ (possibly renormalized by the surface stress [29]) and $\alpha_T$ is the inverse Curie-Weiss constant; the second term originates from the depolarization field. The polarization obeys the time-dependent LGD equation [25,22], in which $\Gamma$ is Khalatnikov's kinetic coefficient. Differentiation of the static equation (5) with respect to temperature leads to an equation for the temperature derivative of the polarization; using it, the analytical expression for the PE coefficient is obtained, which simplifies further in the case of a ferroelectric with a linear temperature dependence of the coefficient $\alpha$ in the LGD expansion. Since the nanocomposite contains nanoparticles of different sizes, the required parameters should be averaged with the distribution function $f(R)$. The EC coefficient is defined as the derivative of the EC temperature change $\Delta T_{\mathrm{EC}}(E_{\mathrm{ext}})$ with respect to the external electric field. The relative dielectric permittivity $\varepsilon$ is considered for the static or very-low-frequency dynamic case, and the heat capacity is taken as in [22]. The LGD parameters for bulk ferroelectric BaTiO$_3$ are given in Table 1. The critical radius $R_{cr}$ of the size-induced ferroelectric-paraelectric phase transition was calculated in Ref. [30].
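For readability, the quantities that the preceding paragraph defines in words can be written out; the display below is a sketch of the standard definitional forms only (the labels $\Pi$ and $\Sigma$ for the PE and EC coefficients are ours, and the exact expressions of this work, lost in extraction, may contain additional size-dependent terms):

```latex
% Definitional relations referred to in the text (labels \Pi, \Sigma are illustrative):
\begin{align*}
  \Pi &= \frac{\partial P_3}{\partial T},
  &
  \Sigma &= \frac{\partial\,\Delta T_{\mathrm{EC}}(E_{\mathrm{ext}})}{\partial E_{\mathrm{ext}}},
  &
  \langle X\rangle &= \int_{R_{\min}}^{R_{\max}} X(R)\,f(R)\,\mathrm{d}R .
\end{align*}
```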
In this case, the loop parameters change as summarized in Table 3 (see also Fig. 3b); in the other case, the loop increases (see Table 3 and Fig. 3c). With decreasing $\sigma$, the height of the $\varepsilon_{NP}(E)$ loop maxima near the coercive field $E_c$ changes non-monotonically (see Table 3 and Fig. 3d), and the loops narrow. The remanent polarization and the coercive field vary only slightly with the maximum radius (see Table 3 and Fig. 4c). With a decrease in $R_{\max}$, the height of the $\varepsilon_{NP}(E)$ loop maxima near $E_c$ increases slightly (see Table 3 and Fig. 4d), and the loops narrow as well. In summary, we calculated and analyzed the changes in the shape of the hysteresis loops; for particles with a small most probable radius (5 nm) and a large maximum radius (40 nm), the above values decrease with decreasing standard deviation in the range of (1-5) nm.
B. Correlation of the shape and characteristic features of EC and PE hysteresis
The dependences of the pyroelectric and electrocaloric coefficients on the external electric field are shown in Fig. 3a. Since $R_m$ is smaller than the critical radius, some of the PE and EC loops are characterized by the presence of two positive and two negative maxima, corresponding to the positive and negative electric field. Other loops have only two maxima, one for the positive and another for the negative external field. The shape of the loop for $\sigma = 1$ nm is significantly different from the shape of the loops for $\sigma = (3-7)$ nm. The effect of a decrease in $\sigma$, as well as of a decrease in $R_m$, is shown in Fig. 4a. An increase in $\sigma$ leads to the shift and splitting of the PE and EC maxima, which is associated with a decrease in the fraction of small nanoparticles with $R < R_{cr}$ for the parameters $R_{\min} = 1$ nm, $R_m = 5$ nm and $\sigma = 5$ nm. For instance, on curves 4 the splitting of the maxima has already begun, and the two maxima (for each sign of $E$) become clear on curves 1.
C. Nanocomposite Figures of Merit
In Ref. [22], the following functions were considered for the nanoparticles (NP) (equations (12)). The absolute values of these functions are figures of merit (FoM) in the energy conversion mode [6,34], and the function $K_{PE}$ is the pyroelectric coupling constant [1,33,34]. For the theoretical study, not only the amplitude but also the sign of the functions (12) is important.
The functions (12) exhibit maxima as functions of the particle radius (Fig. 3a). The shift of these maxima to smaller $R$ can be associated with the deformation of the distribution curve $f(R)$ with a change of the distribution parameters (Fig. 3a). It is worth noting that, with a decrease in $\sigma$, the position of the maxima is displaced; this displacement is associated with the deformation of the distribution curve $f(R)$ with decreasing $R_m$ at a given $\sigma$ (Fig. 2a) and/or decreasing $\sigma$ at a given $R_m$ (Fig. 3a), under the action of a weak $E$-field (other parameters are the same as in Fig. 3; see also Fig. 6 and Fig. 8). In summary, for the structure under study, we calculated and analyzed the dependences of the figures of merit on the average particle radius. The characteristics indicate the presence of a phase transition induced by the change in particle size, which manifests itself as a maximum near the critical radius. The value of this radius increases [in the range of (8-12) nm] with an increase in the standard deviation [in the range of (1-7) nm].
Conclusion
For noninteracting spherical ferroelectric nanoparticles of various sizes embedded in a dielectric matrix, we calculated the hysteresis loops of the polarization, the EC temperature change, the PE and EC coefficients, and the dielectric permittivity. We then analyzed the change in the shape of the loops for various values of the Gaussian particle size distribution parameters, namely the most probable and maximum radii, as well as the mean-square dispersion (in fact, the half-width) of the particle size distribution function.

(a) We have demonstrated that, for the same dispersion, the remanent polarization, the coercive field, the maxima of the dielectric permittivity and the negative maxima of the EC temperature change depend strongly on the most probable radius and only weakly on the maximum radius.

(b) For nanoparticles with the most probable radius $R_m$ smaller than the critical radius $R_{cr}$ of the size-induced phase transition, at the same minimal and maximal radii, the maxima of the dielectric permittivity change only slightly, while the remanent polarization, the coercive field and the negative maxima of the EC temperature change decrease with decreasing dispersion of the size distribution function.
Distribution of complex algebraic numbers
For a region $\Omega \subset\mathbb{C}$ denote by $\Psi(Q;\Omega)$ the number of complex algebraic numbers in $\Omega$ of degree $\leq n$ and naive height $\leq Q$. We show that $$ \Psi(Q;\Omega)=\frac{Q^{n+1}}{2\zeta(n+1)}\int_\Omega\psi(z)\,\nu(dz)+O\left(Q^n \right),\quad Q\to\infty, $$ where $\nu$ is the Lebesgue measure on the complex plane and the function $\psi$ will be given explicitly.
Introduction
The problem of investigating the distribution of algebraic numbers has many aspects and goes back more than a century. Let us give a brief overview of the known results obtained in this area.

Investigations of algebraic numbers widely involve potential theory and probabilistic methods. Here, we can mention a result obtained by Pritsker [15], who studied Schur's problem on traces of algebraic numbers and the asymptotic distribution of zeros of integral polynomials with growing degrees. The paper [15] also contains a number of references on this subject. Pritsker's results are closely related to the setting of random polynomials, where the degrees of the polynomials grow to infinity. One research direction here is to study the distribution of zeros of random polynomials in the complex plane. The landmark result by Erdős and Turán [8] states that the arguments of the complex roots of random polynomials become uniformly distributed as the degree tends to infinity. Under some general conditions on the polynomial coefficients, these roots cluster near the unit circle [9].

There is a number of papers in which all algebraic numbers in certain field extensions of a given degree with bounded multiplicative height are counted asymptotically as the upper bound on their heights tends to infinity. For example, results of this type were obtained by Masser and Vaaler in [14] and [13]. References and some results related to the topic can also be found in [12, Chapter 3, §5].
Baker and Schmidt [2] introduced the concept of a regular system and proved that the set of real algebraic numbers of degree at most $n$ forms a regular system: there exists a constant $c_n$ depending on $n$ only such that for any interval $I$ and all sufficiently large $Q \in \mathbb{N}$ there exist at least $c_n |I|\, Q^{n+1} (\ln Q)^{-3n^2}$ algebraic numbers $\alpha_1, \dots, \alpha_k$ in $I$ of degree at most $n$ and height at most $Q$ satisfying the corresponding separation condition. Their results about the regularity of the set of real algebraic numbers were improved by Beresnevich in [3], who showed that the logarithmic factors can be omitted. In the paper [5], it was also shown that complex algebraic numbers $\alpha$ form a regular system with the function $N(\alpha) = H(\alpha)^{-(n+1)/2}$; in other words, there exists a constant $c_n$ depending on $n$ only such that in any circle $C$ contained in the unit circle $C_0 \subset \mathbb{C}$, for all sufficiently large $Q \ge Q_0(C)$, there exist at least $c_n Q^{n+1} |C|$ algebraic numbers $\alpha_1, \dots, \alpha_k$ of degree at most $n$ and height at most $Q$ such that the distances between them are at least $Q^{-(n+1)/2}$. For a more detailed discussion of the literature we refer to the excellent survey monograph of Bugeaud [6]. In [4], one can find results concerning the distribution of distances between conjugate algebraic numbers. Note that the number of algebraic numbers of degree at most $n$ and height at most $Q$ is of the order $Q^{n+1}$ as $Q \to \infty$. Therefore these results show that, for any fixed $n$, the algebraic numbers are distributed quite regularly for sufficiently large height. However, results of this type describe the behaviour of only a small portion of the algebraic numbers.
An important question in this respect had been asked by K. Mahler in his letter to V. G. Sprindžuk in 1985: what is the distribution of algebraic numbers for a fixed degree n ≥ 2 ?
A possible answer to this question was suggested in [11] (see also [10] for the case n = 2). Namely, fix $n \ge 2$ and denote by $\Phi_Q(I)$ the number of algebraic numbers in the interval $I$ of degree at most $n$ and height at most $Q$. Then
$$\Phi_Q(I) = \frac{Q^{n+1}}{2\zeta(n+1)} \int_I \varphi_n(x)\,dx + r_Q, \qquad (1)$$
where $\zeta(x)$ denotes the value of the Riemann zeta function at $x$, the remainder term $r_Q$ satisfies $r_Q = O(Q^n)$ as $Q \to \infty$ (with an extra factor $\log Q$ for $n = 2$), and the limit density $\varphi_n$ is given by an explicit integral formula over a region of polynomial coefficients; in some neighborhood of the origin the density takes a particularly simple form. Note in passing that $\varphi_n$ coincides with the density of the real roots of a random polynomial with independent coefficients uniformly distributed in $[-1, 1]$ (see, e.g., [18]). The aim of this note is to extend this result to complex algebraic numbers.
Notations.
Here we define all the notations which we will use in this paper. We always assume that the degree n is some arbitrary but fixed integer number not less than 2 and the upper bound Q of the height goes to infinity. Hence the constants in different asymptotic relations (as Q → ∞) in this paper might depend on n.
As in (1), it will be typical that the case n = 2 carries an extra factor log Q. Therefore, for the sake of conciseness, we introduce a shorthand notation for this factor and use it throughout. For a complex domain $\Omega \subset \mathbb{C}$, denote by $\Psi_Q(\Omega)$ the number of algebraic numbers in $\Omega$ of degree at most $n$ and height at most $Q$. We always assume that $\Omega$ does not intersect the real axis and that its boundary consists of a finite number of algebraic curves.
For any Borel set A ⊂ R m denote by Vol(A) the Lebesgue measure of A, denote by λ(A) the number of points in A with integer coordinates, and denote by λ * (A) the number of points in A with coprime integer coordinates.
The Riemann zeta function is denoted by ζ(·) and the Möbius function is denoted by µ(·).
Main result
Theorem 2.1. The following asymptotic approximation holds:
$$\Psi_Q(\Omega) = \frac{Q^{n+1}}{2\zeta(n+1)} \int_\Omega \psi_n(z)\,\nu(dz) + O\!\left(Q^{n}\right), \qquad (2)$$
where $\nu$ is the Lebesgue measure on the complex plane. The limit density $\psi_n$ is given by an explicit formula (3), in which the integration is performed over a region $D_n(z)$ of coefficient vectors. Remark. The implicit constant in the big-O notation in (2) depends only on the degree $n$, and on the maximal degree and the number of algebraic curves that form the boundary $\partial\Omega$.
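The order $Q^{n+1}$ of the counting function can be explored numerically in the simplest case $n = 2$: non-real quadratic algebraic numbers are exactly the roots of primitive integer quadratics with negative discriminant and positive leading coefficient. The brute-force sketch below counts such numbers in a disc $\Omega$ away from the real axis; the disc center and radius are arbitrary illustrative choices, and at such small heights $Q$ the asymptotics is only indicative.

```python
from math import gcd

def count_complex_quadratic_algebraics(Q, center=1j, radius=0.5):
    """Count non-real algebraic numbers of degree 2 and naive height <= Q
    lying in the disc |z - center| < radius (brute-force illustration)."""
    count = 0
    for a in range(1, Q + 1):                 # positive leading coefficient
        for b in range(-Q, Q + 1):
            for c in range(-Q, Q + 1):
                if gcd(gcd(a, b), c) != 1:    # keep only primitive (minimal) polynomials
                    continue
                disc = b * b - 4 * a * c
                if disc >= 0:                 # negative discriminant <=> two non-real conjugate roots
                    continue
                root = complex(-b, (-disc) ** 0.5) / (2 * a)
                for z in (root, root.conjugate()):
                    if abs(z - center) < radius:
                        count += 1
    return count

for Q in (5, 10, 20):
    # The count should grow roughly like Q**3, i.e. Q**(n+1) with n = 2
    print(Q, count_complex_quadratic_algebraics(Q))
```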
The proof of Theorem 2.1 is given in Section 3. Now let us derive several properties of the limit density $\psi_n$. Proposition 2.2. The function $\psi_n$ is positive on $\mathbb{C}$, is invariant under complex conjugation, and satisfies
$$\psi_n(z) = |z|^{-4}\,\psi_n(1/z). \qquad (4)$$
Proof. The positiveness, as well as the first relation, is trivial. To prove (4), note that for any integral irreducible polynomial $g(z)$ of degree $n$, the polynomial $z^n g(z^{-1})$ is also irreducible and has the same degree and the same height. Hence for any domain $\Omega \subset \mathbb{C}$ it holds that $\Psi_Q(\Omega) = \Psi_Q(\Omega^{-1})$, where $\Omega^{-1}$ is defined as $\Omega^{-1} = \{z^{-1} \in \mathbb{C} : z \in \Omega\}$. When $Q$ tends to infinity, we get, by applying Theorem 2.1,
$$\int_\Omega \psi_n(z)\,\nu(dz) = \int_{\Omega^{-1}} \psi_n(z)\,\nu(dz).$$
On the other hand, after the substitution $z \to 1/z$, we obtain
$$\int_{\Omega^{-1}} \psi_n(z)\,\nu(dz) = \int_{\Omega} \psi_n(1/z)\,|z|^{-4}\,\nu(dz).$$
Since the class of domains $\Omega$ is sufficiently large, (4) follows.
The next statement shows that, in some sense, there is "repulsion" from the real axis for non-real algebraic numbers which increases inversely with the size of the imaginary part.
Proposition 2.3. It holds that, as $y = \operatorname{Im} z \to 0$, the density $\psi_n$ obeys an asymptotic relation whose constant $A$ does not depend on $y$ and can be written explicitly; here, the integration is performed over a region of coefficient vectors. It follows that $\psi_n(z)$ and $D_n(z)$ can be rewritten in terms of the variables $(t_1, \dots, t_{n-1}) \in \mathbb{R}^{n-1}$. Note that $D_n(x_0) = D_n(x_0 + 0 \cdot i)$. Letting $\operatorname{Im} z \to 0$ concludes the proof.
When |z| is relatively small or relatively large, it is possible to write the limit density in a simpler form. Let us conclude the section by considering the case n = 2.
Proof of Theorem 2.1
Denote by $P_Q$ the class of all integral polynomials of degree at most $n$ and height at most $Q$. The cardinality of this class is $(2Q + 1)^{n+1}$. Recall that an integral polynomial is called prime if it is irreducible over $\mathbb{Q}$, primitive (the greatest common divisor of its coefficients equals 1), and its leading coefficient is positive.
For $k \in \{0, 1, \dots, n\}$ denote by $\gamma_k$ the number of prime polynomials from $P_Q$ that have exactly $k$ roots lying in $\Omega$. For any algebraic number, its minimal polynomial is prime, and any prime polynomial is the minimal polynomial of some algebraic number. Therefore,
$$\Psi_Q(\Omega) = \sum_{k=1}^{n} k\,\gamma_k. \qquad (5)$$
Consider the subset $A_k \subset [-1, 1]^{n+1}$ consisting of all points $(t_0, \dots, t_n) \in [-1, 1]^{n+1}$ such that the polynomial $t_n x^n + \dots + t_1 x + t_0$ has exactly $k$ roots lying in $\Omega$. Then the number of primitive polynomials from $P_Q$ which have exactly $k$ roots in $\Omega$ is equal to $\lambda^*(Q A_k)$. By the definition of a prime polynomial, we have
$$\tfrac{1}{2}\lambda^*(Q A_k) - R_Q \;\le\; \gamma_k \;\le\; \tfrac{1}{2}\lambda^*(Q A_k),$$
where $R_Q$ denotes the number of reducible polynomials (over $\mathbb{Q}$) from $P_Q$. Note that the factor $\tfrac{1}{2}$ arises in the above inequality because prime polynomials have a positive leading coefficient. It is known (see [17]) that $R_Q$ is of order $Q^n$ up to a logarithmic factor. Hence it follows that $\gamma_k$ may be replaced by $\tfrac{1}{2}\lambda^*(Q A_k)$ at the cost of an admissible error. To estimate $\lambda^*(Q A_k)$ we need the following lemma.
Lemma 3.1. Let $A \subset \mathbb{R}^m$ with $m \ge 3$ be a bounded region whose boundary consists of a finite number of algebraic surfaces. Then, as $Q \to \infty$,
$$\lambda^*(QA) = \frac{\mathrm{Vol}(A)}{\zeta(m)}\, Q^{m} + O\!\left(Q^{m-1}\right).$$
Here, the implicit constant in the big-O notation depends only on the maximal degree and the number of bounding surfaces.
Remark. The result of the lemma is well known. One can find a result of this type, e.g., in the classical monograph by Bachmann [1, pp. 436-444] (see especially formulas (83a) and (83b) on pages 441-442). For the reader's convenience we include a short proof here.
Proof. Note that
$$\lambda(QA) = \sum_{d=1}^{NQ} \lambda^*\!\left(\frac{Q}{d}\,A\right),$$
where $N$ is chosen large enough such that $A \subset [-N, N]^m$. Applying the classical Möbius inversion formula (see, e.g., [16]) yields
$$\lambda^*(QA) = \sum_{d=1}^{NQ} \mu(d)\, \lambda\!\left(\frac{Q}{d}\,A\right). \qquad (7)$$
By the Lipschitz principle (see [7]) it follows that
$$\left|\, \lambda\!\left(\frac{Q}{d}\,A\right) - \left(\frac{Q}{d}\right)^{m} \mathrm{Vol}(A) \right| \le c\, \max\!\left(1, \left(\frac{Q}{d}\right)^{m-1}\right)$$
for some constant $c$ depending only on the maximal degree and the number of algebraic surfaces that compose the boundary $\partial A$. Applying this to (7), we get
$$\lambda^*(QA) = \mathrm{Vol}(A)\, Q^{m} \sum_{d=1}^{NQ} \frac{\mu(d)}{d^{m}} + O\!\left(Q^{m-1}\right). \qquad (8)$$
It is well known (see, e.g., [16]) that
$$\sum_{d=1}^{\infty} \frac{\mu(d)}{d^{m}} = \frac{1}{\zeta(m)}. \qquad (9)$$
Therefore, the finite sum in (8) may be replaced by $1/\zeta(m)$ at the cost of an error controlled by the tail of the series. Furthermore, it holds that
$$Q^{m} \sum_{d > NQ} \frac{|\mu(d)|}{d^{m}} = O(Q). \qquad (10)$$
Combining (8), (9), and (10) completes the proof.
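The statement of the lemma can be checked numerically in a simple special case. The sketch below assumes $A = [0,1]^2$, i.e. $m = 2$ and $\mathrm{Vol}(A) = 1$ (for $m = 2$ the error term carries an extra logarithmic factor, but the leading constant $1/\zeta(m) = 6/\pi^2$ is the same), and compares the exact coprime-point count with the leading term.

```python
from math import gcd, pi

def coprime_count(Q):
    """Number of integer points (x, y) with 1 <= x, y <= Q and gcd(x, y) == 1."""
    return sum(1 for x in range(1, Q + 1) for y in range(1, Q + 1) if gcd(x, y) == 1)

for Q in (10, 100, 1000):
    exact = coprime_count(Q)
    leading = 6.0 * Q * Q / pi ** 2     # Vol(A) * Q**m / zeta(m) with m = 2, A = [0, 1]^2
    print(Q, exact, round(leading, 1), round(exact - leading, 1))
```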
Since $\Omega$ is bounded by a finite number of algebraic curves, the boundary of $A_k$ is formed by a finite number of algebraic surfaces. It follows from Lemma 3.1 that
$$\lambda^*(Q A_k) = \frac{\mathrm{Vol}(A_k)}{\zeta(n+1)}\, Q^{n+1} + O\!\left(Q^{n}\right), \qquad (11)$$
which together with (5) implies
$$\Psi_Q(\Omega) = \frac{Q^{n+1}}{2\,\zeta(n+1)} \sum_{k=1}^{n} k\,\mathrm{Vol}(A_k) + O\!\left(Q^{n}\right). \qquad (12)$$
To calculate $\sum_{k=1}^{n} k\,\mathrm{Vol}(A_k)$ we need the following result from the theory of random polynomials. Let $\xi_0, \xi_1, \dots, \xi_n$ be independent random variables uniformly distributed on $[-1, 1]$. Consider the random polynomial $G(x) = \xi_n x^n + \xi_{n-1} x^{n-1} + \dots + \xi_1 x + \xi_0$ and denote by $N(\Omega)$ the number of its roots lying in $\Omega$. Then
$$\sum_{k=1}^{n} k\,\mathrm{Vol}(A_k) = 2^{n+1}\, \mathbb{E} N(\Omega).$$
The right-hand side of the latter relation was calculated in [19] in a more general setup: it was shown that if the coefficients $\xi_0, \xi_1, \dots, \xi_n$ have a joint distribution density $p(x_0, x_1, \dots, x_n)$, then $\mathbb{E} N(\Omega)$ is given by formula (13), where $r = |z|$ and $\alpha = \arg z$ are the polar coordinates in the complex plane. The corresponding formula in [19] contains a typo; here we use the corrected version.
In the case when the coefficients are independent and uniformly distributed on $[-1, 1]$, their joint distribution density is the normalized indicator function of the cube $[-1, 1]^{n+1}$, and after some transformations (13) takes the form
$$\mathbb{E} N(\Omega) = \frac{1}{2^{n+1}} \int_\Omega \psi_n(z)\,\nu(dz),$$
where $\psi_n$ is defined in (3). Combining this with (12) and (11) gives (2).
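Since the limit density is expressed through the expected number of roots of the random polynomial $G$, that expectation is easy to estimate by simulation. The following sketch is a Monte Carlo estimate of $\mathbb{E} N(\Omega)$ for an illustrative disc $\Omega$; it is not part of the proof, only a numerical cross-check of the quantity used in it.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_roots_in_disc(n, center=1j, radius=0.5, trials=10000):
    """Monte Carlo estimate of E N(Omega): the mean number of roots lying in the disc
    |z - center| < radius for G(x) with iid Uniform[-1, 1] coefficients."""
    total = 0
    for _ in range(trials):
        coeffs = rng.uniform(-1.0, 1.0, size=n + 1)   # coeffs[0] * x**n + ... + coeffs[n]
        roots = np.roots(coeffs)
        total += np.count_nonzero(np.abs(roots - center) < radius)
    return total / trials

for n in (2, 3, 5):
    print(n, expected_roots_in_disc(n))
```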
For the right-hand side of the second "big" inequality, we immediately have
$$\frac{1}{\operatorname{Im} z} \sum_{k=1}^{n-1} t_k \operatorname{Im}\!\left(z^{k+1}\right) = \frac{1}{\sin\alpha} \sum_{k=1}^{n-1} t_k\, r^{k} \sin\big((k+1)\alpha\big),$$
which is not equal to zero if and only if $(t_1, \dots, t_{n-1}) \in D_n(z)$, where $z = r(\cos\alpha + i\sin\alpha)$.
Productive and Reproductive Performance of Indigenous Chickens in Ethiopia
This study reviews the productive and reproductive performance of indigenous chickens in Ethiopia with the aim of delivering summarized and synthesized information to beneficiaries and producers. Chicken production encompasses traditional scavenging systems and small- and large-scale market-oriented systems, distinguished by the objective of the producer, the type of inputs used, and the number and types of chickens kept. In Ethiopia, indigenous chickens produce 10-20 eggs per clutch and 30-65 small eggs per hen per year in 3-4 clutches. Local chickens reach slaughter/market age at 8 to 12 months with an average weight of 0.6-2.5 kg under farmer management. Indigenous chickens require a long time to reach sexual maturity, and local broody hens take a long time to recover their reproductive cycle. The average mortality rate is high, which affects both the productive and reproductive performance of indigenous chickens by reducing the survival rate. There is a huge number of indigenous chickens in Ethiopia, but productivity is disproportionately low relative to that number. The major constraints affecting the productive and reproductive performance of indigenous chickens are diseases and predators, feed shortages, lack of training and extension services, and lack of proper marketing systems. In conclusion, the productive and reproductive performance recorded is low and needs further improvement through training and extension services for farmers.
INTRODUCTION
Poultry production is an important sector in Ethiopia, where chickens and their products are an important source of income for rural people and of high-quality protein in developing countries. Poultry in Ethiopia is essentially synonymous with chickens; the total chicken population was 60.5 million, of which 94.33, 2.47 and 3.21% were indigenous, exotic and hybrid chickens, respectively (CSA, 2016). Backyard poultry production in Ethiopia represents a significant part of the national economy in general and the rural economy in particular, contributing 83.5% of the national egg and meat products (CSA, 2016).
Chicken production encompasses traditional scavenging and small- and large-scale market-oriented sectors, distinguished by the objective of the producer, the type of inputs used, and the number and types of chickens kept (Halima, 2007). The rural poultry sector constitutes about 98% of the total chicken population (FAO, 2007) and largely consists of the indigenous or native domestic fowl. The traditional backyard system is characterized mainly by low inputs and small scale, with 4 to 10 mature birds per household, reared in the backyard with inadequate housing, feeding and health care. Scavenging is the most important component of the poultry diet (Fisseha et al., 2010; Meseret, 2010).
The Ethiopian indigenous chickens are known to possess desirable characteristics such as thermotolerance, resistance to some diseases, good egg and meat flavor, hard egg shells, high fertility and hatchability, as well as high dressing percentage (Aberra, 2000). According to Abubakar et al. (2007), the impact of the Ethiopian village chicken on the national economy, and its role in improving the nutritional status, family income, food security and livelihood of many smallholders, is significant owing to its low cost of production. The diverse agro-ecology and agronomic practices prevailing in the country, together with the huge population of livestock in general and poultry in particular, could be a promising attribute to boost the sector and increase its contribution to the total agricultural output as well as to improve the living standards of poor livestock keepers (Aleme and Mitiku, 2015; Hunduma et al., 2010).

The Ethiopian indigenous chickens are nondescript breeds closely related to the jungle fowl; they vary in color, comb type, body conformation and weight, may or may not possess shank feathers, and broodiness is pronounced (Demeke, 2008). The mean annual egg production of indigenous chickens is estimated to be 60 small eggs per year with a thick shell and deep yellow yolk color. Indigenous chickens are poor in productive and reproductive performance, characterized by small-sized eggs, slow growth rate, late maturity, late age at first mating, small clutch size, a natural inclination to broodiness and high chick mortality in the flock. The low productivity of indigenous chickens is due to low hatchability and high mortality of chicks (Fissaha et al., 2010; Getachew and Negassi, 2016).

There is a huge number of indigenous chickens in Ethiopia, but their productive and reproductive performance is low, varies between areas, and has not been reviewed and well documented for users and producers. There is a need to review the productive and reproductive performance of village chickens in order to improve indigenous chicken productivity and to save the indigenous genotypes from extinction or replacement by exotic chickens. This being the case, the objective of this review is to examine the productive and reproductive performance of indigenous chickens in Ethiopia, with the following specific objectives: 1. To review the productive performance of indigenous chickens in Ethiopia; 2. To review the reproductive performance of indigenous chickens in Ethiopia; 3. To review the constraints that affect the productive and reproductive performance of indigenous chickens in Ethiopia.
Productive performance of village chickens
The productive performance of indigenous chickens is low; the indicators reviewed here include clutch number, average number of eggs laid per clutch, average days per clutch, average number of eggs per hen per year, and slaughter age and weight of the chickens.
Clutch number
The clutch number of Ethiopian indigenous chickens differs between production and management systems. According to the CSA (2016) report, the national average clutch number of Ethiopian indigenous chickens was 4 per year. The number of clutch periods shown by local hens per year is 3.8, 2-6 and 3.7 in Bure, Fogera and Dale, respectively (Fissaha et al., 2010). According to Melkamu and Wube (2013), in Debsan Tikara Kebele of Gonder Zuria Woreda the average clutch number was 3 per year. Alem (2014) reported that, in Central Tigray, the average clutch number per year was 3.15 to 3.2 and 3.2 in the lowland and midland agro-ecologies, respectively.

The number of clutch periods recorded per year was 4.29±0.17 (range 3.38 to 6.11) in Metekel zone of Northwest Ethiopia (Solomon et al., 2013). The average number of clutches per year per hen was 3.2 for local hens, ranging from 2 to 5, with an average clutch length of 21.6 days (range 15 to 28 days) in the lowland and midland agro-ecological zones of Central Tigray (Alem, 2014). The average number of clutches per year recorded in Gomma Wereda was 3.43 (Meseret, 2010). The overall average clutch number of chickens in North Wollo of the Amhara region was 3.62 per year (Addisu et al., 2013). Mekonnen (2007) reported that the mean clutch number of indigenous chickens in three districts of SNNPR was 3.8 per year.
Egg production
Indigenous chickens produce a low number of eggs, which are small in size. An indigenous chicken in Ethiopia produces 12 eggs per clutch (CSA, 2016). According to Yadessa et al. (2017), indigenous chickens produce 14.3 small eggs per clutch in the Mezhenger, Sheka and Benchi-Maji zones of southwestern Ethiopia. Solomon et al. (2013) reported that, under existing farmer management conditions, the number of eggs produced per clutch was 13.56±0.26 in Metekel zone of Northwest Ethiopia. Addisu et al. (2013) reported that the average number of eggs laid per clutch per hen was 16.88, 14.23 and 11.9 eggs in Quara, Alefa and Tach Armachiho districts, respectively. The average number of eggs laid per hen per clutch was 13.6 for local hens, ranging from 9 to 18 eggs, in the lowland and midland agro-ecological zones of Central Tigray (Alem, 2014). The average number of eggs per clutch of indigenous chickens reported from Gomma district was 12.92 (Meseret, 2010).

The egg production potential of local chickens is 30 to 60 eggs/year/hen, with an average egg weight of 38 g under village management conditions, while exotic breeds produce around 250 eggs/year/hen with 60 g egg weight in Ethiopia (Alganesh et al., 2003). Indigenous chickens produce 48 small eggs per hen per year under farmers' management conditions in Ethiopia (CSA, 2016). The average number of eggs produced per hen per year was 54.5 in the Mezhenger, Sheka and Benchi-Maji zones of southwestern Ethiopia (Yadessa et al., 2017).
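The annual figures quoted above can be cross-checked against the per-clutch figures: multiplying the reported range of clutches per year by the reported range of eggs per clutch brackets the reported annual production. A minimal sketch of that arithmetic, using only ranges stated in this review:

```python
# Cross-check of reported indigenous-hen egg production figures (values taken from this review)
clutches_per_year = (3, 4)      # reported range of clutch numbers per hen per year
eggs_per_clutch = (10, 20)      # reported range of eggs laid per clutch

low = clutches_per_year[0] * eggs_per_clutch[0]
high = clutches_per_year[1] * eggs_per_clutch[1]
print(f"implied annual production: {low}-{high} eggs/hen/year (reported range: 30-65)")
```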
According to Addisu et al. (2013), 49.51 eggs per hen per year were reported from North Wollo, Amhara Region, Ethiopia. Solomon et al. (2013) also reported 59.5 eggs per hen per year in Metekel zone, Northwest Ethiopia. According to Fissaha et al. (2010), the total egg production per hen per year of local hens under farmer management conditions is estimated to be 60, 53 and 55 in Bure, Fogera and Dale woredas, respectively. Melkamu (2014) showed that an indigenous chicken produces an average of 65 eggs per hen per year. Indigenous chickens produce 59.51±2.66 (range 45.38 to 93.19) eggs per hen per year in Metekel zone of Northwest Ethiopia (Solomon et al., 2013). The mean annual egg production of the indigenous chickens of Gomma Wereda was 43.8 eggs (Meseret, 2010), the mean annual number of eggs produced in Dale district was 55.2 eggs/year/hen (Mekonnen, 2007), and the average number of eggs per hen per year in Ambo was 36 to 42 (Fikre, 2000). The mean annual egg production per hen in North Wollo of the Amhara region was 49.51 ± 0.38 (Addisu et al., 2013). Bogale (2008) indicated that the meat production ability and growth performance of indigenous chickens are limited; local males reach 1.5 kg live weight at 6 months of age and females about 30% less. According to Gain (2017), local Ethiopian chickens weigh about 1.25 kg at slaughter under village management conditions. The average weight of mature males (cocks) was significantly higher in the midland (1.812 kg) than in the lowland (1.694 kg) agro-ecology of Central Tigray, whereas similar body weights of hens (1.37 and 1.356 kg), cockerels (1.024 and 1.119 kg) and pullets (1.021 and 1.064 kg) were recorded in the lowland and midland agro-ecologies, respectively. These differences in body weight of indigenous chickens were attributed to non-genetic factors such as supplementary feeding, watering and health care in the different agro-ecologies of Central Tigray (Alem, 2014). According to Meseret (2010), the mean market weight of indigenous male chickens in Gomma wereda was 1.5 kg at 8.62 months under village management conditions. Mekonnen (2007) reported that the mature body weights of cocks and hens under farmer management conditions in Wonsho, Loka Abaya and Dale districts of Southern Ethiopia were 1.58 and 1.30 kg, respectively. The average weight of local hens ranges from 0.6 to 2.1 kg and of local cocks from 0.6 to 2.5 kg in selected districts of the North Western Amhara region.
Slaughter/Market age of indigenous chickens
According to a Gain (2017) report, Ethiopian indigenous chickens reach slaughter at the age of 8 to 12 months under the village management system. The mean age at slaughter for indigenous male chickens of Gomma Wereda was 8.62 months (Meseret, 2010). Getiso et al. (2017) reported that, in three agro-ecologies of SNNPR, indigenous chickens reach slaughter at 9.9 months. In western Tigray, indigenous chickens reach slaughter at 4.66 and 4.5 months for male and female chickens, respectively (Shishay et al., 2015).

On the other hand, indigenous male chickens of Wolaita zone in southern Ethiopia require 8.6, 9.4 and 8.9 months to reach slaughter in highland, midland and lowland areas, respectively (Zereu and Lijalem, 2016).
Reproductive performance of village chickens
The reproductive cycle takes a long time for indigenous chickens because they require a long time to reach sexual maturity and replace the parent stock through traditional broody hens, which need a long time to recover their reproductive cycle.
Age at sexual maturity of indigenous chickens
The overall mean age of cocks at first mating was 4.9 months in Mezhenger and Sheka, but in Benchi-Maji zone it was 5.2 months (Yadessa et al., 2017). Meseret (2010) reported that the mean age at sexual maturity of indigenous chickens in Gomma district of Jimma zone was about 6.33 months.

According to Aberra et al. (2013), the age at first egg of scavenging chickens in different agro-ecological zones of Amhara region was 6.6 months. The average ages of indigenous pullets and cockerels at first mating were 5.2±1.16 and 5.44±1.3 months in Metekel zone of Northwest Ethiopia, respectively (Solomon et al., 2013).

The average age at first egg was 27.2 weeks for local breeds, ranging from 24 to 28 weeks, and the average age at first mating of cockerels was 26 weeks for local chickens in the lowland and midland agro-ecological zones of Central Tigray (Alem, 2014). Mekonnen (2007) reported that the age at first egg was 7.07 months for indigenous pullets of Dale wereda. The overall mean age at sexual maturity was 24.25 ± 0.04 and 23.84 ± 0.05 weeks for indigenous male and female chickens in North Wollo of Amhara Region, respectively (Addisu et al., 2013). In Bogale (2008), the mean age at sexual maturity of indigenous chickens in Fogera district was 23.48 ± 0.1 and 23.6 ± 0.11 weeks for males and females, respectively.
Hatchability of indigenous chickens
Natural incubation is the most commonly used method for replacing and increasing the size of flocks, with the help of broody hens. Incubating hens use a dark and quiet place for laying and incubating eggs. Producers prepare an appropriate place and make nests for broody hens, using clay pots and straw bedding (or cartons), and in some cases clay without bedding (broken pots). Farmers are very conscious of and concerned with preparing an appropriate place that provides good feed resources and the best environment for incubation by broody hens. Traditionally, farmers incubate in the dry season and use eggs that were laid within their own houses (Bikila, 2013).

The average number of eggs incubated per hen in different agro-ecological zones of Amhara region was 12.8 and, of the incubated eggs, only 10 chicks hatched, giving an average hatchability of 79.1% (Aberra et al., 2013). According to Solomon et al. (2013), the average number of eggs set per hen was 14.74±0.25 (range 12.40 to 16.91) with a hatchability of 84.7% in Metekel zone of Northwest Ethiopia. According to Fissaha et al. (2010), 13 eggs (range 7 to 22) had hatchability percentages of 82.6 and 89.1 in Bure and Dale districts of Ethiopia, respectively. According to Alem (2014), in both agro-ecologies of Central Tigray the average number of eggs set for incubation per broody hen was 10.2, with a hatchability of 85.8% for local eggs.

The number of eggs set per hen depends on the availability of eggs, the size of the eggs, the size of the broody hen and her maternal instinct. The overall mean number of eggs incubated was 11.32 eggs, with a minimum of 6 and a maximum of 20 eggs per hen, and the hatchability was 82.74% in Nole Kabba Woreda, Western Wollega, Ethiopia (Habte et al., 2013). The mean total hatchability calculated for the indigenous chickens of Gomma Wereda was 22% (Meseret, 2010). The average number of eggs set for incubation was 13, ranging from 10 to 20 per hen, from which a relatively fair proportion (83%) of chicks hatched in East Gojam zone of the Amhara regional state (Melese and Melkamu, 2014). Samson and Endalew (2010) reported that productive indigenous hens lay on average 10 to 18 eggs per clutch, 7 to 15 eggs are incubated using a broody hen, and from the incubated eggs 5 to 10 chicks hatch per clutch.
Mortality and survival rate of indigenous chickens
The scavenging system is characterized by high chick mortality in the first two weeks of life, caused mainly by predators and Newcastle disease, in the Southern region of Ethiopia (Melesse and Negesse, 2011).

According to Alganesh et al. (2003) and Negussie et al. (2003), the low productivity of local scavenging hens is not only because they are low producers of small-sized eggs and slow growers, but also because the system is characterized by high chick mortality before chicks reach about 8 weeks of age. In different agro-ecological zones of the Amhara region of Ethiopia, 10 chicks were hatched and among these only 5.5 chicks reached market age, which implies a 58.3% survival rate, suggesting high chick mortality during the growing period (Aberra et al., 2013). The proportion of chicks reaching the grower stage at 8 weeks (survival rate) was 65.8% for local chickens in the lowland and midland agro-ecological zones of Central Tigray (Alem, 2014). According to Tadelle et al. (2003), the average survival rate of chicks in Ethiopia was 51.3%, and about 44.2% chick mortality (55.8% survival) was reported by Abraham and Yayneshet (2010) from Northern Ethiopia. The mean chick mortality (to an age of 8 weeks) of indigenous chickens of Gomma Wereda was 41% (Meseret, 2010), and the mean proportion of chicks surviving to market age in East Gojam zone of the Amhara region was 65.91% (Melese and Melkamu, 2014).
Disease and predators
Disease and predators were the main constraints of indigenous chicken production under farmer management conditions in Lemo district of Hadiya zone in southern Ethiopia (Salo et al., 2016). Halima (2007) reported that diseases and predators were the major factors causing loss of chickens in Northwest Ethiopia. Shishay et al. (2014) revealed that both diseases and predators are highly prevalent challenges that hinder indigenous chicken productivity. According to their report, Newcastle disease (1st), fowl salmonellosis (2nd), coccidiosis (3rd), fowl typhoid (4th), fowl cholera (5th), fowl pox (6th) and fowl coryza (7th) were the major and economically important diseases hindering the expansion of village chicken production in the Western Zone of Tigray, Northern Ethiopia. Fentie et al. (2013) also recently reported that poor health care, incidence of predation, and poor housing and feeding management were the major constraints of village chicken production, of which poultry diseases (46.2%) and predation (27.1%) were the most predominant causes of chicken loss. Newcastle disease was the biggest constraint on family chicken production in North Gondar of Northwest Ethiopia. Diseases and predators were the first and second major constraints causing loss of chickens in Northwest Ethiopia (Halima, 2007). A study conducted in Metekel zone of Northwest Ethiopia also revealed that seasonal outbreaks of diseases and predators were major factors causing loss of chickens, while lack of credit services, limited management skills and low productivity of local chickens were outlined as major constraints of chicken production (Solomon et al., 2013). The most serious constraints hindering poultry production are predators and the poor housing system, with the scavenging feeding system of poultry contributing to this problem in Arbegona Woreda of Sidama Zone in Southern Ethiopia (Feleke et al., 2015). The most important constraints impairing the existing chicken production system under farmers' management conditions, in order of significance, were disease, lack of veterinary health services, a traditional management system with limited feed supplementation, poor housing, and lack of access to improved breeds together with limited extension services (Melese and Melkamu, 2014). Bogale (2007) reported that shortage of supplementary feed (19.4%) was the main constraint hindering indigenous chicken productivity in Fogera district. There is no purposeful feeding of chickens under village conditions in Ethiopia, and scavenging is almost the only source of diet. The scavengeable feed resource base for local birds is inadequate and is a main constraint in Fogera district (Bogale, 2008). Scavengeable feed resources are defined as the total amount of feed products available to all scavenging animals in a given area; they depend on the number of households, the type of crops grown, crop processing, and climatic conditions (Sonaiya and Swan, 2004).
Feed shortage
Local birds in the farming community are allowed to wander freely inside and outside the house in search of food, and anything in and around the house forms the most important part of their diet. The important sources of feed for the birds are therefore household wastes, anything scavenged from the environment, and small amounts of grain thought to be useful sources of nutrition (Meseret, 2010; Resource-Centre, 2005).
Marketing system
There is no formal poultry and poultry product marketing channel, and informal marketing of live birds and eggs in open markets is common throughout the Woreda, which affects the production of indigenous chickens in Haramaya (Abera and Geta, 2014). Fluctuation (seasonality) in the prices of chicken products was the most prevalent chicken and egg marketing constraint (Bikila, 2013). The major constraints in rural chicken marketing were identified as low prices, low marketed output and long distances to reliable markets; as a result, smallholder farmers are not in a position to get the expected return from the sale of chickens in Northwest Ethiopia (Awol, 2010). Seasonal fluctuation in chicken and egg prices, low supply (output) of chickens and eggs due to disease and predation, limited market outlets, and lack of space for chicken marketing in urban areas were the market-related constraints affecting poultry production (Moges et al., 2010).
Lack of training and extension service
There was low extension support from the responsible bodies to improve indigenous chicken production in Eastern Ethiopia (Getachew et al., 2015). According to Bikila (2013), low supply of exotic breeds, limited credit for poultry production, weak extension services, lack of appropriate chicken and egg marketing information for producer farmers, and lack of adequate space for chicken marketing in urban markets were the major challenges hindering indigenous chicken productivity. The extension linkage between research outputs, the Ministry of Agriculture and the farmers is extremely weak; thus, in general, there is no consistent feedback to research. Fisseha et al. (2007) also reported that lack of access to extension agents for chicken farmers is one of the main reasons for the low level of extension service in Burie district of the Amhara region.

Lack of access to extension agents was the main reason (31.8%) for the absence of extension services with regard to village chicken production. Lack of modern poultry-rearing knowledge, through extension services and training, was the other constraint in both districts of Ethiopia (Fissaha et al., 2010). It is also reported that training for both farmers and extension staff, focusing on disease control, improved housing, feeding, marketing and entrepreneurship, could help to improve the productivity of local chickens (Moges et al., 2010).
CONCLUSION AND RECOMMENDATIONS
The chicken production system encompasses traditional, small-scale and large-scale market-oriented production systems, based on the objectives of the producers, the type of inputs used, and the type and number of chickens reared. The traditional production system is characterized by low inputs and small scale, with 4 to 10 mature birds per household and inadequate housing, feeding and health care practices. The productive performance of indigenous chickens is low under village management conditions (inappropriate feeding, housing and health care practices).

In Ethiopia, indigenous chickens produce 10 to 20 eggs per clutch, and 30 to 65 small eggs per hen per year are laid in 3 to 4 clutches. Local chickens reach slaughter/market age at 8 to 12 months with an average live weight of 0.6 to 2.5 kg under farmer management conditions. Indigenous chickens require a long time to reach sexual maturity, and local broody hens take a long time to recover their reproductive cycle.

Chickens take 5 to 7.2 months to reach first mating and egg-laying age. They reproduce by natural incubation with broody hens, with an average hatchability of 75 to 85%. The average mortality rate is high, which affects both the productive and reproductive performance of indigenous chickens by reducing the survival rate. There is a huge number of indigenous chickens in Ethiopia, but productivity is disproportionately low relative to that number. The major constraints affecting the productive and reproductive performance of indigenous chickens include diseases and predators, feed shortages, lack of training and extension services, and lack of proper marketing systems.

Based on the above conclusions, the following recommendations are needed to improve the productive and reproductive performance of indigenous chickens.
Machine learning and pre-medical education
Machine learning and artificial intelligence (AI)-driven technologies are contributing significantly to various facets of medicine and care management. It is likely that the next generation of healthcare professionals will be confronted with a series of innovations that are powered by AI, and they may not have sufficient time during their professional tenure to learn about the underlying machine learning frameworks that are driving these systems. Educating the aspiring clinicians and care providers with the right foundational courses in machine learning as part of postsecondary education will likely transform them as high-tech physicians and care providers of the future.
Introduction
The long path to becoming a physician in countries such as the United States begins at least as early as undergraduate/postsecondary education or even in high school. Most medical schools recommend an educational pathway (known as pre-medicine or pre-med) that includes completion of courses focused on the scientific fields of biology, physics, chemistry, organic chemistry, neuroscience, and behavioral sciences along with courses in humanities. As such, pre-medicine is not necessarily a major in most degree-granting institutions. Rather, it is an educational track that undergraduate students pursue prior to matriculation to a medical school. It also involves educational and professional development activities such as volunteer work preferably related to patient care, as well as clinical and research experience, followed by the application to medical schools. A student on a pre-med track may choose an undergraduate major in any field if certain required courses are completed. The pre-med courses are necessary to prepare for the Medical College Admission Test (MCAT) and satisfy most medical school prerequisites that are recommended by the Association of American Medical Colleges (AAMC). Similarly, educational institutes in other countries may have their own set of requirements for postsecondary education, and meeting them would position the trainee to pursue a medical degree. While these established pathways have been followed faithfully for several years by accredited universities worldwide, technological advancements, recently driven by the rapid progress in artificial intelligence (AI), are disrupting the practice of medicine and the delivery of healthcare. Indeed, in a 2018 report [1], the World Health Organization noted that digital technologies and AI will be vital tools in achieving their global strategic targets - 1 billion more people benefitting from universal health coverage, 1 billion more people better protected from health emergencies, and 1 billion more people enjoying better health and well-being. These initiatives, along with several other national and international efforts to modernize healthcare using AI, should serve as motivation to re-design the undergraduate curricula, especially the pre-med pathway, and enable the next generation of physicians and other care providers to face the emerging data science revolution.
Importance of machine learning education
In April 2018, the United States Food and Drug Administration (FDA) approved the marketing of the first digital health device that used AI to detect diabetic retinopathy (DR) in adults [2]. The software driven by an AI algorithm can process digital images of the patient's retinas and detect the probability of mild or more severe DR. The FDA evaluated data from a study of retinal images obtained from 900 patients with diabetes at 10 primary care sites. In the study, the AI algorithm correctly identified the presence of more than mild DR 87.4% of the time and was able to correctly identify those patients who did not have more than mild DR 89.5% of the time. This software can now be installed within a primary care setting to allow practitioners opportunistic screening of DR during routine patient visits. Since this approval, several other AI-based medical devices and algorithms have undergone regulatory clearance [3], and many more of them are in the pipeline to get reviewed and potentially gain FDA approval. Evidently, the "end-users" of most of these devices are physicians or other care providers. The question that arises is whether to continue to expect physicians to be plain end-users or if they should get educated on the machine learning algorithms that could potentially facilitate diagnostic decision-making in clinical practice. If the care providers are not necessarily well-versed with computing and data sciences, then there is a high chance that these technologies may lose their value.
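A small worked example of the kind of quantitative reasoning such education supports: the sensitivity and specificity quoted above determine the probability that a positive screen reflects true disease only once prevalence is specified. The sketch below uses the reported sensitivity and specificity; the prevalence values are assumptions for illustration, not figures from the FDA study.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability of disease given a positive screening result (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Figures reported for the DR screening algorithm; prevalence values are assumed for illustration
sens, spec = 0.874, 0.895
for prevalence in (0.05, 0.10, 0.30):
    ppv = positive_predictive_value(sens, spec, prevalence)
    print(f"assumed prevalence {prevalence:.0%}: PPV = {ppv:.1%}")
```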
Physicians and other care providers also have an important role to play in protecting their patients and other stakeholders from pitfalls and inappropriate application of machine learning technologies. It is important to filter the true value of machine learning from the hype, as the care providers need to be aware of the inaccuracies of machine learning, unethical use, and unwanted as well as costlier options for managing patients. For example, a recent study showed that AI systems for detecting COVID-19 using chest radiographs can learn spurious shortcuts even when tested on external datasets, thus pushing forward the need for stronger validation protocols and the use of interpretability techniques for evaluating medical AI systems [4]. On the other hand, studies have also revealed common pitfalls related to the development of medical AI models [5], and potential issues with mandating model explainability as a requirement for clinical deployment [6]. There are ongoing debates on various other topics including model relevance and validity, ethics and trust [7], privacy and security [8], as well as cost and efficiency [9]. Evidence from all these studies should prompt the clinical community to obtain a good understanding of machine learning [10], which can then enable them to be comfortable with the growing use of these technologies in healthcare.
It is worth noting that practitioners in many other fields are perhaps using machine learning-based systems without being well-versed in the underlying computing and machine learning fundamentals. Drawing a parallel with what happens within these non-medical fields, it is unsurprising to expect that care providers routinely use medical imaging equipment or other technologies without a thorough understanding of their fundamentals (physical, electronic, etc.). Nevertheless, the risk of not educating the end-users is that we may see less or inefficient user adoption, resulting in limited or incorrect utilization and reduced benefits of AI in healthcare. As the AI technologies continue to progress, the roles of care providers are likely to change as well. The advantage of educating the care providers is that they can more appropriately utilize the power of these technologies for patient care and management.
Clearly, the timing of when to provide this education matters. Recent articles argued that medical schools need to begin training the next generation of medical professionals by introducing machine learning courses within their curriculum [11][12][13][14][15][16][17]. There are similar perspectives to encourage machine learning education when physicians are going through advanced clinical training and beyond [18][19][20]. It is important to realize that change happens slowly, especially when medical schools strive to follow Abraham Flexner's uniformly arduous and expensive brand of medical education [21]. An alternative and potentially attractive possibility is to teach machine learning principles and its applications to undergraduate students as they prepare for medical school, at least in those institutions that offer such programs.
Pre-medical track provides an opportune window for machine learning education
Introduction to machine learning as part of postsecondary education for students on the premed track has several advantages. First, students are in their early formative years, and hence they are likely to gain broad insights related to principles of machine learning and their applications. Since calculus and statistics courses are already mandatory in most schools, students will have the opportunity to take additional courses on these topics along with courses in computing and programming before signing up for a machine learning course. As such, data science and machine learning courses meld calculus and statistics into a closed loop, reinforcing their value and significance. Most universities have already begun to design new sets of interdisciplinary courses related to teaching data science to undergraduate students that build on learning outcomes [22]. Such modules or degree programs are likely to provide students with exciting career paths that one could consider along with medical school such as MD/PhD, MD/MBA, MD/MPH or MD/JD programs, with specific emphasis on data science and machine learning. As such, machine learning skills are in high demand and even trainees who may not end up pursuing medical education are likely to benefit from these courses, both in their careers and personal life. Second, most medical school curricula are already cramped with several foundational and systems-based courses within their first two years to accommodate core clinical experiences in the third and fourth years. Moreover, assessment in undergraduate medical education, which drives much of learning, is largely focused on preparation for licensing exams. The tight schedule and exam-driven focus leave limited scope for the medical education offices to design a comprehensive course or a module that would allow students to gain some understanding of machine learning and its applications. These practical limitations can be overcome if the matriculant joins the medical school with at least a rudimentary knowledge on a few topics related to machine learning.
Curricular recommendations
The pre-med curriculum in the accredited universities should consider accommodating at least one foundational course on machine learning and another one focused on the application of machine learning in healthcare and medicine. As part of the foundational course, students can gain preliminary knowledge on the capabilities and limitations of the principal methodologies of data-driven, model-based prediction and decision-making, including inferential statistics, data mining, and machine learning. As part of their application-oriented course, students can develop the skills necessary to assemble computational pipelines and deliver reproducible data analysis of structured and unstructured biomedical datasets. Students can also develop the ability to assess the societal impacts of data-centered methods, including adherence to policy, privacy, security, and ethical norms. It is imperative that students acquire foundational knowledge in calculus, statistics, and computer programming prior to embarking on the machine learning courses. The spectrum of required exposure to machine learning is broad and can be tailored based on the interests and skills of the trainees. While some trainees will go on to become researchers that will develop AI systems (requiring in-depth knowledge of data science), other will be consumers of AI applications (requiring more general knowledge). For those that wish to fully engage in developing AI systems, even these two courses may not suffice and therefore pursuit of an undergraduate degree in computer science, mathematics and engineering may be encouraged.
Graduates from the program will be ready to contribute to the art, science, and engineering of the data-driven processes that are woven into all aspects of society, economy, and public discourse. They will be ready to pursue a healthcare career in which they contribute to the synthesis of knowledge through methodical, generalizable, and scalable extraction of insights from data, as well as to the design of new information systems and products that enable actionable use of those insights toward discovery, clinical diagnosis, patient management and innovation in a wide range of medical applications.
We are still in the early stages of appreciating the full impact of data science in medicine. Thus, adding machine learning courses to the long list of AAMC-recommended or another organization-based pre-med courses might sound impractical for universities to implement curricular changes, and challenging for students to take all of them within the stipulated time. The purpose of this article is therefore not to enforce sweeping changes in the pre-med curriculum but suggest a simple recommendation to increase flexibility on the Kolachalama Page 4 Artif Intell Med. Author manuscript; available in PMC 2023 July 28. course selection. For example, all the required pre-med courses related to biology, chemistry, humanities, organic chemistry, and physics can be made mandatory for one semester, leaving another semester open to taking other courses. In this framework, students will gain more breadth of knowledge based on the existing pre-med courses along with courses related to machine learning and its applications. The accredited universities that already have or those that are currently planning to design undergraduate programs focused on data science are well-placed to offer machine learning courses that can be integrated within the pre-med track. Note that this article mainly recommends pre-med curricular changes in the United States and Canadian universities. Indeed, educational institutions in Australia, Ireland, and South Korea provide a choice to obtain a pre-medical degree [23]. Therefore, educational systems within such countries have an option to propose machine learning courses during postsecondary education.
Conclusion
There is growing evidence that AI frameworks driven by machine learning algorithms have the potential to accelerate the workflow of clinicians and other care providers. The future of using AI to improve healthcare is exciting, and we are likely to witness more use cases that address routine and challenging clinical applications. To fully realize the promise and evaluate the pitfalls of AI, we need to consider how the next generation of trainees aspiring to join the healthcare workforce will come to understand the tenets of machine learning and its medical applications. Medical schools as well as residency and fellowship programs should continue to find ways to offer machine learning training modules despite their tight schedules. A forward-thinking initiative would be to offer introductory machine learning courses as part of pre-medical education at accredited institutions. This article can be viewed as a call to the community to pursue these recommendations.
Systematic 1/M Expansion for Spin 3/2 Particles
Starting from a relativistic formulation of the pion-nucleon-delta system, the most general structure of 1/M corrections for a heavy baryon chiral lagrangian including spin 3/2 resonances is given. The heavy components of relativistic nucleon and delta fields are integrated out and their contributions to the next-to-leading order lagrangians are constructed explicitly. The effective theory obtained admits a systematic expansion in terms of soft momenta, the pion mass $m_\pi$ and the delta-nucleon mass difference $\Delta$. As an application, we consider neutral pion photoproduction at threshold to third order in this small scale expansion.
Introduction
Chiral symmetry provides important restrictions on the interactions of pions, nucleons and photons [1]. The consequences are most conveniently summarized by the use of an effective field theory, valid in the low energy regime. This simultaneous expansion in small momenta and light quark masses is known as Chiral Perturbation Theory (ChPT) [2,3,4,5,6]. Unlike in the sector of Goldstone bosons, the mass of the nucleon is large and nonvanishing in the chiral limit. Nevertheless, a consistent chiral power counting, known as Heavy Baryon Chiral Perturbation Theory (HBChPT), can be maintained by also performing a systematic 1/M-expansion, M being the mass of a baryon [7,8]. In principle, any observable of the pion-nucleon system can be calculated to a given order in the chiral expansion; the price to be paid is the introduction of new low energy coupling constants which are not fixed by the symmetry requirement alone. Any parameter-free prediction of HBChPT is, however, also a prediction of low energy QCD. Many of the resulting "low energy theorems" (LET) have been discussed in the recent literature [11].
The spin 3/2 delta resonances play a special role in the pion-nucleon system, since the mass difference $\Delta = M_\Delta - M_N$ is not large compared to the typical low energy scale $m_\pi$ and because the $\pi N\Delta$ coupling constant is anomalously large. The more conventional version of HBChPT takes into account the effect of the delta (and of other resonances) only through contributions to the coupling constants of higher order operators in the chiral expansion. This approach is in particular well suited to derive low energy theorems. A concern, however, is that, in the physical world of nonvanishing quark masses, the perturbation series might converge slowly due to the presence of large coupling constants driven by small denominators, i.e., by terms proportional to $1/\Delta$. An alternative approach to HBChPT includes the delta degrees of freedom explicitly [8,12]. In addition to solving the problems mentioned above, this technique has the advantage that the range of applicability can in principle be extended into the delta region.
In this letter we sketch explicitly the steps necessary for a systematic low energy expansion in the presence of the spin 3/2 delta resonance. A full presentation will appear shortly [13]. We begin with a covariant formulation of an effective theory of the $\pi N\Delta$ system. The heavy degrees of freedom are identified and integrated out via a systematic 1/M-expansion. We arrive at an effective field theory of nonrelativistic nucleons and deltas coupled to pions and external sources. The theory is manifestly Lorentz invariant and admits a low energy expansion in terms of small momenta $q$, the pion mass $m_\pi$ and the delta-nucleon mass difference $\Delta$, which we collectively denote by the symbol $\epsilon$. Of course, the procedure described in the next section is not unique; the general methods of such heavy mass expansions have been given previously [9,10,14]. However, it represents a useful starting point for the evaluation of higher order effects. Indeed, the 1/M corrections derived in this manner have a simple physical interpretation (exchange of the heavy degrees of freedom), and the formalism can straightforwardly be extended to deal with higher order terms in the $\epsilon$ expansion, as will be discussed in [13]. Furthermore, it is straightforward to treat resonances other than spin 3/2 along the same lines.
As a simple application of our formalism we shall consider neutral pion photoproduction at threshold. A one-loop calculation within the framework of HBChPT has produced a LET for the electric dipole amplitude $E_{0+}$ as an expansion in powers of $\mu = m_\pi/M_N$ [15,16]. Here $\kappa_p$ is the anomalous magnetic moment of the proton, while $g_{\pi NN}$ is the strong pion-nucleon coupling constant. Recently an $O(p^4)$ calculation has been given [17] which reconciles the theoretical prediction with experiment [18]. However, each term in this expansion is large with alternating sign, making the convergence particularly slow. Below we calculate the correction of order $\epsilon^3$ to Eq. (1), which arises due to a 1/M corrected vertex of the $\Delta(1232)$ resonance. First, however, we set up our formalism.
1/M-expansion for spin 3/2 resonances
Consider the lagrangian for a relativistic spin 3/2 field $\Psi_\mu$ coupled in a chirally invariant manner to the Goldstone bosons. Following Pascalutsa [19], we have factored out the dependence on the unphysical free parameter $A$ by use of the projection operator $O^A_{\alpha\mu}$. Defining the physical spin 3/2 field accordingly, we see that Eq. (2) is manifestly invariant under point transformations, as required by general considerations [20].
To leading order in the derivative expansion, the relativistic spin 3/2 lagrangian with the field redefinition of Eq. (4) then takes the form of Eq. (7). To take into account the isospin 3/2 property of $\Delta(1232)$, we supply the Rarita-Schwinger spinor with an additional isospin index $i$, subject to the usual subsidiary condition. Following the conventions of SU(2) HBChPT in the spin 1/2 sector [6,11], we have defined the following structures: $v_\mu$, $a_\mu$ denote external vector and axial-vector fields and are the only external sources possible at this order. The first two pieces in Eq. (7) are the kinetic and mass terms of a free spin 3/2 lagrangian [19]. The remaining terms constitute the most general chiral invariant couplings to pions. Note that aside from the conventional $\pi\Delta\Delta$ coupling constant $g_1$ we have included two additional pion couplings, characterized by $g_2$, $g_3$, which contribute only if at least one of the spin 3/2 fields is off mass shell. The next step consists of identifying the "light" and "heavy" degrees of freedom of the spin 3/2 fields. The procedure is analogous to the case of spin 1/2 fields, as pioneered in heavy quark effective theory [9,10] and later applied to spin 1/2 HBChPT [16]. For spin 3/2 particles the problem is technically somewhat more challenging due to the off-shell spin 1/2 degrees of freedom associated with the Rarita-Schwinger field (the formalism for heavy systems of arbitrary spin was given by Falk in [9]). In order to separate the spin 3/2 from the spin 1/2 components it is convenient to introduce a complete set of orthonormal spin projection operators ($P^{3/2}_{\mu\nu}$ and its spin 1/2 counterparts) for fields with fixed velocity $v_\mu$,
which satisfy the corresponding orthonormality and completeness relations. The four-velocity $v_\mu$ is related to the four-momentum $p_\mu$ of the spin 3/2 particle by $p_\mu = M v_\mu + k_\mu$, where $M$ is a baryon mass scale and $k_\mu$ is taken to be a residual soft momentum. We now employ the familiar projection operators of the heavy mass formalism and introduce heavy baryon fields for our spin 3/2 particles in order to eliminate the dependence on the large mass $M_\Delta$ in Eq. (7). In analogy to the spin 1/2 case we identify the "light" spin 3/2 degree of freedom $T^\mu_i$, whereas the remaining components can be shown to be "heavy" [13] and are integrated out. We note in particular that $G_\mu$ includes both spin 1/2 and spin 3/2 components. Of course, the virtual effects of the heavy degrees of freedom $G_\mu$ are nevertheless accounted for in the heavy baryon formalism; they show up as higher order 1/M corrected vertices involving the remaining (on-shell) spin 3/2 fields $T_\mu$, as we will show below. We also note that the $T_\mu$ degrees of freedom satisfy the constraints
$$v_\mu T^\mu_i = \gamma_\mu T^\mu_i = 0 \qquad (15)$$
and correspond to the SU(2) version of the decuplet field introduced in ref. [8].
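The explicit field definitions did not survive extraction; a hedged reconstruction in the conventions of refs. [8,9,16] reads as follows (the precise normalizations of the original Eqs. (13) and (14) are our assumption):

```latex
% Velocity projectors and the momentum split quoted in the text:
P_v^{\pm} = \frac{1 \pm \not{v}}{2}, \qquad p_\mu = M v_\mu + k_\mu .

% "Light" spin 3/2 field: velocity projection, spin 3/2 projection,
% and removal of the large-mass phase factor (assumed form):
T^{\mu}_{i}(x) = P_v^{+}\, P^{3/2}_{\mu\nu}\, e^{\,i M v \cdot x}\, \Psi^{\nu}_{i}(x) ,
% so that the constraints v.T = gamma.T = 0 of Eq. (15) follow
% directly from the projector properties.
```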
We now perform a systematic 1/M-expansion, following an approach developed by Mannel et al. in HQET [10], which was later applied to spin 1/2 HBChPT by Bernard et al. [16]. Since we are interested in the interactions of nucleons with the spin 3/2 resonance, we must treat both fields simultaneously. We therefore write the most general lagrangian involving relativistic spin 1/2 ($\psi_N$) and spin 3/2 ($\psi_\mu$) fields, with $L_\Delta$ given in Eq. (7), where the dots denote higher order counterterm contributions and $z$ corresponds to the leading-order pion-nucleon-delta off-shell coupling constant.
Rewriting the lagrangians of Eq. (16) in terms of the spin 3/2 heavy baryon components $T_\mu$ and $G_\mu$, and the corresponding "light" and "heavy" spin 1/2 components $N$, $h$, defined analogously, we find the general heavy baryon lagrangians of Eq. (19). Note that we have used the same mass $M$ in the definitions of the heavy delta and nucleon fields; this is necessary in order that all exponential factors drop out in Eq. (19). The matrices $A_N$, $B_N$, ..., $C_\Delta$ admit a small energy scale expansion of the form $A_\Delta = A^{(1)}_\Delta + A^{(2)}_\Delta + \ldots$, where $A^{(n)}_\Delta$ is of order $\epsilon^n$. As emphasized in the introduction, we denote by $\epsilon$ small quantities of order $p$, like $m_\pi$ or soft momenta, as well as the mass difference $\Delta = M_\Delta - M_N$. This mass difference is distinct from the pion mass in the sense that it stays finite in the chiral limit. However, in the physical world, $\Delta$ and $m_\pi$ are of the same magnitude. We therefore adhere to a simultaneous expansion in both quantities. It is only through this small scale expansion that we obtain a systematic low energy expansion of the $\pi N\Delta$ system.
To make this more explicit, consider the leading order matrices $A^{(1)}_N$ and $A^{(1)}_\Delta$, given in Eq. (21), in which $S_\mu$ denotes the Pauli-Lubanski spin vector. One can easily see from Eq. (21) that our formalism produces the exact SU(2) analogues of the spin 1/2 [7] and spin 3/2 [8] lagrangians of Jenkins and Manohar. Furthermore, as expected, the $O(\epsilon)$ heavy baryon lagrangians of Eq. (21) are free of the off-shell couplings $z$, $g_2$, $g_3$. In our formalism, off-shell couplings only start contributing at $O(\epsilon^2)$, via the $B$ and $D$ matrices. Explicit expressions for the expansions of $B_\Delta$, $C_\Delta$, etc. will not be displayed here but can be found in ref. [13].
From Eq. (21) we determine the SU(2) HBChPT propagator for the delta field, Eq. (22), where $P^{3/2}_{\mu\nu}$ is a spin 3/2 projector [8] and the accompanying factor denotes an isospin 3/2 projector. From Eq. (22) one can see that the delta propagator counts as $\epsilon^{-1}$ in our expansion scheme. The final step is again in analogy to the heavy mass formalism for spin 1/2 systems. Shifting variables and completing the square, we obtain the effective action of Eq. (26), which represents the master formula of our treatment of a coupled spin 1/2 - spin 3/2 system in HBChPT. All 1/M corrected vertices can be directly obtained by calculating the appropriate matrices $A$, $B$, $C$, $D$ to any order desired. The new terms proportional to $C^{-1}_\Delta$ and $C^{-1}_N$ are given entirely in terms of coupling constants of the lagrangian for relativistic fields. This guarantees reparameterization and Lorentz invariance [14,23]. Furthermore, all such terms are 1/M suppressed. The effects of the heavy degrees of freedom (both spin 3/2 and 1/2) thus show up only at order $\epsilon^2$. Note also that the effective $NN$-, $N\Delta$- and $\Delta\Delta$-interactions all contain contributions from both heavy $N$- and $\Delta$-exchange, respectively.
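Equations (22) and (26) were lost in extraction; the following is a hedged reconstruction of their expected structure, assuming the standard conventions of refs. [8,16] (the isospin projector symbol $\xi^{3/2}_{ij}$ and the spin 1/2 reduction shown are our notation, not a verbatim copy of the original):

```latex
% (cf. Eq. (22)) Delta propagator: both v.k and Delta count as order
% epsilon, so the propagator indeed counts as epsilon^{-1}.
S^{3/2}_{\mu\nu}(v \cdot k) \;=\;
   \frac{-i\, P^{3/2}_{\mu\nu}\, \xi^{3/2}_{ij}}{\,v \cdot k - \Delta + i\eta\,} .

% (cf. Eq. (26), restricted to the spin 1/2 sector) Completing the
% square and integrating out the heavy components h shifts A_N by a
% 1/M-suppressed term built entirely from the relativistic couplings:
\mathcal{L}_{\rm eff} \;=\;
   \bar N \left( A_N + \gamma^0 B_N^\dagger \gamma^0\, C_N^{-1} B_N \right) N
   \;+\; (\Delta\ \text{and mixing terms}) .
```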
In the above formalism, it is understood that at each order one must also include the most general counterterm lagrangian consistent with chiral symmetry, Lorentz invariance, and the discrete symmetries P and C. As should be clear, it is crucial to write this counterterm lagrangian in terms of relativistic fields. The choice of variables in Eqs. (13), (14) and Eq. (18) then automatically yields the contributions to the matrices $A$, $B$, $C$ and $D$. Only these objects have a well defined small scale expansion, since $S_{\rm eff}$ is written entirely in terms of heavy baryon fields: derivatives count as order $\epsilon$, quark masses as order $\epsilon^2$, etc. In order to calculate a given process to order $\epsilon^n$, it thus suffices to construct the matrices $A$ to the same order $\epsilon^n$, $B$ and $D$ to order $\epsilon^{n-1}$, and the $C$ matrices to order $\epsilon^{n-2}$. Note that since the propagator of the $T_\mu$ field counts as order $\epsilon^{-1}$, one-particle reducible diagrams as in Fig. 1 also have to be considered. Finally one has to add all loop graphs contributing at the order at which one is working. The relevant diagrams can be found by straightforward power counting in $\epsilon$.
1/M corrections to threshold $\pi^0$ photoproduction to $O(\epsilon^3)$
As an elementary example of the use of this formalism, consider neutral pion photoproduction. A phenomenological analysis of the influence of $\Delta(1232)$ in this process was given in [24]. At threshold, in the small scale expansion described above, working to order $\epsilon^3$ amounts to calculating all one-loop graphs with vertex insertions from $A^{(1)}$ as well as all tree graphs with vertices derived from matrices up to and including $A^{(3)}$ for the process $\gamma p \to p\pi^0$. The electric dipole amplitude is then related to the cross section in the center of mass frame through [25]
$$(E_{0+})^2 = \frac{|\vec k|}{|\vec q|}\,\frac{d\sigma}{d\Omega}\bigg|_{\rm threshold},$$
where $\vec k$ and $\vec q$ are the photon and pion three-momenta, respectively. It is most convenient to break up the calculation into one-particle irreducible (1PI) diagrams. Possible loop graphs involving the leading order vertices of Eq. (21) start at order $\epsilon^3$ in our counting. However, one can check explicitly that, aside from the well-known triangle graph contribution [15,16], at threshold there exist no other loop contributions to $E^{\pi^0 p}_{0+}$ involving $\Delta(1232)$ to this order in the $\epsilon$-expansion.
The photoproduction amplitude is then given by the diagrams of Fig. 1 a)-c). Due to the structure of the leading order matrices $A^{(1)}$, the corresponding 1PI vertices give no contribution at threshold. For the nucleon-nucleon transition this is well known. For the nucleon-delta vertex, the situation is similar. To leading order, the $\gamma N\Delta$ vertex does not exist, the $\gamma\pi N\Delta$ coupling vanishes for the neutral pion, and the $\pi N\Delta$ vertex is proportional to $q_\mu$, which, when contracted with the projection operator $P^{3/2}_{\mu\nu}$ associated with the delta propagator, vanishes at threshold. Moreover, 1PI vertices without pions or photons attached also only begin at $O(\epsilon^2)$; this is the reason why no tree diagrams with more than a single propagator need be considered. Thus the 1PI one-photon and one-pion vertices are needed to $O(\epsilon^2)$, while the 1PI $\pi\gamma$ vertices are to be calculated to $O(\epsilon^3)$.
Analyzing Eq. (26), we evaluate the required vertex structures for our calculation. Vertices which do not involve spin 3/2 particles can be taken from [11,23]. Summing up all the nucleon-only contributions, including the triangle graphs, we recover the LET of Eq. (1), as expected. We now proceed to analyze the effects of 1/M corrected vertices involving $\Delta(1232)$.
Due to the fact that the photoexcitation of $\Delta(1232)$ only starts with the M1 transition, there is no $\gamma N\Delta$ interaction at $O(\epsilon)$. Consequently, there is also no 1/M corrected vertex at $O(\epsilon^2)$. However, the well-known relativistic counterterm lagrangian [13,22] provides a large part of the M1 transition strength and leads to the heavy baryon structure which we use in the diagrams of Fig. 1b. The leading order $\pi N\Delta$ vertex does not contribute at threshold, but its 1/M corrected structure can provide a contribution to the s-waves! Multiplying out the relevant matrices, we find the two corresponding vertices explicitly. These two vertices then lead to an $O(\epsilon^3)$ contribution of $\Delta(1232)$ to the process $\gamma p \to \pi^0 p$ at threshold, given by the diagrams in Fig. 1b. This new contribution, Eq. (32), is distinct in the following sense: in the chiral limit, it scales like $m_\pi^3$, i.e., the corresponding photoproduction amplitude is of order $p^4$. The LET of Eq. (1) is therefore not violated by this term. There are many other terms of $O(p^4)$ [17], but Eq. (32) is the only term of order $\epsilon^3$ arising from the 1/M corrections. In the physical world of finite pion mass, $E^\Delta_{0+}$ is in principle of the same order of magnitude as the $p^3$ effects. Moreover, it has the opposite sign to the large $p^3$ terms in the LET for $E_{0+}$.
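The bookkeeping behind this statement can be made explicit; the following short check is our own arithmetic, grounded only in the $m_\pi^3/(m_\pi+\Delta)$ scaling quoted in the conclusions:

```latex
% Delta stays finite in the chiral limit, so a geometric expansion gives
\frac{m_\pi^3}{m_\pi + \Delta}
  \;=\; \frac{m_\pi^3}{\Delta}\left(1 - \frac{m_\pi}{\Delta} + \dots\right)
  \;\xrightarrow[\;m_\pi \to 0\;]{}\; \frac{m_\pi^3}{\Delta} \;\sim\; m_\pi^3 ,
% i.e. an O(p^4) effect that cannot violate the O(p^3) LET of Eq. (1);
% at the physical point, where m_pi and Delta are of the same magnitude,
% the term is numerically comparable to the p^3 effects.
```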
In order to give a numerical estimate for $E_{0+}$ to $O(\epsilon^3)$, we add Eq. (32) and Eq. (1). Utilizing $b_1 = -2.30 \pm 0.35$ and $g_{\pi N\Delta} = 1.5 \pm 0.2$ [22], we obtain our prediction. Comparing with the number extracted from the most recent experiment [18], $E_{0+} = (-1.31 \pm 0.08) \times 10^{-3}/m_{\pi^+}$, which is in agreement with a chiral $O(p^4)$ calculation [17], we conclude that it is mandatory to calculate the $E_{0+}$ multipole to $O(\epsilon^4)$. As the $O(p^4)$ calculation shows, $N\pi$ loop graphs at the next order cancel to a large extent the big contribution from the triangle graphs. It will be interesting to see how big the $\Delta(1232)$ effects are at that order. Work in this direction is in progress.
To conclude, we have presented a systematic low energy expansion of HBChPT including spin 3/2 resonances. As an application, we have considered $\pi^0$ photoproduction at threshold and have found the leading contribution of $\Delta(1232)$ to $E^{\pi^0 p}_{0+}$ to be of order $m_\pi^3/(m_\pi + \Delta)$. Many other processes associated with the $\pi N\Delta$ system can be treated with the formalism described here.

Figure 1: Nucleon pole (a), delta pole (b), and contact (c) diagrams contributing to pion photoproduction in the heavy baryon approach.
Can words heal? Using affect labeling to reduce the effects of unpleasant cues on symptom reporting
Processing unpleasant affective cues induces elevated momentary symptom reports, especially in persons with high levels of symptom reporting in daily life. The present study aimed to examine whether applying an emotion regulation strategy, i.e. affect labeling, can inhibit these emotion influences on symptom reporting. Student participants (N = 61) with varying levels of habitual symptom reporting completed six picture viewing trials of homogeneous valence (three pleasant, three unpleasant) under three conditions: merely viewing, emotional labeling, or content (non-emotional) labeling. Affect ratings and symptom reports were collected after each trial. Participants completed a motor inhibition task and self-control questionnaires as indices of their inhibitory capacities. Heart rate variability was also measured. Labeling, either emotional or non-emotional, significantly reduced experienced affect, as well as the elevated symptom reports observed after unpleasant picture viewing. These labeling effects became more pronounced with increasing levels of habitual symptom reporting, suggesting a moderating role of the latter variable, but did not correlate with any index of general inhibitory capacity. Our findings suggest that using an emotion regulation strategy, such as labeling emotional stimuli, can reverse the effects of unpleasant stimuli on symptom reporting and that such strategies can be especially beneficial for individuals suffering from medically unexplained physical symptoms.
INTRODUCTION
The perception of signals coming from the body (interoception; Cameron, 2001) has been strongly linked to emotional processes. Emotion theories from James (1884) to newer views (Damasio, 1994;Wiens, 2005) consider bodily signals as essential elements of emotional experiences, while theories of interoception emphasize the role of an affective component of bodily sensations in the perception of body state (Craig, 2003). This inter-connection is further supported by neurobiological findings showing intertwined neural pathways for the representation of emotional experiences and the perception of bodily sensations (Craig, 2009).
This link between emotion and interoception has also been observed at the behavioral level, as emotional states seem to interfere with the perception of bodily signals, a process with important implications for the subjective experience of physical symptoms. Research has shown for example that the presence of unpleasant cues augments the perception of experimentally induced bodily sensations, like pain (de Wied and Verbaten, 2001;Meagher et al., 2001), dyspnea (Von Leupoldt et al., 2006), or esophageal stimulation (Phillips et al., 2003), as well as the reporting of physical symptoms in general (Salovey and Birnbaum, 1989). Recent work has further shown that unpleasant cues can result in increased symptom reports even without any physiological challenge, although this effect seems to be mostly observed in persons reporting frequent bodily complaints in daily life not explained by organic dysfunction (high habitual symptom reporters; Bogaerts et al., 2010;Constantinou et al., 2013). Such differential effects of unpleasant cues have been also reported for patients with functional syndromes (Montoya et al., 2005) or people scoring high on trait Negative Affectivity (Bogaerts et al., 2005).
Thus, it seems that some people are more prone to be influenced by emotional information than others in their subjective experience of current body state. Interestingly, both people high on trait Negative Affectivity (Gross and John, 2003;Moberly and Watkins, 2008) and patients with functional syndromes (Waller and Scheidt, 2006;van Middendorp et al., 2008) have been found to use ineffective strategies to regulate their emotion, like suppression and avoidance, while less effort for emotion regulation has been linked to increased symptom reporting during periods of stress in non-clinical samples (Goldman et al., 1996). Furthermore, brain imaging studies have shown that patients with functional syndromes show greater activation in limbic networks and reduced activation in prefrontal inhibitory systems compared to controls during unpleasant bodily stimulation (Mayer et al., 2005;Elsenbruch et al., 2010;Tillisch et al., 2011). These findings overall suggest that individuals who tend to over-report symptoms exhibit deficits in emotion regulation, and these deficits may contribute to the "fusion" of emotional experiences with the symptom perception process. Based on this notion, it can be hypothesized that enforcing emotion regulatory processes may reduce affective influences on symptom reporting in these groups, a proposition that is examined in this paper.
Emotion regulatory processes, that is "the processes by which individuals influence which emotions they have, when they have them and how they experience and express these emotions" (Gross, 1998;p. 275) include strategies employed intentionally to down-regulate emotion (explicit emotion regulation), as well as processes that incidentally result in down-regulation (implicit emotion regulation; Gyurak et al., 2011). Although, explicit emotion regulation strategies, like cognitive reappraisal or behavioral suppression, have dominated emotion regulation research (Berkman and Lieberman, 2009;Gyurak et al., 2011), recent work has focused on the effects of incidental emotion regulation strategies, such as affect labeling. The latter can be seen as an operationalization of the commonsensical notion that verbalizing feelings can dampen them (Hariri et al., 2000). Affect labeling tasks typically include presentations of emotional stimuli, but instead of explicit instructions to down-regulate emotions, participants are asked to assign emotional labels to the stimuli.
Assigning emotional labels has been found to reduce amygdala activation and increase inhibitory activation at prefrontal areas compared to non-verbally matching target faces with similar in expression facial stimuli (Hariri et al., 2000), to labeling non-emotional features of target stimuli (Lane et al., 1997;Lieberman et al., 2007) and to merely viewing pictorial cues (Taylor et al., 2003). Thus, affect labeling seems to activate inhibitory processes and exert emotion regulatory benefits (Lieberman et al., 2007) in a way similar to explicit emotion regulation (Ochsner et al., 2004). These benefits also extend to self-reported affect and autonomic reactivity, with studies indicating reduced physiological responding following labeling of emotional pictures (McRae et al., 2009). The attenuating effects of labeling seem to persist in time (Tabibnia et al., 2008), while recent data support its usefulness in the context of exposure therapy (Kircanski et al., 2012).
A strategy like affect labeling could be especially beneficial for persons experiencing medically unexplained physical symptoms, as they typically tend to avoid emotional experiences (van Middendorp et al., 2008) and are characterized by a difficulty in recognizing and expressing emotions (alexithymia; De Gucht and Heiser, 2003;Waller and Scheidt, 2006). Labeling an emotion implies the activation of prior conceptual knowledge about emotion categories, which among others includes ways to act upon specific emotions (Barrett, 2006). Thus, an affect labeling procedure may initiate emotion regulatory processes, that otherwise would not spontaneously occur in people who tend to over-report symptoms, by engaging them in categorizing affect. This is assumed to reduce experienced negative affect, which in turn may lead to a reduction in symptom reporting.
A secondary question is whether these hypothesized attenuating effects of affect labeling on symptom reports relate to dispositional regulatory capacities of high symptom reporters. Prior research suggests that successful emotion regulation depends on executive functioning abilities (Hofmann et al., 2012), and shares common neural substrates related to inhibitory processes, like the right VLPFC, with other forms of self-control (Cohen and Lieberman, 2011). As high symptom reporters seem to also perform poorly in tasks assessing executive control, like motor inhibition tasks (Glass et al., 2011), we aimed to examine whether labeling effects on symptom reporting depend on individual differences in general inhibitory abilities.
Besides behavioral tasks, an additional measure of regulatory capacity is used in this study, namely heart rate variability (HRV). HRV is considered a physiological marker of emotion regulation (Thayer and Lane, 2000; Appelhans and Luecken, 2006) as it reflects the capacity of efferent signals to modulate cardiac activity according to situational demands. It has been associated with self-reported (Fabes and Eisenberg, 1997) and spontaneous regulation of emotion under unpleasant contexts (Pu et al., 2010).
To sum up, the present study aimed to examine: (a) whether affect labeling can reverse the augmenting effects of unpleasant cues on symptom reporting in a non-clinical sample, (b) whether this possible reduction is modulated by the level of habitual symptom reporting, (c) whether such a reduction is predicted by changes in experienced affect, and (d) whether it correlates with behavioral, self-reported and physiological indices of self-regulation.
To this end, students with varying levels of habitual symptom reporting completed a modified Affect Labeling task, which included viewing pleasant and unpleasant pictures under three conditions: merely viewing, labeling the emotion of the picture, and labeling the content of the picture. Picture viewing was followed by affect ratings and a symptom checklist. Participants also completed a computerized motor inhibition task (Parametric Go-No Go task) and an HRV assessment. We expected that: (a) labeling the emotion of the pictures will lead to reduced self-reported affect, as well as reduced symptom reports after unpleasant pictures compared to merely viewing the pictures and non-emotional labeling, (b) the effects of emotional labeling on symptom reports will be more pronounced at higher levels of habitual symptom reporting, (c) the reduction in symptom reports during labeling will be predicted by reductions in affect ratings, and (d) this reduction in symptom reports will correlate positively with inhibitory capacity as assessed by the Go-No Go task, with self-reported self-regulation capacity and with HRV indices.
SAMPLE
Data were collected from 63 healthy first year psychology students (seven male, M age = 19.02, SD age = 1.52), who were invited based on their scores on the Checklist for Symptoms in Daily Life (CSD; Wientjes and Grossman, 1994) obtained during collective psychological testing. Participants with scores across the whole spectrum of habitual symptom reporting were selected for participation. Specifically, the initial pool of students (N = 401) was divided into four groups based on the quartiles of the total score of the CSD (scores could range from 39 to 195) and an equal amount of participants was invited from each group. Due to unbalanced response rates, an equal number of participants from each group was not feasible: the CSD scores of the final sample followed a rather normal distribution (range = 50-116, M = 82.87, SD = 15.20).
Participants were excluded if they reported to (a) have a diagnosis of a medical or psychiatric disorder, (b) have an electronic implant (e.g., pacemaker), or (c) use anxiolytic medication, antidepressants, or beta-blockers. The data for the Affect Labeling task of two participants who failed to appropriately follow the instructions were excluded from analyses. Students were compensated for their participation with course credit or a small monetary reward. The study was approved by the Multidisciplinary Ethical Committee of the Faculty of Psychology and Educational Sciences of the University of Leuven.
Modified affect labeling task
The typical Affect Labeling task (Lieberman et al., 2007) was modified as follows: (a) instead of faces, emotional stimuli consisted of pictures selected from the International Affective Picture System (IAPS; Lang et al., 2005), (b) stimuli were grouped into sets homogeneous in valence, and (c) a symptom reporting phase was added to each trial. Emotional pictures were selected using valence and arousal ratings of IAPS pictures provided by Belgian participants in other studies. They were grouped into six sets of 10 pictures, three pleasant and three unpleasant, so that sets of similar valence did not differ from each other in valence or arousal. Furthermore, in each pleasant set, five pictures scored high on excitement (e.g., skiing) and five on contentment (e.g., cute animals) based on the Mikels et al. (2005) norms, while in each unpleasant set five pictures scored high on sadness (e.g., cemetery) and five on fear (e.g., gun).
The task consisted of six picture viewing trials, three with pleasant and three with unpleasant pictures. During each trial, 10 pictures were presented in the upper part of the screen for 6 s each (no inter-stimulus interval) and participants had to perform one of three tasks: (a) VIEW: merely watch the pictures, (b) LABEL EMOTION: select from two emotion words presented under the picture (two out of: content, excited, sad, afraid) the one most applicable to the depicted emotion, and (c) LABEL CONTENT: select from two words presented under the picture (two out of: object, animal, human, landscape) the one most applicable to describe the content of the picture.
Each trial consisted of: (a) a 3-s presentation of a word cue signaling the task participants had to perform (VIEW, LABEL EMOTION, LABEL CONTENT), (b) a 60-s period of picture viewing, and (c) a 90-s inter-trial period, during which participants electronically completed affect ratings and a symptom checklist (see further).
Motor inhibition task
Motor inhibition was assessed with the Parametric Go-No Go task (PGNG; Langenecker et al., 2007), a reaction-time task with increasing inhibitory demands. During the PGNG, participants see letters on a computer screen (black lower-case letters on a white background) presented one after the other for 500 ms each without inter-stimulus interval. At the first level of the task, participants press a button as quickly as possible whenever one of three target letters (x, y, z) appears on screen. At the second level, participants are asked to press a button every time one of two targets (x or y) appears on screen, but only when the current target is different from the previous one, i.e., respond to a "y" after responding to an "x" (Go trials). They must inhibit their response when the current target is the same as the previous one, i.e., inhibit responding to an "x" after responding to an "x" (No Go trials). The third level is identical to the second but with three targets this time (x, y, z). The accuracy (percentage of correct responses) on the "No Go" trials of the last two levels was used in analyses as an index of behavioral inhibition capacity.
Heart rate variability
Baseline heart rate was recorded using a Polar RS800CX watch (Polar Electro Oy, Kempele, Finland), with a chest strap on which electrolyte gel was applied, placed just below the chest. Participants were asked to sit in a comfortable chair, relax, and breathe normally for 5 min. The experimenter holding the watch was seated at the side of the participant, so that the watch was not visible. Polar watches are commonly used to collect heart rate and they have been found to provide data comparable to those by traditional ECG electrodes (Nunan et al., 2009). The recorded R-R intervals were off-line processed with the ARTiiFACT software (Kaufmann et al., 2011) to extract HRV parameters by two independent raters (inter-rater reliability: r = 0.92-0.99). For the purposes of this study, the RMSSD time-domain parameter and the High Frequency (HF) frequency-domain parameter were used.
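For concreteness, here is a minimal sketch of how the RMSSD time-domain parameter can be computed from a sequence of R-R intervals. The artifact-correction step performed in ARTiiFACT is omitted, and the interval values shown are hypothetical.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of R-R intervals (ms).

    A standard time-domain HRV parameter: larger values indicate
    greater beat-to-beat (vagally mediated) variability.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)  # successive differences RR[i+1] - RR[i]
    return np.sqrt(np.mean(diffs ** 2))

# Hypothetical excerpt of a 5-min resting recording (milliseconds).
rr_intervals = [812, 845, 830, 795, 860, 842, 828, 815]
print(f"RMSSD = {rmssd(rr_intervals):.1f} ms")
```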
Due to technical problems, the data of eleven participants were not used, while another participant was excluded due to smoking right before the recording.
Habitual symptom reporting
The CSD based on Wientjes and Grossman's symptom checklist (1994) was used to assess participants' tendency for symptom reporting in everyday life. In this 39-item questionnaire participants rated on a 5-point Likert Scale (1 = never, 5 = very often) the extent to which they experienced a variety of symptoms, e.g., headache, dizziness, back pain, etc. over the past year. Total scores were used for the selection of participants. The reliability (Cronbach's alpha) of the total scores exceeded 0.90 in our sample.
Self-control
As a self-report measure of self-regulation, the Dutch version of the Brief Self-Control Scale (Tangney et al., 2004) was used. This 13-item questionnaire consists of statements like "I am good at resisting temptation," and participants rated the extent to which each statement reflects them on a 5-point Likert scale (1 = not at all, 5 = very much). Internal consistency and test-retest reliability have been found to exceed the criterion of 0.70 in the English (Tangney et al., 2004) and the Dutch version (as reported in Finkenauer et al., 2005).
Emotion regulation
The Dutch version of the 10-item Emotion Regulation Questionnaire (Gross and John, 2003) was used to assess people's reliance on cognitive reappraisal to regulate their emotions (six items; e.g., "I control my emotions by changing how I think about the situation I'm in") or suppression (four items; e.g., "I control my emotions by not expressing them"). Participants rated their level of agreement with each statement on a 7-point Likert scale. A reappraisal and a suppression score were calculated.
State symptom reports
A list of 14 complaints was incorporated in the modified Affect Labeling task after each trial. The list included 10 everyday symptoms previously used in a similar picture viewing paradigm (Bogaerts et al., 2010;Constantinou et al., 2013) and 4 additional gastro-intestinal symptoms added for exploratory reasons (chest tightness, pounding of the heart, headache, fatigue, not able to breathe deeply, rapid heartbeat, dizziness, muscular pain, stomach or abdominal cramps, nausea, stomach pain, bloated stomach, reflux sensations, burning feeling in the eyes). Participants rated the presence of each of these complaints during picture viewing on a 5-point Likert scale (1 = not at all, 5 = very strong). A total symptom score (ranging from 14 to 70) was computed for each trial.
Affect ratings
After each trial of the modified Affect Labeling task, participants also rated their experienced affect during picture viewing in the dimensions of valence, arousal and control using a computerized nine-point version of the Self-assessment Manikin (SAM; Bradley and Lang, 1994). With this pictorial scale, values for each of the three dimensions are represented by nine human figures depicting gradually increasing valence, arousal or control, and participants respond by choosing the appropriate figure.
PROCEDURE
Participants selected based on their CSD scores were invited by e-mail to participate in a study about "emotions and reaction time." For the HRV assessment, participants were asked to refrain from alcohol for 12 h and smoking, physical exercise and caffeine for 4 h before the experiment. They were also asked not to eat 2 h prior to the experiment.
Upon arrival, participants gave written informed consent and their compliance with the aforementioned instructions was assessed. A series of factors that may influence HRV was also recorded: smoking frequency, alcohol and caffeine consumption, exercise and BMI. They also completed a first set of questionnaires (General Health Questionnaire and the CSD). After questionnaire completion, the equipment was attached and the HRV recording took place. Afterwards, participants were seated in front of a desktop computer and completed the PGNG task and the Affect Labeling task in counterbalanced order.
The three levels of the PGNG task were completed at fixed order while for the Affect Labeling task the six picture viewing trials were presented in a semi-counterbalanced order. Specifically, 12 different orders were constructed, so that each of the six trials was presented twice in a specific order position, while making sure that each of the three sets of pleasant/unpleasant pictures was presented four times for each of the three tasks (View, Label Emotion, Label Content).
A 10-min break was added between the PGNG and the labeling task to avoid fatigue effects, during which participants completed the rest of questionnaires (Self-Control Questionnaire, Emotion Regulation Questionnaire). The researcher was present in the room throughout the experiment and seemed to be working at the other side of the room, so that participants did not feel disturbed or watched, but could ask questions whenever needed.
The Affect 4.0. software (Spruyt et al., 2010) was used for stimuli presentation and timing of the labeling task, while E-prime 1.0 (Schneider et al., 2002) was used for the presentation of the PGNG task.
DATA ANALYSES
Data from the Affect Labeling task were analyzed with repeated measures ANCOVA with Emotion (positive/negative) and Task (View/Label Content/Label Emotion) as within-subject variables and CSD scores as a continuous predictor (after centering to the mean). Repeated measures ANCOVAs were run with the affect ratings after each trial (valence, arousal, and control) as dependent variables to confirm the emotion regulatory effects of the task, and with the total symptom scores to assess the effects of emotion regulation on symptom reporting. Greenhouse-Geisser corrected p-values and epsilon are reported when the sphericity assumption was violated, while follow-up comparisons were examined with post hoc Bonferroni tests. Interactions involving the continuous predictor were inspected by plotting effects at different levels of the continuous predictor (average, +1 SD, −1 SD).
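The analyses were run in STATISTICA; for readers without access to it, a rough open-source analogue of this repeated measures ANCOVA is a linear mixed model with a random intercept per participant. This is a sketch under assumptions, not the identical procedure (it does not reproduce the Greenhouse-Geisser correction), and the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per participant x trial.
# Hypothetical columns: subject, emotion (pos/neg),
# task (view/content/emotion), csd, symptoms.
df = pd.read_csv("labeling_task_long.csv")

# Center the continuous predictor, as was done for the ANCOVA.
df["csd_c"] = df["csd"] - df["csd"].mean()

# A random intercept per participant approximates the
# repeated-measures error structure.
model = smf.mixedlm("symptoms ~ C(emotion) * C(task) * csd_c",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```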
To test the relationship between labeling effects and other regulatory measures, Pearson's bivariate correlations were conducted among PGNG accuracy, the HRV indices, the questionnaires, and the emotion and content labeling effects on symptom reports. The latter variables were calculated by subtracting the total symptom scores during the viewing trial from those of the emotion labeling trial and the content labeling trial, respectively. Similar difference scores were calculated for valence and arousal ratings after each trial to examine, in multiple regressions, whether labeling effects on emotion predict labeling effects on symptom reports. All analyses were conducted with STATISTICA 11.0 software (Statsoft, Inc., Tulsa, OK, USA).
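The difference-score and correlation step can be made concrete with a minimal sketch; the column names are hypothetical, and negative difference scores indicate a reduction in symptom reports after labeling.

```python
import pandas as pd
from scipy.stats import pearsonr

# Wide-format scores for the negative trials (hypothetical columns):
# sym_view, sym_label_emotion, sym_label_content, pgng_nogo_acc, rmssd.
wide = pd.read_csv("negative_trials_wide.csv")

# Labeling effects: labeling trial minus viewing trial.
wide["emo_label_effect"] = wide["sym_label_emotion"] - wide["sym_view"]
wide["con_label_effect"] = wide["sym_label_content"] - wide["sym_view"]

# Bivariate correlations with the self-regulation indices.
for effect in ("emo_label_effect", "con_label_effect"):
    for index in ("pgng_nogo_acc", "rmssd"):
        r, p = pearsonr(wide[effect], wide[index])
        print(f"{effect} vs {index}: r = {r:.2f}, p = {p:.3f}")
```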
SAMPLE CHARACTERISTICS
Sample characteristics are illustrated in Table 1. All participants were healthy and did not take medication, except for one participant who had medication-controlled asthma. The vast majority were non-smokers, about 81% of the sample did not consume coffee regularly, and 89% consumed alcohol rarely or only weekly. Most female participants were using contraceptive pills.
MANIPULATION CHECKS
To examine whether the Affect Labeling task was successful in regulating affect, repeated measures ANCOVAs were conducted with valence, arousal and control ratings as dependent measures (see Table 2 for means and SDs). For valence, an Emotion main effect, F(1,59) = 753.81, p < 0.001, partial η² = 0.93, and an Emotion × Task interaction, F(2,118) = 15.53, p < 0.001, partial η² = 0.21, were found, indicating that negative trials led to significantly more unpleasantness than positive ones, but within the negative trials, both labeling conditions were rated as less unpleasant (emotion labeling: M diff = −0.79, 95% CI [−1.28, −0.30], p < 0.001; content labeling: M diff = −0.92, 95% CI [−1.41, −0.43], p < 0.001) compared to merely viewing unpleasant pictures (Figure 1). For the positive trials, the opposite effect was observed, with labeling conditions rated as more unpleasant compared to merely viewing pleasant pictures, especially Content Labeling (M diff = 0.53, 95% CI [0.13, 0.92], p < 0.01). Furthermore, Task interacted significantly with CSD scores (the continuous predictor), F(2,118) = 5.17, p < 0.01, partial η² = 0.08. This interaction was further explored with separate analyses for each task, which showed that the CSD scores had a nearly significant effect on valence ratings for merely viewing, F(1,53) = 3.49, p = 0.07, partial η² = 0.06, but not for the two labeling conditions. Plotting the effect of Task at different levels of CSD scores (average, +1 SD, −1 SD) indicated that the difference between viewing and the two labeling conditions was more pronounced as CSD scores increased (Figure 2).
FIGURE 1 | Emotion × Task interaction effect for valence (top left), arousal (top right) and control ratings (bottom left) and symptom reports (bottom right) after each trial.
For arousal ratings, labeling the pictures led to reduced arousal compared to merely viewing (Figure 1). No main effects or interactions with CSD scores were found for arousal ratings.
Overall, these analyses indicate that (a) the cues evoked the expected emotional reactions and (b) these reactions were dampened when labeling the pictures emotionally or non-emotionally, suggesting that the task successfully produced emotion regulatory effects.
Labeling effects on symptom reports and measures of self-regulation
To examine whether labeling effects on symptom reports relate to participants' inhibitory capacities, we calculated difference scores subtracting the total symptom score of the viewing condition from (a) that of the Emotion Labeling condition (emotion labeling effect) and (b) that of the Content Labeling condition (content labeling effect), so that negative values represent a reduction and positive values an increase in symptom reports after labeling. As task had no effect on positive trials, only difference scores for the negative trials were calculated. Pearson's r correlations indicated that the two labeling effects did not correlate significantly with accuracy on the PGNG task (means in Table 1). Performance on the PGNG task did not correlate with CSD scores either. Similarly, no significant correlations were found for the HRV indices or self-reported regulatory abilities.
Labeling effects on symptom reports and affect ratings
To examine whether labeling effects on symptom reporting are predicted by the emotion regulatory effects of labeling, two multiple regressions were conducted with (a) the emotion labeling effect on symptom reports and (b) the content labeling effect on symptom reports as dependent variables. Predictors for each regression were the corresponding labeling effect on valence ratings, the corresponding labeling effect on arousal ratings, total CSD scores (all centered to the mean), and the interactions of CSD scores with the valence and arousal effects. All five predictors were entered together in the regression to examine the effects of each emotional dimension (valence, arousal) while controlling for the other. For the emotion labeling effect, the overall model was nearly significant, R²adj. = 0.10, F(5,55) = 2.28, p = 0.06. The only predictor approaching significance was arousal (β = 0.25, SE = 0.13, t = 1.92, p = 0.06), with greater reductions in arousal after Emotion Labeling predicting greater reductions in symptom reports. For the content labeling effect, the model was statistically significant, R²adj. = 0.12, F(5,55) = 2.68, p < 0.05. A significant interaction between CSD scores and the content labeling effect on arousal ratings was found (β = −0.60, SE = 0.24, t = −2.54, p = 0.01) and was further explored as suggested by Aiken and West (1991). Specifically, the regression slopes for three levels of CSD (average, +1 SD, −1 SD) and three levels of arousal (average, +1 SD, −1 SD) were calculated; these showed that as perceived arousal decreases after Content Labeling, symptoms also decrease, but only for those low in CSD (Figure 4A). Furthermore, the content labeling effect on valence ratings and its interaction with CSD scores also approached significance (β = −0.31, SE = 0.16, t = −1.99, p = 0.05 and β = −0.46, SE = 0.23, t = −2.00, p = 0.05), with increases in valence after Content Labeling resulting in a reduction of symptom reports. The interaction showed that this effect becomes more pronounced as CSD scores increase (Figure 4B).
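The Aiken and West (1991) probing used above amounts to evaluating the fitted regression at ±1 SD of the moderator; a minimal Python sketch with hypothetical variable names follows (the original analysis was run in STATISTICA).

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("content_labeling_effects.csv")  # hypothetical file
for var in ("arousal_effect", "valence_effect", "csd"):
    df[var + "_c"] = df[var] - df[var].mean()

# Five predictors: two main effects, the moderator, two interactions.
fit = smf.ols("symptom_effect ~ (arousal_effect_c + valence_effect_c) * csd_c",
              data=df).fit()

# Simple slope of the arousal effect at low, average, and high CSD.
sd = df["csd_c"].std()
b_main = fit.params["arousal_effect_c"]
b_int = fit.params["arousal_effect_c:csd_c"]
for label, level in (("-1 SD", -sd), ("mean", 0.0), ("+1 SD", sd)):
    print(f"CSD {label}: simple slope = {b_main + b_int * level:.2f}")
```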
DISCUSSION
The present study investigated whether applying an implicit emotion regulation technique can reduce the well-documented augmenting effect of unpleasant cues on symptom reporting and whether individual differences in habitual symptom reporting moderate such a reduction. To this end, an affective picture paradigm previously used to induce elevated symptom reports (Bogaerts et al., 2010;Constantinou et al., 2013) was combined with an affect labeling task (Lieberman et al., 2007). Healthy participants varying in habitual symptom reporting viewed pleasant and unpleasant pictures under three conditions: passive viewing, labeling the emotion, and labeling the content of the pictures, and then completed affect ratings and a symptom checklist.
Manipulation checks indicated that labeling led to less extreme ratings of valence, arousal, and control compared to passive viewing for both pleasant and unpleasant pictures. These results are in line with studies showing that labeling can dampen both positive and negative emotions and support its usefulness as an emotion regulation technique. Main analyses showed that labeling additionally led to a reduction in symptom reports after picture viewing. This reduction was, as expected, moderated by participants' level of habitual symptom reporting as the difference between the two labeling conditions and passive viewing was more pronounced at higher levels of habitual symptom reporting. This suggests that people who experience frequent medically unexplained symptoms can benefit the most from labeling procedures. Regression analyses further indicated that the effects of labeling on symptom reports are predicted by its effects on experienced affect.
Interestingly, contrary to our initial hypothesis, emotional and content labeling influenced self-reported affect in a similar way, suggesting that both kinds of labeling can have emotion regulatory properties. This contradicts previous studies showing that affect labeling has a rather specific effect on inhibitory pathways in the brain, which is not found for non-emotional labeling (Lane et al., 1997; Lieberman et al., 2007). However, as Lieberman et al. (2011) pointed out in one of the few studies examining self-reported affect, little is known about the effects of non-emotional labeling on self-reports. Current findings indicate that the specificity found for emotional labeling in brain activations is not replicated in self-reports. This discrepancy suggests that non-emotional labeling may influence self-reported affect through different routes than those described for emotional labeling. For example, non-emotional labeling may function as distraction (as it draws attention away from the emotional components of the pictures), a strategy that seems to have effects on self-reports comparable to affect labeling, or it may be that the labeling process itself (regardless of the kind of label) in general results in attenuated affect (Lange et al., 2003; Lieberman et al., 2011). Further research is needed to delineate the mechanisms underlying the effects seen in our content labeling condition.
Besides experienced affect, both kinds of labeling also influenced symptom reporting. Symptom ratings after each trial showed that, as in previous studies (Bogaerts et al., 2010;Constantinou et al., 2013), the mere presentation of unpleasant pictures can induce elevated symptom reports. Labeling the unpleasant pictures, though, either emotionally or non-emotionally, reduced this bias, an effect that seems to be most profound in people high in habitual symptom reporting. Prior studies, as well as current findings, have shown that high symptom reporters are more prone to the influences of unpleasant cues on symptom reporting (Bogaerts et al., 2010;Constantinou et al., 2013). This has been attributed to the combination of a reduced ability to regulate emotion with more elaborate and accessible representations of symptom experiences in this selected group (Brown, 2004;Bogaerts et al., 2010). As these symptom representations are inherently linked to unpleasantness, they are assumed to be automatically triggered by affectively-congruent cues (Bower, 1981;Lang, 1995) producing the effects observed in our picture viewing paradigms.
These automatic effects of affective cues could be constrained by the activation of inhibitory control processes that regulate affect (Banich et al., 2009), which we hypothesized to be less successfully employed by high habitual symptom reporters. This assumption was supported by the finding that labeling effects were more pronounced at higher levels of habitual symptom reporting, as well as by the fact that during the view condition (where no instructions are given) higher habitual symptom reporting was related to more unpleasantness and more symptom reporting after unpleasant cues. Thus, high symptom reporters seem less able to spontaneously regulate affect, but can successfully engage in the labeling tasks and benefit more from them. These results, using a well-controlled experimental design, provide support for interventions targeting emotion regulation training, like expressive writing, in patients with functional syndromes (Broderick et al., 2005;Junghaenel et al., 2008).
The connection between emotion regulation and symptom reporting was further emphasized by regressions showing that the reduction of symptoms during labeling was predicted by reductions in experienced affect during picture viewing, especially self-reported arousal. It is important to note, though, that symptom reduction during content labeling was mostly related to arousal for low habitual symptom reporters, but to valence for high symptom reporters. This may point to differences between the two groups in how labeling works and how symptom reports emerge. For low symptom reporters, reductions in symptoms during labeling may stem from reductions in actual physiological arousal, while for high symptom reporters labeling effects on symptom reports seem unrelated to actual bodily changes, but rather depend on the experienced unpleasantness and the resulting affectively congruent schema activations. As objective measures of autonomic arousal were not included in this study, this tentative hypothesis cannot be examined with the current data.
Another finding worth pointing out is that although emotion regulation seems to reduce affective influences on symptom reporting, especially for high habitual symptom reporters, this process does not seem to relate to their dispositional regulatory capacity. Contrary to our hypotheses, measures of general inhibitory capacity, like a motor inhibition task (the PGNG task), a physiological marker (HRV), or self-reported regulatory ability, were not associated with the beneficial effects of labeling on symptom reporting, nor with habitual symptom reporting. This rather surprising lack of correlations may suggest that labeling influences symptom experiences through other processes, like distraction and people's expectations about its effects, rather than via the inhibitory effects of emotional labeling (Lieberman et al., 2007). The similar effects of emotion and content labeling on symptom reports further show that the inhibitory processes involved in emotional labeling did not have additional effects in the context of symptom reporting.
Alternatively, this lack of correlations may also be partly due to the neutral nature of the motor inhibition task. As there is no affective-motivational component in the PGNG task, it may be less relevant to the resources required for emotion regulation (Banich et al., 2009). Although common substrates have been reported for "cold" and "hot" inhibitory control (Cohen and Lieberman, 2011), different kinds of inhibition can also have additional distinct effects (Dillon and Pizzagalli, 2007). Thus, future research using emotional equivalents of inhibitory tasks would be useful in delineating these associations. Additionally, although intentional emotion regulation has been related to motor inhibition, implicit emotion regulation tasks may be less related to tasks that involve intentional efforts for inhibitory control, a hypothesis that should be further explored.
Finally, several limitations of the present study should be reported. Firstly, the amount of symptoms reported by participants during the task was rather low, which is to be expected when healthy young people rate their body state without experimentally induced bodily symptoms. However, this may limit the strength of the effects and their generalizability from mild and short-lived symptom experiences to more long-lasting, debilitating symptoms, such as those experienced by patients with functional syndromes. Furthermore, the current analyses used total symptom scores, so it cannot be concluded whether the observed effects were general or specific to certain symptom categories. Future research could try to delineate the extent of labeling effects on symptom reporting and replicate these findings in patient populations, both in similar experimental designs and in more ecologically valid paradigms (e.g., diary studies). Additionally, the current results pose further questions regarding the mechanisms behind labeling effects on self-reports, which should also be addressed by future research.
CONCLUSION
Current findings emphasize the malleability of the symptom reporting process and its susceptibility to mild manipulations, like the presentation of unpleasant cues. This study showed that symptom reporting can easily be augmented, but also easily reduced, via the application of implicit emotion regulation techniques. High habitual symptom reporters seem to be more prone to these manipulations. The study also provides first indications of the usefulness of such strategies in reducing affective biases in symptom perception, although more research is needed to verify the beneficial effects of these strategies in clinical groups (e.g., patients with functional syndromes).
ACKNOWLEDGMENTS
The authors would like to thank Dr. Stefan Sütterlin for his assistance and guidance regarding the extraction of the HRV parameters used in this study.
Several inverse problems in thermal physics
— Solution approaches for several thermal-physical inverse problems are discussed in the paper, based on the expansion of a non-stationary temperature field in a series. The applied method for separation of variables differs from the conventional method of separation of variables, and so does its effect; it is especially convenient for solving inverse thermal-physical problems. Abraham Temkin (1919-2007) created this method for separation of variables, and he also offered several methods for solving inverse problems. Those methods are not studied enough and are not well known among a wider range of experts. They were created between 1956 and 1973 and published in various journals in Samara, Moscow, Minsk, and Riga. All those publications are in Russian and are not available in electronic format.
INTRODUCTION
The first publication that we know of, and which laid the foundation for the unconventional separation of variables, is [1]. In that paper it is proved that a function defined by a convolution integral,

$$f(t) = \int_0^t K(t-s)\,\varphi(s)\,ds, \qquad (1)$$

can be expanded in a series in the derivatives of $\varphi$; this expansion is series (2). The paper also illustrates that series (2) converges if all derivatives $\varphi^{(n)}(t)$ are bounded. If the kernel $K$ is the Dirac delta function $\delta$, then $f(t) = \varphi(t)$, and in such a case series (2) is the expansion of the function f(t) in the Taylor series.
As the solution of a heat transfer equation is expressed by a convolution integral when the initial distribution of temperature is homogeneous and time-dependent boundary conditions are set on the boundary, it is natural to apply the discussed series to solving a heat transfer equation.
The solutions obtained in [2] are in a form convenient for solving inverse heat transfer problems. Later on, A. Temkin had many publications devoted to those issues, whose results are summarized in a monograph [2] published in 1973. It should be noted that computing opportunities were quite limited at the time when those methods were created; therefore the methods have not been verified sufficiently.
SOLUTION OF A HEAT TRANSFER EQUATION
A solution of the heat transfer equation written by means of a generalized Taylor series in an arbitrary curved area is given in [2]. For the purpose of simplicity, we shall consider only the simplest solids, where every point depends on only one coordinate of the area: a plate, a cylinder, and a sphere. Speaking of inverse heat transfer problems, solids of this particular kind are used for the determination of the thermo-physical properties of materials. A symmetric temperature field in those solids is described by the equation

$$\frac{\partial t}{\partial \tau} = a\left(\frac{\partial^2 t}{\partial x^2} + \frac{k-1}{x}\,\frac{\partial t}{\partial x}\right), \qquad (6)$$

where t is temperature, τ is time, x is the coordinate, k = 1 for the plate, k = 2 for the cylinder, k = 3 for the sphere, and a is the temperature conductivity coefficient. If convective heat exchange occurs on the surface with an environment having time-dependent temperature $t_1(\tau)$, then the following equality is valid on the boundary:

$$\lambda\,\frac{\partial t}{\partial x}\bigg|_{x=b} = \alpha\,\big(t_1(\tau) - t(b,\tau)\big), \qquad (7)$$

with the homogeneous initial condition $t(x,0) = t_0$ as condition (8). In dimensionless form, with N = x/b, F = aτ/b² and B = αb/λ, problem (6)-(8) is written down as equation (9) with boundary condition (10) and initial condition (11). According to [2], the solution of problem (9)-(11) has the form (12)-(14): a series in the derivatives of the boundary function with coordinate functions $P_n(N)$, formula (13), plus a correction term, formula (14). This result is obtained in two ways in [2], that is, by inserting (13) and (14) into equation (9) and equating the coefficients of derivatives of the same order, and by deriving it from the conventional solution.
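As a numerical illustration of problem (6)-(8), the following Python sketch integrates the reconstructed equation with an explicit finite-difference scheme. The function name, discretization parameters, and material values are illustrative assumptions, not part of [2]:

```python
import numpy as np

def solve_radial_heat(a, b, k, t1, alpha, lam, t0=0.0, nx=51, tau_end=600.0):
    """Explicit finite-difference integration of equation (6),
    dt/dtau = a*(d2t/dx2 + (k-1)/x * dt/dx) on x in [0, b],
    with symmetry at x = 0 and the convective condition
    lam * dt/dx|_{x=b} = alpha*(t1(tau) - t(b, tau)) at the surface.
    k = 1 plate, k = 2 cylinder, k = 3 sphere."""
    x = np.linspace(0.0, b, nx)
    dx = x[1] - x[0]
    dtau = 0.2 * dx**2 / a                 # conservative explicit stability step
    T = np.full(nx, t0, dtype=float)
    for n in range(int(tau_end / dtau)):
        tau = n * dtau
        rhs = np.zeros_like(T)
        # interior nodes: second derivative plus the (k-1)/x first-derivative term
        rhs[1:-1] = a * ((T[2:] - 2.0*T[1:-1] + T[:-2]) / dx**2
                         + (k - 1) / x[1:-1] * (T[2:] - T[:-2]) / (2.0*dx))
        # centre: by symmetry dt/dx = 0, and (k-1)/x * dt/dx tends to (k-1)*d2t/dx2
        rhs[0] = a * k * 2.0 * (T[1] - T[0]) / dx**2
        # surface: eliminate a ghost node using the Robin condition (7)
        Tg = T[-2] + 2.0*dx * alpha / lam * (t1(tau) - T[-1])
        rhs[-1] = a * ((Tg - 2.0*T[-1] + T[-2]) / dx**2
                       + (k - 1) / b * (Tg - T[-2]) / (2.0*dx))
        T = T + dtau * rhs
    return x, T

# example: a plate suddenly exposed to a 100-degree environment
x, T = solve_radial_heat(a=1.2e-5, b=0.05, k=1,
                         t1=lambda tau: 100.0, alpha=50.0, lam=45.0)
```

The explicit scheme is chosen only for transparency; any standard stiff integrator would serve equally well here.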
Here it should be noted that the solution of problem (9) is also expressed in the form (12)-(14) under boundary conditions of different types. The functions $P_n(N)$ depend on the type of boundary conditions and the geometry of the area. They are either polynomials or functions that include a polynomial as an addend; on this account those functions are called quasi-polynomials in [2].
For the initial condition (11) to be met, the correction term in formula (14) is set accordingly. It is proved in [2] that this term need not be taken into account in formula (12): an inequality given in [2] shows that it decays at least as fast as $e^{-\mu F}$, where µ > 0.
It is proved in [2] that $P_{2n}(N) > 0$ and $P_{2n+1}(N) \le 0$, and that an inequality bounding $|P_n(N)|$ in terms of powers of 1/B is valid. Therefore, if B > 1 and all derivatives of the boundary function are bounded, then (13) converges. Moreover, the more intense the heat exchange between the solid and the ambience, the faster the series converges.
A method for the determination of an asymmetric temperature field is also discussed in [2].
III. DETERMINATION OF THE TEMPERATURE CONDUCTIVITY COEFFICIENT
The temperature conductivity coefficient is determined in laboratory conditions by measuring the temperature inside a simple-shaped solid while it is warmed up (more rarely, cooled down). The heat transfer process is described by equation (6), where x ∈ [0, b]. Temperature is measured at two inner points x₁ and x₂, x₁ < x₂; the case x₂ = b is possible. Supposing that x₂ = b and transferring to dimensionless values as shown before, but keeping real temperature in equation (9), we get that the heat transfer process can be described by equation (9) with boundary conditions given by the measured temperatures. The coordinate functions $P_n(N)$ are given in [2]. Taking a finite number M of addends in formula (21) and denoting y = b²/a, we obtain a polynomial equation (22) for y, because $P_0(N) = 1$ for all k = 1, 2, 3 [2]. In this way, y is found as a polynomial root. If M = 1, then equation (22) reduces to formula (23). If temperature is measured in the middle (N₁ = 0), then (23) yields formula (24) for k = 1 and formula (25) for k = 2. Formulas (23)-(25) are found in [2] by means of the approach described in this article; [3] provided formulas (24) and (25), where they are obtained in a different way.
It is clear that a more precise result is anticipated if several addends are taken in the sum (22). If M = 2, then (22) yields a quadratic equation in y. Provided that $t_1''(\tau) < 0$, which is not hard to ensure experimentally, the product of the roots of equation (26) is negative. Hence the question of which root is valid disappears, because y must definitely be positive.
If temperature is measured in the centre (N₁ = 0) and k = 1, then (26) results in formula (27) [2], [4]. When M = 3, N₁ = 0 and k = 1, formula (28) is obtained [4]. In [4] we referred to a large number of calculations made using mathematical software, which illustrated that equation (28) has one real and two complex roots under the most varied boundary conditions t₁(τ). [4] compares the accuracy of formulas (24), (27), and (28) for the determination of the temperature conductivity coefficient, with the temperature field chosen as input data, obtained at a given temperature conductivity coefficient by solving the heat transfer equation numerically by means of mathematical software. It is concluded that formulas (27) and (28) are significantly more precise than formula (24), and that the accuracy of all discussed formulas improves if the times at which the temperatures used in the calculations are recorded are increased.
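The same kind of accuracy check can be mimicked without the closed-form roots: a brute-force least-squares fit of a to measured centre temperatures, using the forward solver sketched earlier. This is only a hedged numerical cross-check of formulas (24)-(28), not Temkin's method itself, and the search bounds below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_a(tau_meas, T_meas, b, k, t1, alpha, lam):
    """Recover the temperature conductivity coefficient a by matching the
    numerical centre temperature (N = 0) to measured values in a
    least-squares sense; reuses solve_radial_heat from the sketch above."""
    tau_meas = np.asarray(tau_meas, dtype=float)
    T_meas = np.asarray(T_meas, dtype=float)

    def misfit(a):
        T_model = np.array([solve_radial_heat(a, b, k, t1, alpha, lam,
                                              tau_end=tm)[1][0]
                            for tm in tau_meas])
        return float(np.sum((T_model - T_meas)**2))

    # bounds on a are assumed; narrow them from prior knowledge of the material
    res = minimize_scalar(misfit, bounds=(1e-8, 1e-4), method='bounded')
    return res.x
```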
IV. CALCULATION OF BOUNDARY CONDITIONS FROM TEMPERATURE MEASUREMENTS INSIDE A SOLID
The surface temperature of a solid can be determined by technical means (a thermal imaging camera), but this requires the surface to be open and accessible. If that is not the case, a thermal imaging camera cannot be applied. A solution is to measure the temperature inside the solid by means of a thermocouple and to determine the surface temperature of the solid from those measurements. Such an approach is recommended in [2].
Let us consider only simple-shaped solids, where the heat transfer process may be described by equation (6) and x ∈ [0, b]; it should be noted that this approach may also be applied to solids of complex shapes. The thermal-physical properties of the material, that is, the heat transfer coefficient λ and the temperature conductivity coefficient a, are known. Let us suppose that temperature is measured at an inner point of the area x₁ = b₁, b₁ < b.
Then in the area x ∈ [0, b₁] equation (6) is valid with boundary conditions (29), where q(τ) is obtained from the solution of problem (6), (29). When problem (6), (29), (30) is transferred to dimensionless form, we get equation (9), where N = x/b₁, N ∈ [1, b/b₁], with boundary conditions (31) and (32). In the last formula, U(F) is acquired from the solution of problem (9), (31). The solution of problem (9), (31), (32) in the case of high F values is sought in the form (33) [6], where the coordinate functions P_n(N,1) and P_n(N,2) are found by inserting (33) into equation (9) and requiring compliance with the boundary conditions. For k = 2 the solution of the described problem can be written down as a recurrent formula, (34)-(35). It is interesting to note that in [2] the solution of problem (9), (31), (32) is not obtained in the form (34), (35); the author of [2] followed a different scheme, starting from the solution of problem (9), (31) at high F values and N ∈ [0, 1].

V. DETERMINATION OF THE TEMPERATURE CONDUCTIVITY COEFFICIENT FOR THIN MATERIALS

If the material is thin (for instance, window glass, film, etc.), a thermocouple cannot be placed inside it. In such a case the studied material can be placed between two materials with known thermal-physical properties, and thermocouples can be placed in these materials as shown in Figure 1. In this scheme the thermocouple coordinates are x₁, x₂, x₃, and x₄.
The temperature conductivity coefficient a₁ and heat transfer coefficient λ₁ are known. The studied material is located between x = 0 and x = b.
In order to determine the temperature conductivity coefficient a of the studied material, the following scheme is proposed in [2]: 1. The heat flow q(x₂, τ) at the point x = x₂ is established from the temperature measurements at the points x = x₁ and x = x₂.
3. Having applied the temperature measurements at the points x = x₃ and x = x₄, t(b, τ) and q(b, τ) are calculated similarly.
6. By equating the results obtained in steps 4 and 5 above, the temperature conductivity coefficient a is established.
One should note that it is important that t(−l, τ) not be equal to t(l + b, τ).
The discussed scheme is implemented in [7], using as input data the temperature field in the area x ∈ [−l, l + b], calculated by means of the mathematical software MATHEMATICA by solving the problem numerically. [7] concludes that the scheme described in [2] works, but that there are restrictions on the thickness b of the studied material. If b is decreased, the problem becomes ill-conditioned and it is impossible to establish the temperature conductivity coefficient. This manifests itself in that, at too small values of b, no time range [τ₁, τ₂] exists in which a is constant. Physically it means that if the thickness of the studied material is too small, the material practically does not affect the readings of the thermocouples compared with the case where the studied material is absent altogether.
CONCLUSIONS
The examined solution of the heat transfer equation and the inverse problems associated with it are little known to a wide range of experts. Practical calculations given in [4], [6], [7] illustrate that the theoretical concepts summarized in [2] are applicable for solving inverse heat transfer problems. The discussed problems are only a part of the scientific heritage left by A. Temkin; the major part has not been studied yet.
Series (2) is called a generalized Taylor series due to the fact that, if the kernel is the Dirac delta function, it reduces to the ordinary Taylor series. In the notation used above, λ stands for the heat transfer coefficient, α is the heat exchange coefficient, and b is half of the thickness in the case of a plate, or the radius in the case of a cylinder or a sphere.
From the measured temperature, the temperature field in the area x ∈ [0, b₁] can be calculated unequivocally. The temperature t(b, τ) must be found. The area x ∈ [b₁, b] is a plate (k = 1), a hollow cylinder (k = 2) or a hollow sphere (k = 3) with thickness b − b₁. If we could determine the temperature field in this area, the temperature t(b, τ) would also be known. For the temperature field in this area to be determined unequivocally, boundary conditions must be set on both boundaries of the area. But it is impossible to set a boundary condition at x = b, because it is a calculated value. It is known from [5] that the temperature field in solids of this type is calculated unequivocally if boundary conditions of two different types are set on the boundary x = b₁, while no boundary conditions are set on the boundary x = b. In [2] boundary conditions of this kind are called boundary conditions of the fourth type, named after Likov. As the temperature field in the area [0, b₁] is calculated, the heat flow at x = b₁ is also calculated.
Dysfunctional T Follicular Helper Cells Cause Intestinal and Hepatic Inflammation in NASH
Nonalcoholic steatohepatitis (NASH), characterized by hepatic inflammation and cellular damage, is the most severe form of nonalcoholic fatty liver disease and the fastest-growing indication for a liver transplant. The intestinal immune system is a central modulator of local and systemic inflammation. In particular, Peyer’s patches (PPs) contain T follicular helper (Tfh) cells that support germinal center (GC) responses required for the generation of high-affinity intestinal IgA and the maintenance of intestinal homeostasis. However, our understanding of the mechanisms regulating mucosal immunity during the pathogenesis of NASH is incomplete. Here, using a preclinical mouse model that resembles the key features of human disease, we discovered an essential role for Tfh cells in the pathogenesis of NASH. We have found that mice fed a high-fat high-carbohydrate (HFHC) diet have an inflamed intestinal microenvironment, characterized by enlarged PPs with an expansion of Tfh cells. Surprisingly, the Tfh cells in the PPs of NASH mice showed evidence of dysfunction, along with defective GC responses and reduced IgA+ B cells. Tfh-deficient mice fed the HFHC diet showed compromised intestinal permeability, increased hepatic inflammation, and aggravated NASH, suggesting a fundamental role for Tfh cells in maintaining gut-liver homeostasis. Mechanistically, HFHC diet feeding leads to an aberrant increase in the expression of the transcription factor KLF2 in Tfh cells which inhibits its function. Thus, transgenic mice with reduced KLF2 expression in CD4 T cells displayed improved Tfh cell function and ameliorated NASH, including hepatic steatosis, inflammation, and fibrosis after HFHC feeding. Overall, these findings highlight Tfh cells as key intestinal immune cells involved in the regulation of inflammation in the gut-liver axis during NASH.
INTRODUCTION
Non-alcoholic fatty liver disease (NAFLD) is the most common form of liver disease worldwide, currently affecting more than 25% of Americans 1,2. As obesity and type 2 diabetes rates continue to surge, the prevalence of NAFLD has also increased, leading to an alarming health concern 3. NAFLD is a progressive condition that initiates with hepatic steatosis but can evolve into a more severe disease entity known as nonalcoholic steatohepatitis (NASH), which presents features of hepatocellular injury and fibrosis 2,4. It is estimated that nearly 25% of adults with NAFLD have NASH 2,4, with an increased risk of developing debilitating conditions such as cirrhosis and hepatocellular carcinoma 2,4. Despite recent efforts, our understanding of the precise cellular and immune mechanisms orchestrating NASH pathogenesis remains incomplete.

Human studies have implicated changes in the gut microbiota in promoting NAFLD and NASH 5-10, through mechanisms including increased gut permeability and intestinal bacterial overgrowth 11. Disruptions in the commensal microbiome can lead to altered metabolite production, such as decreased levels of butyrate, critical for maintaining intestinal integrity and permeability 12,13. A compromised intestinal integrity, in turn, allows the translocation of endotoxin, microbial antigens, and other inflammatory factors from the gut into the liver, leading to hepatic inflammation 6-9. Previous studies have demonstrated that the adaptive immune system is critical in regulating the microbiota and intestinal homeostasis, particularly through the actions of immunoglobulin A (IgA) 14-16. IgA is the most abundantly produced antibody in mammals 17,18 and directly targets gut microbes to restrain the outgrowth of specific microbes and promote their diversity 17. Consequently, a reduction in the affinity or specificity of intestinal IgA towards gut microbes can compromise this balance 16, and may influence NASH.

Peyer's patches (PPs) are part of the gut-associated lymphoid tissue (GALT) distributed throughout the small intestine 19,20 and are responsible for the regulation of intestinal immune responses 19,20. Indeed, PPs act as a central hub for the initiation of the adaptive immune response in the gut 19,20. The PPs harbor germinal center (GC) responses, critical for the affinity maturation of antibodies, the generation of memory B cells and long-lived plasma cells, and the production of intestinal IgA 19-21. In the GC, the interaction between Tfh cells and B cells is essential for somatic hypermutation and class switch recombination 22-24. However, the role and mechanisms by which Tfh cells regulate obesity-related metabolic diseases, such as NASH, remain inadequately understood.

Here, we focused on the role of intestinal Tfh cells in the maintenance of gut homeostasis and disease progression during HFHC diet-induced NASH. We found that intestinal Tfh cells expanded in the PPs of NASH mice, where they play an essential role in maintaining gut homeostasis and mitigating liver inflammation. Furthermore, intestinal Tfh cells exhibit functional impairments during NASH, leading to diminished GC responses in the intestine and a subsequent decrease in gut IgA+ B cell responses.

Mechanistically, NASH results in an aberrant upregulation of Kruppel-like factor 2 (KLF2) in Tfh cells, while its insufficiency reverses the dysfunctional phenotype of Tfh cells and mitigates NASH progression. Overall, our findings shed light on the fundamental role of intestinal Tfh cells in maintaining gut homeostasis and highlight the pivotal role played by intestinal immunity in the pathogenesis of NASH.
Intestinal Tfh cells expand in the PPs during NASH
We have previously reported that HFHC feeding for 20 weeks leads to severe NASH, including obesity, hepatic steatosis, inflammation, and fibrosis 5 (Supplementary Figure 1A). To determine how NASH influences the immune cell populations in intestinal PPs, we fed C57BL/6J mice either a normal chow diet (NCD) or a HFHC diet for up to 20 weeks. Histological analysis revealed enlarged PPs with increased cellularity in HFHC intestinal sections, compared with NCD controls (Figure 1A-B). Quantification of total CD45+ cells in PPs by flow cytometry confirmed the increased leukocyte number in HFHC mice (Figure 1C). While no differences were detected in the lamina propria (LPL), we also observed a substantial increase in cellularity in the mesenteric lymph node (mLN) and spleen of HFHC mice (Supplementary Figure 1B-D). Consistent with previous reports 25,26, however, we detected increased Th17 cells in the LPL and mLN, but not in the spleen (Supplementary Figure 1E-G). To investigate the transcriptional landscape of PPs during NASH, we performed bulk RNA sequencing (RNA-seq) of PPs and mLNs isolated from NCD and HFHC mice. Principal component analysis (PCA) revealed a substantial separation between PPs from HFHC and NCD mice, while mLNs from both groups clustered together, indicating that NASH resulted in an altered gene expression profile unique to the PPs (Figure 1D). Gene set enrichment analysis demonstrated that PPs from HFHC mice were enriched in immune transcripts, particularly in B and T cell signatures (Figure 1E). To investigate the nature of this immune response, we determined the expression of a curated list of adaptive immune cell genes and found an increased expression of B cell and T cell genes in HFHC PPs (Figure 1F). Notably, PPs from HFHC mice showed an increased expression of Tfh cell-associated genes (Bcl6, Cxcr5, Tox2, Icos), but no difference in CD8 or other CD4 T cell subsets (Figure 1F). We confirmed the increase in B cells and CD4 T cells, but no changes in CD8 T cells, in the PPs from HFHC mice by flow cytometry (Supplementary Figure 1H-J). Considering the Tfh gene signature observed in the PPs of HFHC mice, we further characterized the Tfh populations by flow cytometry, using an established gating strategy (Supplementary Figure 1K). We found that CD4 T cells from HFHC PPs show an elevated expression of CXCR5 (Figures 1G-I) and contain an increased frequency and cell number of Tfh and pre-Tfh cells (Figure 1J-L). Collectively, these data demonstrate that NASH induces an increased immune response in PPs, characterized by an altered transcriptional landscape and a bias towards increased Tfh cell differentiation, suggesting a role for these cells in modulating intestinal immune responses during NASH.
Diminished Germinal Center and IgA Responses in NASH Mice
To further examine the effect of NASH on intestinal immune homeostasis, we evaluated the germinal center (GC) IgA responses in NCD- and HFHC-fed mice. Notably, we discovered that the frequency of GL7+ CD95+ GC B cells was substantially reduced in the PP of HFHC-fed mice, compared with NCD controls (Figure 2A-B). As a result, the differentiation of IgA+ B cells in the PPs was also diminished (Figure 2C-D), suggesting that NASH induces a defective GC response and a loss of IgA-producing cells. To confirm these findings, we performed immunofluorescence staining of PPs from NCD- and HFHC-fed mice and observed smaller GC regions in the PP of NASH mice (Figure 2E). We then used histocytometry to analyze the immunofluorescence data and found a reduction in GC B cell responses in NASH mice (Figure 2E-F), in agreement with our earlier observations. Given that GCs play a crucial role in the affinity maturation of IgA, we reasoned that impaired GC B cell responses could affect the ability of IgA to bind commensal bacteria. Indeed, we detected a lower IgA coating of fecal bacteria in NASH mice (Figures 2G-I). Together, these data suggest that HFHC diet-induced NASH has profound detrimental effects on intestinal immune responses, including a reduced GC reaction and impaired IgA production and ability to bind bacteria.
Dysfunctional Intestinal Tfh Cells in HFHC Diet-Induced NASH Mice
We next sought to functionally characterize the Tfh cells in the PPs of HFHC diet-induced NASH mice. Flow cytometry analysis revealed an altered Tfh cell phenotype in PPs from NASH mice, including enhanced CD40L and CD44 expression but a substantially decreased expression of PD1 and BCL6 (Figure 3A-B). Bulk RNA-seq of FACS-sorted Tfh cells from the PP of NCD- and HFHC-fed mice revealed a marked shift in the transcriptional landscape of Tfh cells in NASH (Figure 3C). We found that genes that were highly expressed in Tfh cells from the PP of NCD mice were markedly downregulated in NASH (Figure 3C). To better understand the nature of this shift, we performed Gene Set Enrichment Analysis (GSEA) on the differentially expressed genes and discovered that genes involved in the differentiation of Tfh cells were downregulated in the PP of HFHC-fed mice, suggesting a loss of Tfh cell identity during NASH (Figure 3D). Specifically, we detected a lower expression of key Tfh cell signature genes and transcription factors that are vital for Tfh cell differentiation and functionality, such as Bcl6, S1pr2, and Pdcd1 (Figure 3E). Tfh cells from NASH mice also showed a substantial reduction in IL-4 gene expression (Figure 3E) and production in vivo, detected using IL-4 GFP mice (Supplementary Figure 2A-B). In addition, NASH Tfh cells showed no changes in the Th1 transcription factor T-bet but a decreased expression of GATA3 (Th2) and increased ROR-gt, the master regulator of the Th17 fate (Figure 3F and Supplementary Figure 2C-D).

Tfh cells from NASH mice had increased expression of the Tfh cell differentiation inhibitor KLF2 27 (Figure 3F) and its upstream regulator FOXO1 28 (Figure 3F-G). Chemokine receptors such as CCR7 and S1PR1 were also increased in Tfh cells from the PP of NASH mice (Figure 3E, H), suggesting dysfunctional migration. Furthermore, Tfh cells from NASH mice had increased glucose uptake (Supplementary Figure 2E), decreased mitochondrial mass (Supplementary Figure 2F), and lower mitochondrial membrane potential (Supplementary Figure 2G), indicative of altered metabolism.

In a properly organized germinal center (GC), Tfh cell residency within the GC is facilitated by the downregulation of KLF2, CCR7, and S1PR1 27. The increase in these molecules in Tfh cells from NASH mice led us to interrogate their localization within the GCs, an important aspect of maintaining a robust GC response and B cell maturation 29. A recent study showed that primary Tfh cells include a subset of CD90low GC-resident and CD90+ non-resident cells 29. Thus, we investigated how NASH affects the location of Tfh cells in the GC and found that, unlike Tfh cells from NCD mice, the majority of Tfh cells in the PP of NASH mice were CD90+ non-resident GC cells, while the CD90low resident cells decreased (Figure 3I-J). Indeed, histocytometry analysis of PPs showed fewer CD4 T cells within the GCs of NASH mice, compared with controls (Figure 3K-L). Thus, these data show that Tfh cells present a dysfunctional phenotype that includes a defective differentiation transcriptional program, decreased IL-4 production, and a loss of residence within the GC.

Considering the potential contribution of obesity per se to the dysfunctional Tfh phenotype in NASH mice, we explored the potential impact of genetic obesity on PP Tfh cells. To this end, we utilized Ob/Ob (Lep ob) mice, which develop hyperphagia, obesity, intestinal bacterial overgrowth, and liver steatosis despite being fed an NCD. By 25 weeks of age, the NCD-fed Ob/Ob mice had gained substantially more body weight than their WT counterparts (Supplementary Figure 3A). Despite such profound obesity, however, we observed no substantial alterations in the frequency, GC residency, or expression of PD1, BCL6, and FOXO1 in the Tfh cells of Ob/Ob mice (Supplementary Figure 3B-F). Furthermore, while there was no reduction in the GC reaction (Supplementary Figure 3G), there was a slight decrease in the frequency of IgA+ B cells (Supplementary Figure 3H) in the PP of Ob/Ob mice. These data suggest that the Tfh cell dysfunction caused by NASH may not be directly attributable to obesity but to the effects of the HFHC diet and/or subsequent NASH progression.
Microbial Unresponsiveness of Intestinal Tfh Cells in Diet-Induced NASH Mice
IgA responses, especially those mediated by high-affinity T cell-dependent IgA, play a crucial role in maintaining intestinal homeostasis, regulating the microbiota community, bolstering mucosal defense, and restraining invasive commensal species 30-32. The production of intestinal IgA is primarily attributed to the interactions between Tfh cells and B cells within the GC in PPs 14. Given that intestinal Tfh-GC responses can be induced and modulated by gut microbiota 19,20, we sought to determine whether Tfh cells in NASH remain responsive to gut microbiota. To accomplish this, we depleted the gut microbiota of NCD- and HFHC-fed mice with broad-spectrum antibiotics provided in the drinking water (Supplementary Figure 4A). The antibiotic treatment effectively depleted the gut bacteria and altered the microbiota diversity and composition in both NCD and HFHC mice (Supplementary Figure 4B-E). Notably, antibiotic treatment resulted in a substantial reduction in the frequency of Tfh cells, GC B cells, and IgA-producing cells in PPs of NCD-, but not HFHC-fed mice (Supplementary Figure 4F-G). Similarly, antibiotic treatment altered the expression of PD1 and FOXO1 in Tfh cells from NCD but not HFHC mice (Supplementary Figure 4H), suggesting that Tfh cells are unresponsive to changes in the gut microbiota during NASH. One possibility is that the lack of dependency of the NASH Tfh cells on the microbiota is related to a decreased T cell receptor (TCR) signal in the pre-Tfh subset (Supplementary Figure 4I).
Tfh Cells Maintain Intestinal Barrier and Protect against NASH
To investigate the direct role of Tfh cells in NASH progression, we used Cd4 Cre Bcl6 flox/flox mice, which specifically lack Tfh cells (Figure 4A). Both Tfh knockout (Tfh KO) and littermate wild-type (Wt) mice were fed the HFHC diet for up to 20 weeks to induce NASH.

The absence of Tfh cells led to the abrogation of GCs and IgA+ B cells in PPs (Figure 4B, Supplementary Figure 5A), along with a notable decrease in IgA+ B cells in the small intestine lamina propria (sLPL) (Supplementary Figure 5B). Given the critical role of intestinal immune cells in maintaining gut homeostasis and modulating microbiota composition 14-16,18, we performed 16S rRNA sequencing to analyze the microbiota communities of Tfh KO and Wt mice. Our analysis revealed that the microbiota communities were substantially distinct between Tfh KO and Wt mice, as they failed to cluster together by principal component analysis (PCA) (Figure 4C). Using targeted mass spectrometry, we also examined microbiota-derived short-chain fatty acids (SCFA) and found a substantial reduction of butyrate in fecal contents from the Tfh KO microbiota (Figure 4D).

Given the importance of butyrate in maintaining intestinal integrity 12, we assessed the levels of tight junction proteins in intestinal epithelial cells and found that αE-Catenin and Occludin were reduced in Tfh KO mice, compared with Wt controls (Figure 4E-F). As a result, we observed higher intestinal permeability in Tfh KO mice in a FITC-Dextran oral gavage assay (Figure 4G). In the PPs and small intestine LPL, we detected an increased accumulation of pro-inflammatory CD44+ T-bet+ CD8 T cells and a decrease of regulatory T cells (Tregs) in the mLN (Supplementary Figure 5C-G), a finding consistent with the requirement of butyrate for the differentiation of Tregs 33,34. Together, these results suggest disrupted gut homeostasis and the development of a "leaky gut" in the absence of Tfh cells.

Since disruption of the intestinal barrier is known to exacerbate liver inflammation in NASH 6,8,35, we examined the immune infiltrates in the liver of HFHC-fed Tfh KO mice.

Comprehensive profiling of the immune landscape of the liver using cytometry by time of flight (CyTOF) showed an overall increased immune cell infiltration in Tfh KO mice (Figure 4H), suggesting exacerbated inflammation. Notably, hepatic macrophages and CD8 T cells from Tfh KO mice produced increased levels of TNF-α and IFN-γ, respectively (Figure 4I). As a result of the increased inflammation, the circulating levels of the liver function enzymes alanine transaminase (ALT) and aspartate transaminase (AST) were higher in Tfh KO mice, compared with Wt controls (Figure 4J). In conclusion, our data indicate that the protective role of intestinal Tfh cells in the gut mitigates liver inflammation and disease progression in NASH.
KLF2 Insufficiency Improves the Tfh Cell Phenotype and Ameliorates NASH
Given that Tfh cells are crucial for maintaining gut and liver immune homeostasis and that they present a dysfunctional phenotype in NASH, we reasoned that restoring the function of Tfh cells in NASH may provide protection against the disease. In our RNA-seq analysis, we found an aberrant increase in KLF2 in Tfh cells from the PP of NASH mice, which, given the role of KLF2 in restricting the fate commitment of Tfh cells 27, likely impairs their function. Therefore, we hypothesized that reducing KLF2 in Tfh cells during NASH would restore their differentiation and function and might ameliorate disease progression. To decrease KLF2 expression in Tfh cells, we used Cd4 Cre Klf2 Wt/flox (KLF2 D/+) mice, which harbor a single-allele deletion of the Klf2 gene in CD4 T cells. We fed KLF2 D/+ and littermate Wt mice a HFHC diet to induce NASH and assessed the phenotype of Tfh cells in the PPs and NASH severity. Compared with Wt controls, KLF2 D/+ mice showed no alterations in the frequency of total Tfh cells, although a slight increase in pre-Tfh cells was observed (Figure 5A-B).

Importantly, Tfh cells in KLF2 D/+ mice had an improved phenotype, as evidenced by an increased expression of BCL6 and PD1 (Figure 5C-D), while FOXO1 and ICOS expression remained unchanged (Figure 5E). Importantly, the restored Tfh cell phenotype in HFHC-fed KLF2 D/+ mice correlated with an increase in IgA+ B cells (Figure 5F) and overall ameliorated NASH. Specifically, KLF2 D/+ mice had lower liver weight (Figure 5G), reduced triglycerides (Figure 5H), and a decreased NAFLD activity score (NAS) determined in H&E-stained liver sections (Figure 5I). Furthermore, KLF2 D/+ mice had lower ALT and AST levels (Figure 5J). Consistent with the improvements in NASH, KLF2 D/+ mice displayed improved glucose and insulin tolerance (Supplementary Figure 6A-B) and energy metabolism as assessed by indirect calorimetry (Supplementary Figure 6). Examination of a set of NASH signature genes revealed a marked downregulation of NASH-associated genes in KLF2 D/+ mice, compared with Wt controls (Figure 5K). Moreover, KLF2 D/+ mice had a decreased hepatic accumulation of pathogenic PD1+ CD8 T cells 36 (Figure 5L). Overall, our findings underscore a key role for KLF2 in regulating Tfh cell function and NASH pathogenesis.
DISCUSSION
The gut-liver axis involves bidirectional communication between the gut and the liver, and its dysfunction contributes to the onset and progression of NAFLD 6,8. In addition, dysbiosis of the gut microbiota can influence energy homeostasis, lipid metabolism, and fat storage, leading to worsened liver disease 7,8. Mechanistically, an increased permeability of the gut epithelium, or "leaky gut", caused by factors such as diet, pathogens, or chemicals, facilitates the translocation of microbial antigens to the liver 7,9.

Lipopolysaccharides (LPS) are the main gut-derived antigen that can induce metabolic endotoxemia and promote hepatic steatosis, inflammation, and fibrosis 7,9. Additionally, LPS directly activates innate immune responses in the liver, resulting in an inflammatory process that contributes to tissue damage 5,7,9. Thus, a better understanding of the immune mechanisms regulating the inflammatory tone in the gut-liver axis is needed for the development of effective therapeutic strategies against NASH.
Tfh cells play a pivotal role in the adaptive immune responses to pathogens 37,38. These cells are primarily found within the follicles of secondary lymphoid organs, including the lymph nodes and the spleen, where they participate in the formation of GCs that are critical for the production of high-affinity antibodies 37,38. Notably, Tfh cells have emerged as important regulators of gut homeostasis 14,16. In our study, we found that NASH induces substantial alterations in the cellularity and gene expression of PPs, including an accumulation of Tfh cells. The expansion of Tfh cells, however, was accompanied by a dysregulated Tfh phenotype, reduced IgA+ B cell differentiation, and diminished GC responses. Such dysfunction of Tfh cells during NASH is likely responsible for the abnormal GC and IgA responses, as demonstrated by our experiments with Tfh-deficient mice. Moreover, we found that NASH upregulated the Tfh expression of KLF2, a transcription factor that restricts the fate commitment of these cells. Overall, the atypical expression pattern of Tfh genes suggests a loss of Tfh cell identity in NASH, which could be a major contributing factor to the impaired GC responses and decreased IgA production. Despite our findings, the molecular mechanisms contributing to the upregulation of KLF2 in Tfh cells during NASH remain elusive. Previous studies demonstrated that ICOS signaling downregulates KLF2 expression in T cells, a critical step for Tfh cell differentiation. However, we did not detect changes in ICOS expression in Tfh cells during NASH, suggesting the involvement of additional factors that regulate KLF2 expression in NASH. Interestingly, our results showed that the dysfunction of Tfh cells is not directly attributable to obesity but is specific to the effects of the HFHC diet and/or NASH development, as suggested by the normal Tfh cell phenotype and function in Ob/Ob (Lep ob) mice, in which obesity is largely attributed to leptin deficiency-induced hyperphagia 39. Given this surprising finding, we speculate that the HFHC diet, rather than obesity per se, may be the primary factor triggering Tfh cell dysfunction. Indeed, recent research showed that dietary sugar, high in our HFHC diet, disrupts the protective role of the immune system against metabolic disease 40. Future work is needed to isolate the specific dietary components responsible for this effect and to identify the mechanisms by which they disrupt intestinal immune homeostasis.
One of the intriguing findings of our work is the unresponsiveness of Tfh cells to the gut microbiota changes induced by NASH. It is well established that the gut microbiota can initiate and fine-tune Tfh-GC interactions 32,41. However, our observations indicate that Tfh cells in NASH mice lose this regulatory control, perpetuating defective GC responses and sub-optimal IgA production even after alterations in the microbiota. Such decoupling of Tfh function from changes in the gut microbiota may contribute to NASH progression. However, the mechanisms leading to the unresponsiveness of Tfh cells to the microbiota in NASH remain to be elucidated. Our data showed a significant upregulation of KLF2 in NASH Tfh cells, which increased the expression of chemokine receptors such as CCR7 and S1PR1. Notably, CCR7 guides T cell localization to T cell zones within lymphoid tissues, while S1PR1 governs T cell trafficking and re-entry into circulation. Under homeostasis, the differentiation of Tfh cells is facilitated by a tightly controlled shift in chemokine receptor expression whereby pre-Tfh cells downregulate CCR7 and upregulate CXCR5, facilitating their migration from T cell zones to B cell follicles to assist in GC formation 38. Given the altered expression of chemokine receptors in NASH Tfh cells, it is plausible that these cells are mislocalized, which may affect their function and responsiveness to microbial antigens. Supporting this notion, our histocytometry analyses revealed an aberrant dispersion of intestinal Tfh cells within the PPs of NASH mice. Additionally, the antigen specificity of Tfh cells, determined by their TCR, may be altered in NASH, as evidenced by the loss of TCR signal in pre-Tfh cells. TCR sequencing of Tfh cells from NASH mice is needed to confirm this possibility.

In summary, our study highlights the complex interplay between intestinal Tfh cell function and NASH. The paradoxical expansion of dysfunctional Tfh cells in NASH leads to sub-optimal GC responses, reduced IgA production, and disturbed gut homeostasis. Therefore, we propose that strategies aimed at restoring Tfh cell function, such as downregulation of KLF2, could offer a viable immune therapy for the treatment of NASH.
Bulk RNA sequencing
Total RNA was extracted from livers using an RNeasy Plus Mini kit (Qiagen). Samples were sequenced on a NovaSeq 6000 using a 150 PE flow cell at the University of Minnesota Genomics Center. The SMARTer Stranded RNA Pico Mammalian V2 kit (Takara Bio) was used to create Illumina sequencing libraries. Differential gene expression analysis was performed using edgeR (Bioconductor). Gene set enrichment analysis (GSEA) was performed using clusterProfiler (Bioconductor).
Histocytometry
The histocytometry analysis was described previously 44-46. Briefly, regions of interest (ROIs) were identified, the fluorochrome intensities of each ROI were quantified using ImageJ, and the data were exported into Excel, Prism, and FlowJo software for the localization analysis.
Histology
Liver tissues fixed in 10% formalin were histologically assessed for steatosis by hematoxylin and eosin staining; analysis of the NAFLD activity score (NAS) was performed by a blinded liver pathologist.
Metabolic assessments
Insulin tolerance tests (ITT) and glucose tolerance tests (GTT) were performed as previously described 5. To determine energy metabolism, mice were placed in automated metabolic cages for 48 h. Energy expenditure (EE) was assessed via indirect calorimetry in free-moving animals housed in individual cages within an indirect open-circuit calorimeter that provides measures of O2 consumption and CO2 production (Oxymax, Columbus Instruments, Ohio). Ambulatory activity was assessed by the breaking of infrared laser beams in the x-y plane. The cages provided ad libitum access to food and water throughout the procedure. These procedures were performed by the IBP Phenotyping Core at the University of Minnesota.
Statistical Analysis
Statistical significance between means was determined with an unpaired t-test using GraphPad Prism 8.3. Spearman's rank test was used to determine correlations. Data are presented as means ± SEM. Statistical significance was set at 5% and denoted by *p < 0.05, **p < 0.01, and ***p < 0.001.
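The two tests named above map directly onto standard statistical libraries. A minimal sketch in Python with SciPy, using placeholder values rather than data from the study (the group values and variable names below are illustrative), might look as follows:

```python
import numpy as np
from scipy import stats

# Placeholder group values (not data from the study): one readout per mouse,
# e.g. a Tfh frequency in PPs for NCD vs. HFHC mice.
ncd = np.array([12.1, 10.8, 11.5, 13.0, 12.4])
hfhc = np.array([18.3, 17.1, 19.5, 16.8, 18.9])

t_stat, p_val = stats.ttest_ind(ncd, hfhc)        # unpaired t-test between means
rho, p_rho = stats.spearmanr(ncd, hfhc)           # Spearman's rank correlation

print(f"t = {t_stat:.2f}, p = {p_val:.4f}; rho = {rho:.2f}, p = {p_rho:.4f}")
```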
Kristin A. Hogquist at the University of Minnesota. Klf2 flox/flox mice on a B6 background were kindly provided by Dr. Stephen C. Jameson at the University of Minnesota. At 6 weeks of age, mice received a normal chow diet (NCD) or a high-fat high-carbohydrate (HFHC) diet.
Figure 1. Enlarged PPs and expanded intestinal Tfh cells in HFHC diet-induced NASH mice.
Figure 2. Impaired GC responses and decreased IgA+ B cells in HFHC mice.
Figure 4. The role of Tfh cells in gut homeostasis and NASH progression.
Machine learning based personalized promotion strategy of piglets weaned per sow per year in large-scale pig farms
Background The purpose of this study was to analyze the relationship between different production factors and piglets weaned per sow per year (PSY) in 291 large-scale pig farms and to analyze the impact of changes in different factors on PSY. We chose nine different machine learning algorithm models to calculate the influence of each variable on every farm according to its current situation, so that the improvement could be personalized to the specific circumstances of each farm, and proposed a production guidance plan for PSY improvement for every farm. According to a comparison of the mean absolute error (MAE), 95% confidence interval (CI) and R2, the optimal model was used to calculate the influence of 17 production factors of each pig farm on PSY improvement, finding out the bottleneck corresponding to each pig farm. The level of PSY was further analyzed when the bottleneck factor of each pig farm changed by 0.5 standard deviation (SD). Results The 17 production factors were non-linearly related to PSY. The top five production factors with the highest correlation with PSY were the number of weaned piglets per litter (WPL) (0.6694), mating rate within 7 days after weaning (MR7DW) (0.6606), number of piglets born alive per litter (PBAL) (0.6517), total number of piglets per litter (TPL) (0.5706) and non-productive days (NPD) (−0.5308). Among the nine algorithm models, the gradient boosting regressor model had the highest R2 and the smallest MAE and 95% CI, and was applied for the personalized analysis. When one of the 17 production factors of the 291 large-scale pig farms changed by 0.5 SD, 101 pig farms (34.7%) could increase PSY by 1.41 on average (compared to the original value) by adding production days, 60 pig farms (20.6%) could increase PSY by 1.14 on average by improving WPL, and 45 pig farms (15.5%) could increase PSY by 1.63 by lifting MR7DW. Conclusions The main production factors related to PSY included WPL, MR7DW, PBAL, TPL and NPD. The gradient boosting regressor model was the optimal method to individually analyze production factors that are non-linearly related to PSY. Supplementary Information The online version contains supplementary material available at 10.1186/s40813-022-00280-z.
Background
Piglets weaned per sow per year (PSY) is a key factor to evaluate the productivity performance of pig farms, which has been widely used in the pig industry for more than 30 years [1]. By using information management systems in large-scale pig farms (such as the Huiyangzhu system used by Shandong New Hope Liuhe Group Co., Ltd.), managers can obtain PSY and relevant production factors in time to make better management decisions and goals [2]. To improve PSY, a series of measures need to be taken to improve reproductive performance of sows, including emphasizing the management of replacement gilts to improve their lifetime production efficiency, balancing feed nutrition during lactation to increase weaning weight, strengthening piglet care within three days of farrowing to reduce pre-weaning mortality and paying attention to personnel skill training to implement strategies correctly [3][4][5].
The genetic progress of the pig industry has significantly improved PSY and related performance [6]. Some countries have PSY approaching or even exceeding 30 [7], greatly boosting the global protein supply. However, there is much room for improvement in this regard in China. In production practice, if one can choose the factor that has the greatest impact on PSY among the related factors, improving this factor will achieve twice the result with half the effort [8]. Algorithm models (including linear and nonlinear models) may provide an effective technical method to solve this problem. In the field of pig breeding, algorithm models have gradually been applied to pig feed formula optimization, breeding analysis, disease transmission dynamics and major infectious disease prediction [9-12]. In terms of management, they can also help managers analyze market conditions, evaluate sales opportunities and formulate sales plans [13,14]. Linear models have often been used in the analysis of production data, including PSY. For example, Munsterhjelm et al. [15] used multiple linear regression to study the relationship between farm welfare and sow reproductive performance. Sanglard et al. [16] estimated the heritability of, and genetic correlation between, the sample-to-positive (S/P) ratio and the reproductive performance of breeding pigs after vaccination with a porcine reproductive and respiratory syndrome (PRRS) vaccine using the BayesC0 linear regression model. A BayesB linear regression model was used to analyze the bivariate genome-wide association between PRRS antibody response and reproductive traits [17]. However, when the data are not completely linear, the use of a linear model has limitations, resulting in poor performance and reduced prediction accuracy. Although the contribution of each independent variable can be estimated from both linear and nonlinear models, the target factor of each farm can be accurately calculated by nonlinear models, instead of the overall trend estimation given by linear models. Therefore, nonlinear models have advantages in processing production data.
The purpose of this study was to analyze the key production factors of 291 large-scale pig farms and their relationship with PSY through machine learning, analyze the correlations between factors, and find the production factors highly correlated with PSY. Through the optimal algorithm model obtained by cross-validation in machine learning, the expected growth of PSY on each farm was calculated when each production indicator changed by 0.5 standard deviation (SD), and a personalized PSY promotion strategy was given. To our knowledge, no previous study has evaluated PSY using a gradient boosting regressor model.
Farm description
The study did not require approval from the Ethics Committee on Animal Use because no animals were handled. This study involved 648,826 breeding sows on 291 pig breeding farms from 79 large-scale breeding subsidiaries. They fulfilled the following inclusion criteria: (1) having a population of 750 or more sows, and (2) using the internal data management system of the company and having complete data records. The automatic feeding system, mechanical ventilation system, formula of standardized feed and mating schedule of these farms were described in a previous study [8]. The farms were from 20 provinces, located in seven regions.
Data collection and manipulation
The collection and authorization of the production data in this study were described before [8]. This study analyzed 291 large-scale (750-5,800 sows) pig farms for the whole year of 2021. All variables were recorded at the herd level and were chosen with reference to previous studies on pig farms, as they reflect herd performance and are important to farm economics. All 17 production factors in the original data system were used to analyze the relationship with PSY and for the subsequent analysis.
Definitions
Grandparent (GP) farm means a pig farm where only purebred sows are kept; these were from a Landrace pure line. Parent stock (PS) farm means a pig farm where crossbred sows (F1 offspring) are kept; these were from a cross between the Landrace pure line and a Large White pure line. All crossbred sows were produced from a Large White female and a Landrace male.
Due to producer confidentiality, the absolute value of PSY was normalized from 0 to 1 (100%). For the non-numeric variable, farm type, a binary formal definition (GP or PS) was used.
Statistical analysis
All analyses were conducted with the Python programming language in PyCharm 2021.3.2 (Community Edition). The farm was considered the experimental unit. In order to reduce noise in the raw data, abnormal data points were deleted. Each record was the production data of a farm for the whole year of 2021. Due to the impact of production operations or epidemics, the farrowing rate (FR) of some farms was 0, indicating that the farm did not deliver piglets in 2021; a number of piglets born alive per litter (PBAL) of 0 indicates that the farm cannot provide PBAL and that PSY cannot be calculated. Therefore, farms with 0 FR or 0 PBAL were excluded as abnormal data. Spearman's rank correlation analysis between the 18 variables (including PSY) was performed to construct the correlation coefficient matrix. The correlation between each variable and PSY and the collinearity between the variables were analyzed through the correlation coefficient matrix.
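As a sketch of this filtering and correlation step, assuming the records are available as a CSV with one row per farm and columns named after the factors (the file name and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical file and column names; one row per farm, 17 factors + PSY.
df = pd.read_csv("farm_records_2021.csv")
df = df[(df["FR"] > 0) & (df["PBAL"] > 0)]        # drop abnormal farms, as in the text
corr = df.corr(method="spearman")                 # 18 x 18 correlation coefficient matrix
print(corr["PSY"].sort_values(ascending=False))   # factors ranked by correlation with PSY
```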
Nine machine learning models were applied to learn and predict the PSY of each pig farm with the sklearn algorithm module: Gradient Boosting Regressor, HistGradient Boosting Regressor, Extra Trees Regressor, Random Forest Regressor, Bayesian Ridge, Linear Regression, Bagging Regressor, AdaBoost Regressor, and ElasticNet. Leave-One-Out Cross-Validation (LOO-CV) was applied for machine learning and prediction. In each cycle, 290 farms were taken for training and one farm was used for prediction; a total of 291 models were established over 291 cycles. Finally, the prediction results were aggregated to complete the final effect evaluation.
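A minimal reconstruction of this LOO-CV comparison with scikit-learn, reusing the hypothetical df from the previous sketch (all columns assumed numeric) and default hyperparameters, since the study's exact settings are not given, could look like:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.ensemble import (GradientBoostingRegressor, HistGradientBoostingRegressor,
                              ExtraTreesRegressor, RandomForestRegressor,
                              BaggingRegressor, AdaBoostRegressor)
from sklearn.linear_model import BayesianRidge, LinearRegression, ElasticNet
from sklearn.metrics import mean_absolute_error, r2_score

# X: (n_farms, 17) factor matrix; y: (n_farms,) PSY values
X = df.drop(columns=["PSY"]).to_numpy()
y = df["PSY"].to_numpy()

models = {
    "GradientBoosting": GradientBoostingRegressor(),
    "HistGradientBoosting": HistGradientBoostingRegressor(),   # sklearn >= 1.0
    "ExtraTrees": ExtraTreesRegressor(),
    "RandomForest": RandomForestRegressor(),
    "BayesianRidge": BayesianRidge(),
    "Linear": LinearRegression(),
    "Bagging": BaggingRegressor(),
    "AdaBoost": AdaBoostRegressor(),
    "ElasticNet": ElasticNet(),
}

preds_by_model = {}
for name, model in models.items():
    preds = np.empty_like(y, dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(X):   # 291 train/predict cycles
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    preds_by_model[name] = preds
    print(f"{name}: MAE={mean_absolute_error(y, preds):.4f}, "
          f"R2={r2_score(y, preds):.4f}")
```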
Machine learning model predictions were also performed to individually find the PSY improvement bottleneck of each farm. Specifically, for each pig farm we modified every factor separately, increasing or decreasing it by 0.5 standard deviation of that factor, and then re-predicted on the updated data to compare which modification maximized the improvement of PSY. The formal definition of the perturbation was

$$x'_{ij} = x_{ij} + t\,\delta\,\mathrm{std}_j,\qquad i = 1,\dots,N,\; j = 1,\dots,M,$$

where N is the total number of pig farms; M is the total number of variables; $x_{ij}$ is the value of variable j on farm i; $X_i$ is the vector consisting of all variables of farm i; $x'_{ij}$ is the result of changing $x_{ij}$ by a small amount; $\mathrm{std}_j$ is the standard deviation of variable j; $t \in \{-1, +1\}$ controls the direction of change (decrease or increase); and δ = 0.5 is the change amplitude parameter.
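A sketch of this per-farm bottleneck search, assuming a model already fitted on all farms (the paper's exact refitting protocol is not specified) and reusing X and y from the previous sketch:

```python
from sklearn.ensemble import GradientBoostingRegressor

def find_bottleneck(model, X, delta=0.5):
    """For each farm i, apply x'_ij = x_ij + t*delta*std_j for every factor j
    and t in {-1, +1}, re-predict PSY, and keep the single change giving the
    largest predicted lift over the farm's baseline prediction."""
    stds = X.std(axis=0)
    results = []
    for i in range(X.shape[0]):
        base = model.predict(X[i:i + 1])[0]
        best_j, best_t, best_lift = None, 0, 0.0
        for j in range(X.shape[1]):
            for t in (-1, 1):                       # direction of change
                x_mod = X[i].copy()
                x_mod[j] += t * delta * stds[j]
                lift = model.predict(x_mod[None, :])[0] - base
                if lift > best_lift:
                    best_j, best_t, best_lift = j, t, lift
        results.append((best_j, best_t, best_lift))
    return results

bottlenecks = find_bottleneck(GradientBoostingRegressor().fit(X, y), X)
```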
Results

Figure 1 shows that the 17 production factors were nonlinearly distributed with respect to PSY: the total number of piglets per litter (TPL), PBAL, the number of weaned piglets per litter (WPL), FR and the mating rate within 7 days after weaning (MR7DW) were positively correlated with PSY. In contrast, the stillbirth rate, return-service rate, weaning-to-breeding interval and NPD were negatively correlated with PSY, and the correlations of the other six factors were not clear. Many of these production factors limited the upper bound of PSY (Fig. 1D-H, J-M), indicating that they could be the bottleneck of PSY improvement on a given farm.
The correlation coefficient matrix of the 18 parameters in the 291 large-scale pig farms is shown in Fig. 2. The pairs of production factors with a strong correlation (≥ 0.8000) were TPL versus PBAL (0.9440), followed by design stock versus actual stock (0.8516). The top five production factors with the highest correlation with PSY were WPL (0.6694), MR7DW (0.6606), PBAL (0.6517), TPL (0.5706) and NPD (−0.5308).
Comparing the mean absolute error (MAE), 95% confidence interval (CI), R2, mean square error (MSE) and mean absolute percentage error (MAPE) of the nine algorithm models, the gradient boosting regressor model had the smallest MAE, the narrowest 95% CI and the highest R2, while its MSE and MAPE were relatively low (Table 1). The residuals of the gradient boosting regressor are normally distributed (P value > 0.05 by the D'Agostino-Pearson test) and homoscedastic (P value > 0.05 by the Breusch-Pagan test). Figure 3 shows that the predicted PSY calculated by the gradient boosting regressor model agreed closely with the actual PSY: 95.19% (277/291 pig farms) of the predicted PSY values fell within the 95% CI. The MAE value was 1.6047, which indicates that the average difference between the results predicted using the model and the actual PSY was 1.6047.
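These metrics and residual diagnostics can be reproduced with scikit-learn, SciPy and statsmodels; the sketch below assumes preds_by_model holds the LOO predictions from the earlier sketch:

```python
import statsmodels.api as sm
from scipy import stats
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             mean_absolute_percentage_error, r2_score)
from statsmodels.stats.diagnostic import het_breuschpagan

preds = preds_by_model["GradientBoosting"]           # LOO predictions, best model
resid = y - preds

print("MAE :", mean_absolute_error(y, preds))
print("MSE :", mean_squared_error(y, preds))
print("MAPE:", mean_absolute_percentage_error(y, preds))
print("R2  :", r2_score(y, preds))
print("D'Agostino-Pearson p:", stats.normaltest(resid).pvalue)               # normality
print("Breusch-Pagan p     :", het_breuschpagan(resid, sm.add_constant(preds))[1])
```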
The effect of changing each of the 17 production factors of the 291 large-scale pig farms by 0.5 SD in absolute value was calculated with the gradient boosting regressor model, and the bottleneck factor and corresponding PSY promotion value of each pig farm were obtained (see Additional file 1). Apart from the return-service rate and farm type, a 0.5 SD change in the 15 other production factors could increase PSY to various degrees (0.15-3.31 PSY) (Table 2). In one third of the pig farms, an absolute increase of 0.5 SD in the number of production days could add 0.29-2.92 PSY. With an increase in WPL, one fifth of the pig farms could gain an average of 1.14 PSY, and one sixth of the pig farms could gain an average of 1.63 PSY with an increase in MR7DW. These three factors accounted for 70.8% of all pig farms. Among all factors, the weaning-to-breeding interval had the largest average improvement (1.84 PSY).
Discussion
The production data of this study came from 291 large-scale pig farms in 79 subsidiaries of an agri-food group. Although the design of the pig farms, the feed quality and the standard operating procedures (SOPs) were consistent, there were differences in scale, breeds, hygiene and personnel implementation, which may affect the performance of piglets and sows [18][19][20]. To reduce the influence of external factors as much as possible while preserving the amount of data, the following measures were taken: (1) we selected large-scale farms with more than 750 breeding sows; Koketsu and Lida [5] showed that large-scale pig farms have more advanced facilities, more human resources and a higher level of genetic improvement than small ones. (2) The statistical factors were expressed as the average value of all sows per pig farm within one year (2021), avoiding the interference of seasonal effects [20]. To avoid respondent bias, this study used only objective production data [21]. Although there may be some errors in data input and parity was not distinguished (69.4% of the pig farms were put into operation in 2020 or later, so parity was generally low), the data trends were still useful for reference, and our conclusions were in line with the calculation formula of PSY.
With the recording and storage of production "big data", scale, standardisation and modernisation of pig farms will be the future development direction of the breeding industry. In production management software, generally only preliminary descriptive analysis is carried out, and most producers use these data only to generate basic production performance and working lists [22]. Deeper mining of the data and data-analysis-based pig herd management can help producers and veterinarians in large-scale pig farms maximize the potential productivity of pigs and improve economic benefits [5].
Because the pig breeding cycle is relatively long and the production chain is complex, production is easily affected by factors such as environment, nutrition and disease. Greater computing power and more sophisticated mathematical methods are therefore required to meet the needs of epidemiological investigation, disease prediction, production data analysis and related tasks. Sørensen et al. [23] used a Rimpuf simulation model to simulate the airborne transmission of foot-and-mouth disease, and another study [9] found, through random mathematical modeling of piglet herds, that the reintroduction, persistence and extinction of PRRSV played a key role in the within-herd transmission of PRRSV. The nine algorithm models used in this study have different characteristics. Linear Regression is a simple and widely used model for calculating and predicting overall trends but cannot produce individualized results for each farm. The other eight models are nonlinear and, compared with linear regression, can better learn the nonlinear relationships between the factors and PSY and the combined effect of the factors. The Bayesian ridge regression model combines the advantages of Bayesian linear regression and ridge regression: the regularization parameters can be learned automatically, avoiding overfitting or underfitting. It can be applied not only to well-behaved data but also to ill-conditioned or abnormal data, it can mitigate the overfitting problem of maximum likelihood estimation, and the utilization rate of the data samples is 100%; using only the training samples, it can effectively and accurately determine the complexity of the model, and it has been used to predict litter size [25]. The Bagging Regressor mitigates overfitting through ensembling and is suitable for small data sets. Elastic Net addresses overfitting and underfitting by combining L1 and L2 regularization (a linear combination of Ridge and Lasso).
Tree models (including the Random Forest Regressor, Ada Boost Regressor, Extra Trees Regressor, Gradient Boosting Regressor and Hist Gradient Boosting Regressor) can learn the combinations of key factors that affect PSY and estimate a farm's PSY from the fitted PSY of other, similar samples. Random forest is an optimization based on bagging: the final result is merged and output from multiple trained tree models, and it uses bootstrap aggregation and randomization of predictors to achieve high prediction accuracy and capture nonlinear dependencies [26,27]. The Extra Trees Regressor differs from traditional decision trees in the way the trees are constructed: when looking for the best split to divide the samples at a node into two groups, a random split is drawn for each of the randomly selected features (max_features), and the best split among them is chosen.
Boosting can further perform secondary learning on prediction errors and improve the accuracy of the prediction results. The Ada Boost Regressor has the ability of adaptive enhancement: samples that were wrongly handled by the previous base learner are given more weight, and all the weighted samples are used to train the next base learner; at the same time, a new weak learner is added in each round until a predetermined, sufficiently small error rate or a pre-specified maximum number of iterations is reached. However, the error of the Ada Boost regressor model was quite high. This may be because the model requires a larger sample size, and the 291 training samples were too few to give full play to its advantages; a larger sample size would be needed to improve its accuracy. The production indicators of large-scale pig farms will not reach their maximum values, but there are corresponding production standards (see Additional file 2). Even when the depth of each regression tree is very small, the gradient boosting regressor can deliver accuracy as high as deeper regression trees; usually, the maximum depth of each regression tree is set to a small value to prevent overfitting [28]. Gradient boosting of regression trees produces a competitive, highly robust and interpretable procedure for regression and classification, which is especially suitable for mining dirty data. The Gradient Boosting Regressor can improve the fitting effect by training multiple tree models that learn the residuals, and, as a nonlinear model, it can produce personalized calculation results. The Hist Gradient Boosting Regressor uses histogram statistics to bin feature values, which also allows it to process discrete features directly. While these nine models are constantly evolving, that does not mean the latest model is the best; the optimal choice depends on many factors, such as the size of the dataset, the degree of dispersion and the state of the data. In this study, due to the high correlation between the parameters (production factors and PSY), the nonlinear tree-based algorithms were more suitable. Furthermore, given the upper bound of the data, the gradient boosting regressor model can reasonably deal with the ceiling effect (that is, an already high sensitivity will not be increased for exposure above the average impact, just as sensitivity will not be reduced for exposure below the average effect). Therefore, the gradient boosting regressor model obtained the smallest MAE, the highest R² and the narrowest 95% CI.
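A small sketch of the depth argument made above, using scikit-learn's GradientBoostingRegressor with purely illustrative hyperparameter values (training data as in the earlier sketch is assumed):

```python
from sklearn.ensemble import GradientBoostingRegressor

# Shallow trees act as regularization in gradient boosting; values are illustrative.
gbr = GradientBoostingRegressor(
    n_estimators=500,    # many weak learners, each fitted to the previous residuals
    max_depth=2,         # small tree depth to prevent overfitting
    learning_rate=0.05,  # shrink each tree's contribution
    subsample=0.8,       # stochastic boosting adds further regularization
    random_state=0,
)
gbr.fit(X_train, y_train)            # X_train, y_train assumed from the earlier sketch
print(gbr.score(X_test, y_test))     # R^2 on held-out farms
```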
Our study concluded that WPL, MR7DW, PBAL, TPL and NPD were the production indicators most strongly affecting PSY, as previously reported [19]. Personalized analysis of the 291 pig farms with the gradient boosting regressor model found that increasing the number of production days improved PSY in one third of the pig farms. This may be because the longer a farm has been in production, the more continuous and stable the production and the more reasonable the parity structure of the sow herds. 92.1% of the investigated pig farms were newly built after the outbreak of the African swine fever epidemic in 2018, and 70% were put into production after 2020; on the one hand, production was affected by the epidemic, and on the other hand, unstable and insufficient personnel also had a certain impact on production performance. WPL and MR7DW had indirect effects on PSY because they directly affected and increased NPDs, thus reducing the number of piglets per sow per year [21] and PSY [29]. Sows in normal estrus after weaning were more likely to mate within 4-6 days after weaning in the subsequent parity, and sows mated within 4-6 days after weaning had higher reproductive performance and longer lifetime productivity [30]. Therefore, the higher the MR7DW, the higher the PSY and even the lifetime reproductive performance of the sow.
Conclusions
The 17 production factors had non-linear relationships with PSY. The main production factors related to PSY were WPL, MR7DW, PBAL, TPL and NPD. Comparing MAE, 95% CI and R² among the nine algorithm models, the gradient boosting regressor was the optimal model for analyzing production factors non-linearly related to PSY and can deliver personalized PSY improvement for specific large-scale pig farms. Our practical approach has strong application prospects: it looks for the optimal solution for every farm in view of its own particular conditions, which leads to working individually on the most relevant factors in every case. About 70% of the 291 pig farms can improve PSY to various degrees by increasing production days, WPL and MR7DW.
Strong association of lumbar disk herniation with diabetes mellitus: a 12-year nationwide retrospective cohort study
Background Despite reports on the association between diabetes mellitus (DM) and lumbar disk herniation (LDH), large-scale, nationwide studies exploring this relationship are lacking. We aimed to examine the profiles of DM in individuals with LDH and explore the potential mechanisms underlying the development of these disorders. Methods This retrospective, population-based study was conducted between 2008 and 2019 using data from the National Health Insurance (NHI) research database in Taiwan. The primary outcome was the date of initial LDH diagnosis, death, withdrawal from the NHI program, or end of the study period. Results In total, 2,662,930 individuals with and 16,922,546 individuals without DM were included in this study; 719,068 matched pairs were established following propensity score matching (1:1 ratio) for sex, age, comorbidities, smoking, alcohol consumption, antihyperglycemic medications, and index year. The adjusted risk for developing LDH was 2.33-fold (95% confidence interval: 2.29−2.37; P<0.001), age-stratified analysis revealed a significantly greater risk of LDH in every age group, and both males and females were approximately twice as likely to develop LDH in the DM compared with non-DM cohort. Individuals with DM and comorbidities had a significantly higher risk of developing LDH than those without, and the serial models yielded consistent results. Treatment with metformin, sulfonylureas, meglitinides, thiazolidinediones, dipeptidyl peptidase-4 inhibitors, or alpha-glucosidase inhibitors was associated with a more than 4-fold increased risk of LDH in the DM cohort. DM was strongly associated with the long-term development of LDH; over the 12-year follow-up period, the cumulative risk of LDH was significantly higher in patients with than without DM (log-rank P<0.001). Conclusion DM is associated with an increased risk of LDH, and advanced DM may indicate a higher risk of LDH.
Introduction
Diabetes mellitus (DM) is a chronic disease characterized by elevated blood glucose levels and is associated with various comorbidities. Types 1 and 2 are the two primary forms of DM, with type 2 representing approximately 90% of DM cases. Type 1 DM, also known as autoimmune DM, is a chronic disease characterized by insulin deficiency and hyperglycemia caused by the elimination of pancreatic β-cells (1). DM can affect various organ systems, resulting in severe complications over time. Individuals with type 2 DM are at risk of both microvascular and macrovascular complications, including retinopathy, nephropathy, neuropathy, and cardiovascular comorbidities. Insulin resistance and impaired insulin secretion are the primary defects of type 2 DM (2), and several antihyperglycemic medications (AHMs) with various mechanisms of action for reducing blood sugar have been developed. Commonly used oral AHMs include metformin (biguanide class), sulfonylureas (SUs), meglitinides, thiazolidinediones (TZDs), alpha-glucosidase inhibitors (AGis), dipeptidyl peptidase-4 inhibitors (DPP4is), and sodium-glucose cotransporter-2 inhibitors (SGLT2is). Injectable AHMs include GLP-1 receptor agonists (GLP1RAs) and insulin.
Lumbar disk herniation (LDH) is a common cause of lower back and unilateral leg pain that commonly occurs during the fourth and fifth decades of life, affecting a significant portion of the population, with a lifetime prevalence of 10%. The occurrence of Modic changes in the lumbar region exhibited a significant increase in both the 40s and 60s (3). Similarly, the prevalence of severe intervertebral disc degeneration in the lumbar region demonstrated a significant increase in individuals in their 20s, 30s, 50s, and 70s (3). Approximately 5-20 cases of LDH per 1,000 adults occur annually, with around 95% of herniations occurring at L4-L5 or L5-S1 (4). Degeneration of intervertebral disks is a leading cause of back pain; disk degeneration, disk herniation, and radicular pain result from an imbalance between catabolic and anabolic responses (5), and disk degeneration is typically associated with herniations. Male sex, taller height, intensive work, obesity, and smoking were reported to predict LDH recurrence (6, 7), and although the relationship between DM and lumbar disk degeneration has been the subject of research, the findings remain inconsistent. Some studies have reported cases wherein DM is a risk factor in patients with multiple disk herniation. Notably, patients who underwent surgery for lumbar disk disease had a significantly higher incidence of DM than those who underwent surgery for other reasons (8). Park et al. (9) revealed that type 2 DM is significantly associated with lumbar spine disorders and frequent spinal procedures, while another study revealed a positive relationship between DM and lumbar disk diseases, including LDH (10). Additionally, a longer duration and poor control of hyperglycemia were reported to aggravate disk degeneration (11). Based on magnetic resonance imaging findings, another study found no conclusive evidence suggesting that insulin-dependent DM has a significant impact on bone density or disk degeneration (12). Therefore, whether DM is a risk factor for lumbar disk disease remains to be clarified.
Large-scale cohort studies of this topic are lacking; therefore, we aimed to delineate the association between DM and LDH by conducting a nationwide study to determine whether any difference in the risk of LDH exists between individuals with and without DM.
Study population
The database does not include any personal, institutional, or other data links between two or more databases. Using the International Classification of Diseases, Ninth and Tenth Revisions, Clinical Modifications (ICD-9-CM and ICD-10-CM) codes, inpatient and outpatient diagnoses were determined. This study was conducted in accordance with the principles of the Declaration of Helsinki and approved by the Institutional Review Board of China Medical University Hospital (approval number CMUH110-REC3-133(CR-1)). A waiver of informed consent was granted by the Institutional Review Board owing to the use of deidentified data in the present study. The NHIRD was accessed on May 31, 2023.
Main outcome and covariates
The main outcome of this study was the development of LDH. We censored patients on the date of the respective outcome, death, or withdrawal from the NHIRD, or at the end of follow-up on December 31, 2019, whichever came first. During the follow-up period, the incidence rates of LDH were compared between the case and control groups, with LDH defined by ICD-9-CM codes (722.10, 722.11) and ICD-10-CM codes (M51.25, M51.26, and M51.27), and DM defined by ICD-9-CM (250) and ICD-10-CM (E08-E13) codes.
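As a hedged illustration of how such code-based outcome definitions are typically operationalized in practice (the table layout and column names below are assumptions for the sketch, not the NHIRD schema):

```python
# `claims` is an assumed long-format table of diagnosis records with columns
# `person_id`, `icd_code`, `visit_date`; the code lists are taken from the text above.
LDH_CODES = {"722.10", "722.11", "M51.25", "M51.26", "M51.27"}     # ICD-9/10-CM for LDH
DM_PREFIXES = ("250", "E08", "E09", "E10", "E11", "E12", "E13")    # ICD-9: 250; ICD-10: E08-E13

codes = claims["icd_code"].astype(str)
claims["is_ldh"] = codes.isin(LDH_CODES)
claims["is_dm"] = codes.apply(lambda c: c.startswith(DM_PREFIXES))

# the first LDH diagnosis per person defines the event date used for censoring
first_ldh = (claims.loc[claims["is_ldh"]]
                   .groupby("person_id")["visit_date"]
                   .min()
                   .rename("ldh_date"))
```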
Statistical analysis
We used propensity score matching to reduce selection bias and improve the comparability of variables (sex, age, income, comorbidities, AHMs, smoking, alcohol consumption, and index year) between the DM and non-DM groups. The closest propensity score was computed, and matched pairs were created using the nearest-neighbor method, with a standardized mean difference of <0.1 indicating a negligible difference between the cohorts. We used the Cox proportional hazards model to compare outcomes between the two groups, with crude and multivariate-adjusted hazard ratios (HRs) adjusted for sex, age, comorbidities, AHMs, and index year. Patients were censored if they developed LDH, died, or reached the end of the follow-up period on December 31, 2019, whichever occurred first. We performed Kaplan-Meier analysis and log-rank tests to compare the cumulative incidence of LDH between the DM and non-DM groups. Statistical analyses were performed using SAS (version 9.5; SAS Institute, Cary, NC, USA), and a two-tailed P-value <0.05 was considered statistically significant.
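For readers who want a concrete picture of this workflow, a minimal Python sketch using the lifelines package is shown below; the study itself used SAS, the matching step is omitted for brevity, and the DataFrame `cohort` and its (numeric) column names are assumptions.

```python
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# `cohort` is assumed to hold follow-up time in years ("time"), an LDH event flag
# ("event"), a DM indicator ("dm"), and numeric covariates used for adjustment.
cph = CoxPHFitter()
cph.fit(cohort[["time", "event", "dm", "age", "sex", "comorbidity_count"]],
        duration_col="time", event_col="event")
cph.print_summary()          # exp(coef) for "dm" corresponds to the adjusted HR

# Kaplan-Meier cumulative incidence and log-rank test between DM and non-DM groups
dm, ctrl = cohort[cohort["dm"] == 1], cohort[cohort["dm"] == 0]
km = KaplanMeierFitter()
km.fit(dm["time"], dm["event"], label="DM").plot_cumulative_density()
km.fit(ctrl["time"], ctrl["event"], label="non-DM").plot_cumulative_density()
print(logrank_test(dm["time"], ctrl["time"], dm["event"], ctrl["event"]).p_value)
```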
Patient characteristics
Among the data obtained between January 1, 2008, and December 31, 2019, we identified 31,488,321 individuals from the database. After excluding ineligible patients, we included 2,662,930 and 16,992,546 individuals in the DM and non-DM cohorts, respectively. Figure 1 presents a flowchart of the study. We performed 1:1 propensity score matching based on the variables mentioned in the Methods section, resulting in 719,068 matched pairs of patients with and without DM. In the matched cohorts, the mean age of the DM cohort was 59.59 ± 15.32 years, and 50.27% were female; the mean follow-up duration was 5.7 ± 3.48 years. Table 1 presents the baseline demographics of the included participants; the two cohorts showed similar baseline characteristics.
Multivariate analyses
Figure 2 shows a forest plot of the risk factors for LDH in individuals with DM; Supplementary Table 2 summarizes these risk factors. In the multivariable Cox regression analysis, 20,729 (2.88%) patients with LDH did not have a previous diagnosis of DM, and 45,243 (6.29%) patients with LDH were diagnosed with DM before the occurrence of LDH (incidence rate: 4.6 vs. 11.0 per 1,000 person-years). The crude HR (cHR) of DM was 2.38 (95% confidence interval [CI], 2.34−2.42, P<0.001) in patients with LDH. Individuals with DM had a higher risk of developing LDH than those without DM after adjusting for sex, age, comorbidities and AHMs (adjusted HR [aHR], 2.33; 95% CI, 2.29-2.37; P<0.001). The risk of LDH was found to increase in individuals aged 40-59 years and 60-79 years when compared with individuals aged 20-39 years, with aHRs of 1.33 and 1.39, respectively. However, this risk decreased in individuals aged >80 years. Males had a significantly lower risk of LDH (aHR, 0.87; 95% CI, 0.86-0.88; P<0.001) than females, and income level was not associated with LDH. Patients with comorbidities (hypertension, dyslipidemia, CLD, CKD, obesity, smoking, and alcohol consumption) had a significantly higher risk of LDH, whereas those with cancer had a significantly lower risk of LDH. Notably, patients with AS had a significantly elevated risk of LDH (aHR, 1.56; 95% CI, 1.46-1.66; P<0.001). Participants using 1 or >2 AHMs had a prominent risk of LDH (aHR, 1.27 and 1.32, respectively). Figure 3 presents a forest plot of the risk factors for LDH in individuals with and without DM; Supplementary Table 3 summarizes these findings. Individuals with DM had a significantly higher risk of developing LDH, regardless of sex, age, income level, or the coexistence of any comorbidity. Patients using AHMs have a more prominent risk of LDH than those who do not; notably, individuals receiving GLP1RAs had a 10.46-fold higher risk of developing LDH than those who did not (95% CI, 1.01−108.29; P=0.0489). The use of metformin, SUs, meglitinides, TZDs, DPP4is, or AGis was associated with a more than 4-fold increased risk of LDH in patients with than without DM, and the number of AHMs used was positively associated with the risk of LDH development.
Stratified analyses
To investigate the effect of covariates, four models were used to determine the risk of LDH in patients both with and without DM.Table 2 presents the HRs and 95% CIs for the two cohorts, as well as each model.In Model 1, the cHR was examined.In models 2-4, the aHRs were obtained based on adjustments to different variables.
Duration analysis
Table 3 presents the risk of LDH in both cohorts according to the duration of follow-up. For follow-up periods <4 years, the aHR of LDH development in the DM cohort was 2.6 (95% CI, 2.54−2.66; P<0.001). After >10 years of follow-up, the risk of LDH in the DM cohort remained significantly higher than in the non-DM cohort (aHR, 1.51; 95% CI, 1.37−1.68; P<0.001).
Figure 1
Flow chart for patients with diabetes mellitus and comparison cohort.
Cumulative incidence of LDH
Figure 4 illustrates the Kaplan-Meier cumulative incidence of LDH, which was significantly higher in the DM than non-DM cohort (log-rank P<0.001).
Discussion
In recent years, there has been a notable increase in the occurrence of both type 1 and type 2 DM, suggesting that a large proportion of the population faces challenges and complications associated with this chronic condition. A study conducted in this context revealed that individuals with DM displayed a higher occurrence of LDH. These results suggest that inadequate long-term management of DM may contribute to the development of LDH and potentially increase the chances of requiring surgical intervention.
Degenerative disk disease poses a significant healthcare issue, leading to persistent and often intense back pain that has a detrimental impact on the patient's wellbeing and contributes to rising healthcare expenses. Understanding the risk factors associated with lumbar disk degeneration is crucial to implementing strategies that prevent or slow disease development and progression. Recent studies indicate a higher vulnerability to intervertebral disk disease in females compared with males; still, the specific impact of DM on intervertebral disk degeneration based on differences in sex remains unclear (13). Our study reported consistent findings that males bear a lower risk of LDH than females (aHR, 0.87; 95% CI, 0.86−0.88). Several clinical studies have demonstrated that the incidence of intervertebral disk disease is higher in individuals with obesity and DM (14); notably, growing evidence indicates a correlation between a high body mass index (BMI), obesity, or overweight, and an increased risk of intervertebral disk degeneration (15). Özcan-Ekşi et al. (16) found that severe intervertebral disc disease was significantly more prevalent in obese individuals compared with non-obese individuals, with a prevalence rate of 73.5% in obese patients compared with 50.4% in non-obese patients. In addition, there was a higher likelihood of obese patients exhibiting Modic changes at any lumbar level, particularly in women. This result corroborates our research outcomes. In the present study, obesity was found to increase the risk of LDH (aHR, 1.12; 95% CI, 1.05−1.19). Associations were also found between dyslipidemia and LDH (aHR, 1.11; 95% CI, 1.09−1.13); however, the relationship between serum lipid levels and back pain remains under debate. Some theories propose that advanced atherosclerosis may play a role in microvessel disease and spinal disk degeneration. Abnormal lipid levels have also been suggested as a potential mechanism that leads to atherosclerosis in the blood vessels of the lumbar region, which in turn can cause low back pain. Additionally, individuals with high TG levels were more likely to experience disk herniation (odds ratio, 2.974; 95% CI, 1.488-5.945) (17). The age-adjusted prevalence of low back pain was inversely associated with HDL cholesterol levels and positively associated with triglycerides; however, after accounting for age, total cholesterol levels were not significantly associated with low back pain in either gender (18). Cholesterol levels are also associated with CBP in patients with DM; elevated LDL cholesterol levels were associated with CBP, whereas elevated HDL cholesterol levels were negatively associated.
Figure 2
Forest plot of risk factors for lumbar disk herniation among individuals with diabetes mellitus.
Recently, smoking was shown to negatively influence LDH, likely due to microangiopathy. In the present study, we found that smokers had a greater probability of suffering from LDH (aHR, 1.18; 95% CI, 1.11−1.26) compared with nonsmokers. Two potential mechanisms for disk degeneration caused by smoking have been postulated: (1) downregulation of glycosaminoglycan biosynthesis and cell proliferation mediated by nicotine, and (2) decreased supply of nutrients to the intervertebral disk. The results of our study align with those of previous studies that established a correlation between DM and degenerative disk diseases (9, 19). Furthermore, previous investigations demonstrated that individuals with DM tend to experience worse outcomes after lumbar discectomy than nondiabetic controls, including higher rates of reoperation and longer hospital stays (20). Elevated preoperative HbA1c levels and long-term DM are risk factors for unfavorable outcomes following cervical laminoplasty in patients with DM and cervical spondylotic myelopathy (21). In a review of patients who underwent discectomy for LDH, Mobbs et al. (20) reported higher rates of LDH recurrence and reoperation in patients with DM (28%) than in controls (3.5%). However, Vogt et al. (22) did not find a correlation between a history of DM and the prevalence of L4-L5 degenerative spondylolisthesis. A duration >10 years and poor control of type 2 DM are risk factors for lumbar disk degeneration, with a longer duration associated with more severe disk degeneration (23). The duration of DM is also associated with the need for spinal surgery, suggesting that the cumulative effects of DM over time may contribute to degenerative changes requiring surgical intervention. Lumbar degenerative disk disease is associated with male sex, HbA1c levels, and venous glucose (24). This study also reported a potential link between DM and lumbar spinal stenosis. A Mendelian randomization analysis revealed a causal effect of type 2 DM on degenerative disk disease that persisted even when adjusted for BMI (25). Magnetic resonance imaging revealed a strong correlation between the severity and duration of DM and the presence of Modic changes (26). DM is also associated with poor outcomes following lumbar discectomy and cervical laminoplasty; a meta-analysis demonstrated that DM increased the risk of postoperative mortality, surgical site infection, deep venous thrombosis, and prolonged hospitalization after spinal surgery (27). A positive correlation has been identified between DM and degenerative lumbar disk disease; high preoperative HbA1c levels and long-term DM are risk factors for poor cervical laminoplasty outcomes in patients with DM and cervical spondylotic myelopathy (21). Several studies have also suggested that hyperglycemia promotes the formation of advanced glycation end products in the nucleus pulposus, which contributes to the progression of disk degeneration. Recent animal studies have examined the association between hyperglycemia and intervertebral disk degeneration. An animal study indicated that DM accelerates disk degeneration through microangiopathy (28); additionally, microvascular disease, a characteristic of DM, may impair disk nutrition and contribute to degeneration. In a study using a rat model, hyperglycemia stimulated disk autophagy, a process of cellular self-degradation, and accelerated stress-induced senescence in nucleus pulposus cells. Autophagy in nucleus pulposus and annulus fibrosus cells also appears to play a significant role in lumbar degenerative diseases. Two studies have demonstrated that high glucose-induced oxidative stress accelerates premature stress-induced senescence in young rat annulus fibrosus cells (29, 30).
Figure 4
The cumulative incidence of lumbar disk herniation in the diabetes mellitus cohort and control cohort.
The evaluation of proteoglycans in the intervertebral disks of individuals with DM has revealed a reduction in sulfate incorporation into glycosaminoglycan molecules, and lower rates of glycosylation. These findings align with those of a previous study conducted by Robinson et al. (31), who observed a lower presence of proteoglycans in the intervertebral disks of patients with than without DM. These variations may contribute to elevated vulnerability to recurrent herniation in individuals with DM, as sulfation and proteoglycans are recognized for their role in reinforcing the collagen matrix of the disk. Nevertheless, despite the histological evidence, clinical studies have not established a conclusive association between DM and the rate of recurrent LDH.
Notably, the administration of AHMs, such as SUs or meglitinides, was found to be significantly associated with an increased risk of LDH; additionally, the simultaneous use of multiple AHMs, which suggests inadequate blood sugar control, was significantly associated with an increased risk of LDH. Conversely, the use of metformin, DPP4is, SGLT2is, or insulin was significantly associated with a lower risk of LDH. No significant association with LDH was observed for TZDs, AGis, or GLP1RAs. In multivariate analyses, patients with DM using any AHM exhibited a higher risk of LDH than those without DM. This suggests that worsening hyperglycemia, which requires medication, is associated with an increased risk of LDH.
This study demonstrated that participants who used AHMs were at a higher risk of LDH than those who did not. Furthermore, patients who were coadministered >2 AHMs were at a significantly higher risk of developing LDH than those who were coadministered <2 AHMs. These findings suggest that patients with poorly controlled DM tend to exhibit more severe disk degeneration than those with adequate control, and that DM is a risk factor for LDH, with an effect dependent on the duration and level of disease control. Additionally, the study observed that the use of medications like metformin, DPP4is, SGLT2is, or exogenous insulin was associated with a lower incidence of LDH.
DM is a complex condition that likely contributes to LDH via various mechanisms; furthermore, the correlation between AHMs and LDH can vary depending on specific clinical circumstances. Therefore, maintaining strict blood glucose control is crucial for preventing or delaying lumbar degenerative diseases in older patients with DM (30). This study acknowledges the importance of further investigation to understand the mechanisms underlying the association between DM and LDH, as well as the disease burden of DM in spinal pathologies; thus, further prospective comparative studies with longer follow-up periods are required to confirm our results.
This study has some limitations. First, it has a retrospective cohort design; observational studies cannot provide insight into the causal relationship between DM and intervertebral degenerative disk disease, even when based on larger sample sizes. Second, the NHIRD lacks relevant clinical and laboratory information, such as BMI, lipid profiles, and HbA1c levels. Third, although we adjusted for various confounding factors, residual confounding factors may have biased our results; cohort studies are usually associated with bias due to uncovered and unobserved confounding factors. Last, our findings may only relate to the Taiwanese population; thus, similar studies should be performed in different countries to determine whether our observations apply to other populations. Despite the notable limitations mentioned above, the primary objective of this study was to evaluate the overall correlation between DM burden and LDH. However, to delve into more precise inquiries, future investigations should consider conducting smaller and more targeted studies.
Conclusions
In recent years, the prevalence of both type 1 and type 2 DM has increased in children and adolescents, indicating that a growing population is at risk of complications associated with this chronic disease. This study aimed to explore the relationship between DM and LDH. Our findings revealed a higher burden of LDH in patients with than without DM, suggesting that advanced DM contributes to the development of LDH. Elevated blood sugar levels, modified proteoglycan composition, microvascular diseases, and cholesterol levels are potential factors involved in the mechanisms underlying LDH in individuals with DM. In conclusion, early and strict blood glucose control is important to prevent the development of lumbar degenerative diseases in patients with DM.
IRB approval status
This study was reviewed and approved by the Institutional Review Board of China Medical University Hospital (ID number CMUH110-REC3-133(CR-1)).
Impact statement
Diabetes mellitus contributes to the development of lumbar disk herniation; thus, early and strict blood glucose control is important to prevent the development of lumbar degenerative diseases in these patients.
Figure 3
Forest plot of risk factors for lumbar disk herniation among individuals with and without diabetes mellitus. AHMs, anti-hyperglycemic medications; aHR, adjusted hazard ratio; AGis, alpha-glucosidase inhibitors; AS, ankylosing spondylitis; CLD, chronic liver disease; CKD, chronic kidney disease; DPP4is, dipeptidyl peptidase-4 inhibitors; GLP1RAs, glucagon-like peptide-1 receptor agonists; MM, multiple myeloma; NS, nonsignificant; SGLT2is, sodium-glucose cotransporter 2 inhibitors; SUs, sulfonylureas; TZDs, thiazolidinediones; 95% CI, 95% confidence interval.
TABLE 1
Characteristics for individuals with and without diabetes mellitus.
TABLE 2
Hazard ratios and 95% confidence intervals for lumbar disk herniation in different models.
TABLE 3
The risks of lumbar disk herniation in the diabetes mellitus cohort relative to the non-diabetes mellitus cohort in terms of different follow-up periods. aHR, adjusted hazard ratio; cHR, crude hazard ratio; DM, diabetes mellitus; IR, incidence rate per 1,000 person-years; PY, person-years. †: adjusted by sex, age, comorbidities, and anti-hyperglycemic medications. ***P < 0.001.
Sources of stress and scholarly identity: the case of international doctoral students of education in Finland
Although stressors and coping strategies have been examined in managing stress associated with doctoral education, stress continues to have a permeating and pernicious effect on doctoral students’ experience of their training and, by extension, their future participation in the academic community. International doctoral students have to not only effectively cope with tensions during their training and their socialization in their discipline but also address the values and expectations of higher education institutions in a foreign country. Considering the increase of international doctoral students in Finland, this study focuses on perceived sources of stress in their doctoral training and how their scholarly identity is involved when responding to them. The study draws on thematically analyzed interviews with eleven international doctoral students of educational sciences. The participants, one man and ten women, came from nine countries and conducted research in six Finnish universities. The principal sources of stress identified were intrapersonal regulation, challenges pertaining to doing research, funding and career prospects, and lack of a supportive network. Despite the negative presence of stress, most participants saw stress as a motivating element. However, in order for stress to become a positive and motivational force, participants had to mediate its presence and effects by means of personal resources, ascribing meaning and purpose to their research, and positioning themselves within their academic and social environment. The study argues for stress as a catalyst for scholarly identity negotiation and professional development when perceived positively.
Introduction
There has recently been a surge of interest in doctoral education amidst academic discourse (e.g., Aittola 2017;Cantwell et al. 2012;Laufer and Gorup 2018), especially with the Bologna Process suggesting key aspects of educational and political importance as well as a supranational character (Baptista 2011). Yet going beyond the technical aspects of doctoral training, the personal experiences of the doctoral student have largely remained underexplored (Amran and Ibrahim 2012). One of the doctoral students' personal experiences in doctoral education that remains salient is stress. Doctoral students may be particularly susceptible to stress and precariousness, as they run a high risk of having or developing mental health problems, especially depression, due to factors like organizational policies, work-life imbalance, job demands, and career prospects outside academia (Levecque et al. 2017). Moreover, doctoral students' perceptions of their training play an important role in the success of doctoral programs (e.g., Aittola 2017). Although stressors and coping strategies have been examined in managing stress associated with doctoral education (Devonport and Lane 2014), stress continues to have a permeating and pernicious effect on doctoral students' experience of their training and, by extension, their future participation in the academic community.
Following the European models and regulations, such as the Bologna Process, the Finnish national system of doctoral education has undergone several reforms within the last decades. The reforms, including the expressed need for internationalization, have enhanced possibilities for participation in education for international doctoral students (IDS) (e.g., Aittola 2017; Peura and Jauhiainen 2018). This increase is evident in the number of IDS in education which increased from 72 students in 2007 to 138 students in 2017 (Vipunen, Education Statistics Finland 2017). The increase of IDS could partially be attributed to the fact that the core funding model of universities in Finland up to 2017 favored doctoral degrees by international students (Finnish Ministry of Education and Culture 2011). Finland has seen the increased steering of doctoral training in recent decades by actions such as using a quota of completed doctoral degrees per university as a funding criterion (see Finnish Ministry of Education and Culture 2015). As a result, the doctoral degreesespecially degrees by international students until 2017have gained new importance, and the work by doctoral students forms a considerable part of research in universities (e.g., Hakala 2009). Therefore, the experiences of pressure and professional development by IDS in Finnish universities becomes an issue pertinent to universities' financial and research support (see Peura and Jauhiainen 2018). Within such circumstances, this case study focuses on IDS doing educational research in Finnish universities. It explores the perceived sources of stress in their doctoral training and how scholarly identity is involved when responding to them.
Theoretical framework
Scholarly identity negotiation as an emotion-imbued learning process Professional identity is individuals' understanding of themselves as professional subjects, influenced by personal and professional trajectories, workplace and interpersonal settings, personally held value systems and ethical standards, beliefs, and interests (Eteläpelto et al. 2014). The professional identity of doctoral students as young researchers in training is understood as scholarly identity. Scholarly identity is central to doctoral students' training, as it strongly engages their overall learning, aspirations, desires, and personally held views of themselves as young academics (Cotterall 2015). Research on IDS shows that their socialization, both self-and other-directed, recursively uses internal and external sources and resources in becoming agentic and internalizing institutional practices, pedagogical paradigms, behaviors, positionalities, and ways of thinking (Anderson 2017;Evans and Stevenson 2010;Pyhältö et al. 2012a;Sidhu et al. 2014).
Despite the exercise of individual agency, scholarly identity negotiation is a bidirectional process. Scholarly identity requires recognition and validation by the intellectual and institutional networks in which the doctoral student should credibly exhibit their competence in a relevant discipline (Cotterall 2015). Validation of one's scholarly identity may further be influenced by the roles emphasized in their academic environment, with doctoral students of education choosing that of the practitioner's over that of the teacher'ssomething which stresses the implications of practical experience within educational research (Kovalcikiene and Buksnyte-Marmienea 2015). Moreover, authority figures influence one's scholarly identity negotiation, with the supervisory and research processes bearing on the subjective experience of being a doctoral student (Baptista 2011). In effect, the intellectual and institutional networks at the university comprise the community of practice that affects how doctoral students experience themselves as scholars; they can nurture scholarly identity negotiation and provide it the space to connect to other identities (Coffman et al. 2016).
While bidirectional, the process of navigating and using networks to negotiate one's professional identity is also an emotional one. In her research with IDS, Cotterall (2013, 2015) argues that identity occupies a central role in doctoral training, as a doctoral student's current understandings and future aims are inextricably tied to their learning trajectories and thinking processes. In addition to that, developing a scholarly identity has emotional dimensions that often go unacknowledged and is a process punctuated by emotion-provoking encounters with key individuals and situations (e.g., supervisory relationship, composing the dissertation, writing in English as a second language) (Baptista 2011; Cotterall 2013; Russell-Pinson and Harris 2019). The emotions that need to be managed on the part of the doctoral student include stress, pressure, and uncertainty, which may be exacerbated under the ever-tightening financial constraints in academia and the expectation for new faculty members to exhibit more talent and productivity in relation to their predecessors (Austin 2002). In addition to that, doctoral students are expected to be mobile and flexible not only as employees of the contemporary job market (Meijers 2002) but also as researchers with future work and funding prospects (e.g., Academy of Finland criteria). This state of impermanence may heighten their perception of stress in their lives. Taking such conditions into account, examining how stress affects the negotiation of scholarly identity becomes important to doctoral students and doctoral education alike.
Stress and eustress
From a broader perspective, stress may refer to an event or succession of events that cause a response, often in the form of "distress," or to a challenge leading to a feeling of exhilaration, in the form of "good" stress (Joshi 2005). While a stressor is a stimulus event that challenges the integrity or health of the body, the stress response is the body's compensatory reaction to that challenge (Lovallo 2005). In the literature, stress is often described as a person's response mechanism or a survival reaction to a negative event (Baum et al. 2001; Folkman 2008; Ursin and Eriksen 2004). More particularly, stress is a response syndrome of negative affects which develops because of prolonged and increased pressures that cannot be controlled by an individual's coping strategies (Kyriacou 1987; Ursin and Eriksen 2004). Stress serves as a mediational process in which stressors (or demands) trigger an attempt at adaptation or resolution that results in individual distress if the organism is unsuccessful in satisfying the demands (Linden 2005). Moreover, stress can be understood as part of a sequential process in which objective environmental circumstances are appraised by the individual either as having no adaptive significance or as straining or exceeding a person's adaptive resources (Lazarus and Folkman 1984; Linden 2005). Amidst environmental demands, regarded as one of the most common factors causing stress (Shapero and Hankin 2009), responding to stress occurs at physiological, behavioral, and cognitive levels (Schneiderman et al. 2005).
The negative characteristics of stress are commonly known (Kyriacou 1987; Lazarus and Folkman 1984), yet its positive side, referred to as "eustress," is less often discussed (Mesurado et al. 2016; O'Sullivan 2011). Stress is not only seen as something "negative" when the individual is unsuccessful in satisfying personal or environmental demands but also as something "positive" when it leads to success in fulfilling such demands (Kupriyanov and Zhdanov 2014; Szalma and Hancock 2008). Reactions to stress depend upon the nature of that stress and the capacities that the exposed entity or organism can utilize to answer the challenges which this stress poses. Eustress is both the process of responding positively to stress as well as the positive outcome of this process (O'Sullivan 2011). O'Sullivan (2011) argued that, at the academic level, the positive response to stress could include studying and working to complete assignments, whereas the outcome of eustress could include productivity and successful completion of assignments and exams. This is supported by research among university students which showed eustress to be a positive psychological response to academic stressors that are perceived as a challenge (Mesurado et al. 2016). Although the experience of stress may play an important role in doctoral student life, there are, to our knowledge, very few studies conducted on the phenomenon and even fewer concentrating on the role of stress in scholarly identity.
Research questions
The importance of scholarly identity for doctoral students as well as the institutional and social influences on it has been acknowledged. Moreover, despite attention to practical rather than emotional considerations shaping one's scholarly identity, stress is understood as a salient emotion affecting doctoral students' wellbeing and resilience. Yet international doctoral students' affective stances toward their doctoral education and the impact they have on how they see themselves as developing scholars remain underexplored. The present study focuses on international doctoral students of Education in Finland as a group that needs to shape a scholarly identity under academic and sociocultural circumstances new to them. Focusing on their experience, this case study addresses the following research questions: 1) What sources of stress do international doctoral candidates of Education in Finland perceive during their doctoral training? 2) How is scholarly identity involved in response to perceived sources of stress?
Participants
Eleven IDS pursuing a research doctorate in Education participated in this case study. Their doctoral training took place within educational sciences at the universities of Eastern Finland, Jyväskylä, Lapland, Oulu, Tampere, and Turku. In the research doctorate model used by Finnish universities, the workload lies in accomplishing the doctoral research; this may be a source of stress, as university students are used to studying but have very little previous experience of conducting research. The participants were one man and ten women from China, Greece, Hong Kong, India, Japan, Namibia, Poland, Turkey, and Vietnam. Five of them were in the early (exploring the literature, designing the studies, collecting data, preparing the first manuscript), three in the middle (having submitted or published manuscripts), and three in the final stage (having submitted or published the last manuscript, preparing the dissertation, preparing for thesis defense) of their training.
Research approach and data analysis
The interviews took place from March to April 2018, in person and via Skype. The participants were reached by sending an invitation via the email list of the Finnish Multidisciplinary Doctoral Training Network on Educational Sciences (FinEd). The participants were informed of the content and aims of the study as well as their rights to anonymity, withdrawal of participation, and reading the final manuscript before its submission. Upon signing the informed consent form, the IDS participated in a semistructured interview in English (see Appendix 1). The interview addressed the participants' background and was based on literature on the concepts of scholarly identity and stress. The qualitative approach taken aimed to highlight how participants perceived their experiences and contextualized within social and relational dynamics at the university (Labuschagne 2003). The interviews were conducted by the first author, who at that time was a doctoral student herself, thus creating an interview climate of ease, closeness, and confidentiality. The interviews were audiorecorded (average 42.27 minutes) and transcribed (average 9.5 pages; Calibri, font 11, single-line spacing and a break between speaking turns) by the first author.
All authors became familiar with the transcripts, which were coded following the model of thematic analysis suggested by Braun and Clarke (2006). The first author coded the transcripts from the perspective of scholarly identity, while the second author coded them from the perspective of stress (see Tables 1, 2 and 3). This helped identify stressors and their relationship to scholarly identity negotiation, while it also enabled triangulation during the coding phase (Bogdan and Biklen 1998), as the similarities found in naming some codes were discussed with the third author. The codes were organized into themes, which were understood as abstract constructs capturing the meaning of units of textual data and identifying possible patterns at different levels of granularity (Guest et al. 2012). The main stressors were examined against the themes for scholarly identity. The recorded and transcribed data have not been altered, but repetitions in the selected excerpts are indicated by […] for easier reading.
Findings
In this section, each perceived source of stress is discussed in relation to the ways scholarly identity was involved in response to stress.
Intrapersonal regulation
Intrapersonal regulation (10/11) involved a sense of conflict between internal and external personal and contextual demands, but also personal resources used to negotiate this conflict. Participants (6/11) referred to expectations they had of themselves concerning becoming skilled at a particular field or method and investing more time than anticipated into their doctoral studies. By examining some of these expectations, some participants tried to make sense of how they themselves might be in the future, should they continue as researchers. For instance, talking about whether stress eases at a later time, IDS2 says: So, they do not struggle […] to grasp the basics […] they enjoy more than they struggle. This is how I want to think about it, because otherwise I think that my motivation will drop dramatically, if I think that it will always be the same, hard and stressful and struggling.
This participant negotiates stress by taking a protective stance toward her motivation and assuming that the present stress she feels, due to her current lack of expertise, will be replaced by quality and enjoyment. The negotiation that takes place may temporarily fortify her scholarly identity against defeatism by envisioning what research may be like at a more advanced stage. Stress from intrapersonal regulation further involved trying to determine one's own place. Nearly all (10/11) participants referring to how they saw themselves as doctoral students replied in a manner that vacillated between roles. The most self-reflective comment would be IDS10's: This excerpt echoes the ambivalence in other participants' answers, but it also highlights the in-between position doctoral candidates might feel they occupy as well as how being funded or not may affect the title or role one may decide upon for themselves. While the institution may be a safe place, participants were still unsure of how to describe themselves within their respective institutions.
A critical comparison of oneself as a scholar to others was also seen as a challenge (10/11). Despite this comparison being positive regarding their doctoral training, a few participants made an unintentional comparison between themselves and others concerning their professional values. For instance, being a researcher is not merely a job one does but a means of personal development. This value is echoed in IDS3's understanding that research is something "maybe you can feel personally connected to and give you this kind of sense of self-actualization, self-fulfillment." However, precisely because doing doctoral studies "is a very privileged life" (IDS3), one should remember that "teaching a bit and doing research a bit, like, together" (IDS3) as well as contributing to societal change are important aspects of research to uphold: Scholarly identity may use others as reference for who one aims to become as a scholar and the purposes one identifies with doing research. The professional values doctoral candidates develop through their studies, but also through watching colleague's behavior, influence how they esteem what they are doing and the goals they set for themselves. Comparing oneself as a scholar to others was not seen as a source of stress per se; rather, it served participants as a way to express their appreciation of their post as doctoral students and position themselves as developing scholars in relation to perceived orientations in the academic world. Participants abated the perceived stress from challenges in intrapersonal regulation by employing personal resources, like learning and career goals, their professional values, their passion, motivation, and self-discipline. For instance, IDS11 stresses her unflinching determination to complete her doctoral training and honor the sacrifices she has made. IDS1 claims that "[s]tress is necessary for development" and reminds herself that "[she is] not doing it for… just for the money but also for [her] own meaning and also for, like, something [she] believe [s] in." For other participants, self-regulation, sense-making, and being merciful toward oneself helped participants view stress as a motivational force in their studies. Stress was seen as "a very positive aspect" that helped identifying what is important to one's life and one's research and "what it means to be involved in academia" (IDS10). Furthermore, seeing stress as "a motivation by itself" urges one to "try harder" and "become more competent and more efficient" as a result of that effort (IDS2). However, because "[stress] can be very destructive" (IDS2), it is important that one concentrates on the short-term (IDS2), allow themselves to learn through trial and error (IDS1), and take a step back when "getting completely overworked" (IDS9), especially in the beginning. The participants' stance toward stress may be a positive one, although they actively employed personal resources which came to bear on the progress of their research to manage perceived stress. By doing so, they attributed stress a positive influence on their studies and saw it as part of their scholarly identity enacted through their doctoral training and their participation in the academic environment.
Challenges in doing research
While finding participants, collecting data, publishing, and presenting one's work were part of research practicalities, the practical challenge of funding one's research was found to be rather stressful (8/11). Supporting oneself in a foreign country causes "financial stress" (IDS1) and receiving "a stipend that can barely […] support your living" as a doctoral student is not the same as other "people earning money, like, real money by working" (IDS1). Moreover, receiving a grant from another country or not at all decreases one's sense of responsibility toward the Finnish institution or their positive attitude (e.g., IDS4, IDS7). These instances may create a belief that being a scholar is not as socially valid as doing other "real" jobs but also gives rise to discrepancies between those whose research is and is not financially recognized. IDS10, in particular, was vocal about funding being a pervasive concern that reflects "broader changes in society" and raises "a question of […] what kind of research is prioritized." She adds: that's the major concern. I think it's not just my concern. I see a lot of people around me also struggling with that.
[…] Yeah, I've learned how difficult it is to get money, how many people are fighting for the money, which is-which made me also give up at some point, because then I just though there's no point, no sense to do this.
While this participant later mentioned she would "try again maybe," other IDS might be too disheartened to do so. Funding was important for concentration on their studies, especially since they would not have to split their time among a full-time job, family, and doctoral training (e.g., IDS4, IDS5, IDS7, IDS8, IDS11). The difficulties and subsequent demotivation that occur from the stress to procure funding for one's research can be detrimental to the development of a resilient and focused scholarly identity.
In addition to funding or lack thereof, future prospects were found to be challenging for more than half of the participants (7/11). Participants had to cope with stress stemming from doubts, reservations (7/11), and uncertainty (6/11), which they came to see as inherent to being a researcher. Participants felt uncertain of their future development and opportunities to work as future researchers (e.g., IDS2, IDS3, IDS4, IDS11). Moreover, they pondered the significance of their study "for the actual educational context" (IDS7) as well as the validity of what they were doing and their skillset (IDS10). Looking at the issue more broadly, IDS10 explains that: it's not just for doctoral students. I see also all the other academics and staff in university.
[…] Teachers and researchers are never certain what they-often are uncertain about their next year work and such. So this is probably the major stress, stress-related factor.
It seems that to negotiate their scholarly identity, doctoral candidates need to be aware and, indeed, face their own insecurities about the importance of their research as well as the uncertainty of a secure future in academia. Overall, the participants seemed to accept the stress coming from such uncertainty, yet the uncertainty itself can deprive scholarly identity of a positive outlook regarding future prospects and the legitimacy of one's research interests.
From the perspective of scholarly identity, the importance of this theme lies in how the presence of stress in its negative form seems to impact the health of the participants as well as their emotional well-being. Some participants (4/11) commented on how their research is always cognitively present, making them feel so tired they "just want to completely shut down" (IDS1) or have "haunted" sleep (IDS6), because of unresolved research-related issues or the day of defending their dissertation (e.g., IDS2, IDS11). In addition, it causes feelings of "inner anxiety" for not working as efficiently, focused, or self-disciplined as one might have wanted, leading one to "break down to some extent" (IDS3), and feel "emotionally agitated" (IDS3), "crazy" (IDS11), or guilty (IDS2, IDS11) when spending time on something else. Stress reaches farther, however, involving the body. IDS2 complained that her "back hurts a lot," IDS8 mentioned weight gain due to lack of exercise, IDS10 talked about how "[s]tress-related issues were manifesting [themselves] in a physical way," and IDS5 described her exhaustion: The pressure of, em, especially during the data collection period, I was so exhausted and at one point I-I-I was so burn out and, and I started crying in front of the children. I didn't know, but my voice was gone… My voice was gone. I was-I was totally fatigued, you know, like tired, the whole body.
Not only working oneself too hard to realize a study but also pondering the reasons behind doing research in the first place can be stressful, with unanswered questions becoming "suffocating" and "stressful" questions, despite it being "important to keep asking these questions and also trying to find answers" (IDS10).
The mental and physical toll stress can take on doctoral students could intensify frustration (6/11) and feelings of inadequacy (4/11). Frustration, for example, involved the futility of preparing funding applications in the beginning of one's doctoral training (IDS1) and past research-related experiences (IDS4). Further pressure may be applied by not feeling "competent enough for this" (IDS2) and "[i]nadequate in the concepts, in explaining the whole thing, in, uh, knowing […] all the background and to make the research question, uh, impeccable" (IDS1). The degree to which one chooses to be consciously influenced by stress may lie with lifestyle choices (IDS2), yet its subconscious effects are not always perceived in time, and its subtle nature can be detrimental to scholarly identity negotiation. The process of shaping one's scholarly identity while completing one's doctoral studies may become infused with insecurities and fatigue that could affect not only the here and now of the doctoral experience but also long-term commitment to a career in academia. Trying to find meaning in one's research work and making sense of what doing research entails is part of scholarly identity negotiation. However, when this effort is a constant constraint or struggle for a doctoral student, scholarly identity itself might become too loaded with negative affect. Moreover, stress that becomes noticeable psychosomatically might render scholarly identity uncertain in terms of viability in the long run.
What seemed to counter such negative affect was participants' view of their doctoral training as a process (8/11). They chose the words process, journey, and development to refer to their learning trajectories through doing research. This choice of words reflects the realization that learning to become a researcher is a slow and long process requiring milestones and skill development along the way. Moreover, this process involves deeper self-awareness and learning how to think like a researcher, as: Now, slowly, I think I start building competencies that make me feel more, eh, of a young researcher than of a student, but I still have a long way to go, I think. (IDS2) It is important to note that some participants regarded their learning trajectories as a process that can be creative (4/11) and rewarding (5/11). The conscious adoption of such an outlook on the process of learning through doctoral training is not only important for the personal growth of the individual candidate at the time of training but also for lasting professional empowerment. This is underlined by participants clearly taking ownership of their research (7/11). As IDS3 states, "I think it's for a student to be in charge of his or her studies, like, to take charge and not think that it's somebody else's study." Being "responsible and dedicated to your research" (IDS11) reflected participants' focus and belief of their accountability in doing research, regardless of the way they saw themselves as doctoral candidates. During this learning process, however, it seems important that boundaries be drawn in order to maintain healthy progression and enhance well-being, such as balancing between personal and academic life (e.g., IDS2, IDS5).
Lack of supportive networks
Supportive social networks, discussed by all participants, involved the academic culture and the lack of social and academic support from colleagues, supervisors, and peers. While autonomy over their studies was valued by the participants (6/11), most participants (10/11) remarked that "[l]oneliness features here" (IDS1) and noted the lack of "community as such within which you can really learn" when, "from all the education years," they are traversing "the most individualistic phase" (IDS2). These circumstances may give the impression that "you have to find everything on your own" (IDS2) and that "right now it seems a very lonely journey" (IDS9). More importantly, however, it detracts from the feeling that the doctoral candidate is "a part of something bigger" (IDS2) or that they could have "learned more or differently" (IDS8). Although this sense of not belonging by some participants was not viewed as a source of stress, but rather a challenge, it does shape scholarly identity in terms of beliefs. In particular, it shapes the belief that learning to be a researcher is an individualistic and lonely process, which may not necessarily be attached to a wider, meaningful view of the research field.
This loneliness and individualism may be moderated by the supervisor-candidate relationship. Supervisors become "[o]ne of [their] most important mentors" (IDS2), "a huge resource" (IDS9) providing support, trust, expert knowledge, guidance, professionalism, mediation between faculty and candidate, time to discuss conceptual and methodological topics, and motivation. However, the bond some participants shared with their supervisors went beyond a mere professional relationship, enhancing a sense of responsibility and motivation (e.g., IDS5, IDS9). The supervisor-candidate bond is not only important for doctoral students in the beginning of their doctoral training but also for the more advanced ones. However, lack of support or a bad supervisor-candidate fit may accentuate candidates' feelings of working unsupported and by themselves.
Beyond developing a relationship with one's supervisor, nurturing a sense of belonging involved actively seeking and providing a supportive social network. Commitment to an academic career might be enhanced by the perceived presence of an international community at the university (e.g., IDS4, IDS7, IDS9), lending participants' scholarly identity an international character which might be "good as a trade for a researcher" later in their career (IDS2). At the same time, however, the Finnish academic communities and their potential for learning should not be discounted, while a strictly international character might also have a negative impact on scholarly identity. As IDS1 remarks, "having a good social circle is also, uh, a good thing, uh, because you can share your concern, your stress, uh, the uncertainty"; yet acquaintances and friends of international backgrounds go abroad once more, making relationships seem transitory. Social networks being "term-limited" (IDS8) and energy-intensive (IDS2) could be a source of stress for some, as the interpersonal support system needs to be maintained when there is not enough time and may, at times, seem futile, but it can be countered by active involvement in building a social network of one's own (8/11), encompassing academics and nonacademics (e.g., IDS1, IDS7, IDS9).
Especially concerning a social network of academics, communication (7/11) and collaboration (6/11) were found to be important for half of the participants, followed by connecting through research (4/11), building a researchers' community (4/11), and sharing knowledge (3/11). Communication mostly involved people at the university; it did not only involve the exchange of ideas or feedback (IDS2, IDS6, IDS9) but also the sharing and validation of stress in doctoral studies: it's really nice to share my anxiety with other PhD students and then-it's really nice to know that others also have this kind of stress and anxiety, what I also have, so it's really nice. (IDS4) Stress is viewed as a natural component of becoming a researcher, which can be discussed and countered by peers' insights. However, doctoral candidates might not be good at taking initiative to organize informal gatherings, thus contributing to a lack of peer supportive networks within and beyond the university. Among others, this might obstruct the flow of information one needs (e.g., conferences, administration), as others "don't even know what you know, so they don't even know what they should inform you about" (IDS2). This might be problematic for scholarly identity negotiation, since not only might it make one feel excluded but also cause the loss of opportunities for this negotiation to take place. Collaboration was important for "feel[ing] the stimulus of dynamics or conversations or discussions in a project" (IDS8) and taking advantage of experienced researchers' advice on one's work. Like peers becoming "a sounding board" for the doctoral candidate (IDS9), established researchers become the scaffolding for their professional development: Eh, sometimes it just helps to hear how they think and it helps me a lot to frame also, uh, the ideas in my mind.
[…] But, also, eh, another person with whom we have been collaborating, em, and he-his philosophical way of thinking-I mean, scientifically philosophical-It might sound contradictory, but it is not very much. (IDS2) More experienced colleagues serve as catalysts and guides for contemplation of not only one's work and professional relationships but also the nature and meaning of doing research. In that regard, scholarly identity negotiation draws on the deeper thinking one does on one's own, whereby one learns to be at once a scientist and a philosopher. This deeper thinking is influenced by the ways of thinking used by expert others in close proximity as well as by international authoring partnerships that provide scholarly identity negotiation opportunities by means of conceptual and inspirational frames. Actively partaking in communication with peers and forging collaborative relationships with more knowledgeable researchers can help construct a sense of a research community that can teach how to negotiate or safeguard against stress. In the following chapter, the main issues arising from the findings are discussed.
Discussion
This case study addressed sources of stress as perceived by IDS of Education in Finnish universities and how scholarly identity is involved in response to them. The principal sources of stress were intrapersonal regulation, challenges in doing research, and lack of a supportive network. Intrapersonal regulation encompassed participants' concerns about becoming skilled, positioning themselves as young academics in training, and taking others as a point of reference for their scholarly identity. Scholarly identity was employed through goals, values, and motivation to regulate one's reactions to stress and interpret stress as a positive element in their training. Challenges in doing research involved research practicalities, acquiring funding, and reservations about researcher career prospects. Participants' reaction to this source of stress was connected to accentuated feelings of inadequacy and frustration. To counter stress, scholarly identity was employed via participants' view of the process of becoming a researcher as a longitudinal project, demanding constant development of skills, more profound thinking, increasing independence, and ownership of one's own research and progress. Lack of supportive networks concerned the absence or short-lived presence of personal and collegial relationships that can afford academic support and a stronger sense of membership. Scholarly identity involved the acceptance of stress as a shared experience and the knowledge that its negative influences can be moderated by insights, inspiration, and support found in members of the academic community. A noteworthy observation is that, while stress did have negative manifestations, participants largely regarded stress as positive, i.e., eustress, and a necessary part of their studies.
One of the perceived sources of stress was intrapersonal regulation. The desire to become a skilled scholar, time investment, and positioning oneself as a doctoral student in relation to the local academic context were informed by participants' professional orientations, motivations, and self-monitoring. This supports the finding that emotional aspects, like self-discipline, motivation, and interest, promote, rather than hinder, international doctoral students' studies (Sakurai et al. 2012). Contrary to Pyhältö et al. (2012b), who identified motivation and self-regulation in doctoral students as problems in general work processes that are rather difficult to solve, the findings of the present study suggest that international doctoral students in Education were very determined, and the positive outlook on stress as a motivator helped them orient themselves in terms of learning, roles, and interests. The way participants understood themselves as professional subjects (Eteläpelto et al. 2014) drew on professional aspirations and interests (Cotterall 2015) and on internal resources (Anderson 2017; Evans and Stevenson 2010; Pyhältö et al. 2012b), which helped them regard stress as a motivational force in their studies. International doctoral students' high motivation, at least initially, has been seen in their proactive involvement in study abroad (Sakurai et al. 2012; Zhao et al. 2005), while active strategies on the part of doctoral students have been associated with reduced burn-out risk (Stubb et al. 2012). Despite the optimistic view of stress participants seemed to share, the choice for it to be treated as a constructive element indicates the need to foster international doctoral students' persistence in the long run.
Another perceived source of stress was challenges in doing research. Practical aspects of doing research and financial and occupational insecurity were found to affect scholarly identity (see also Ortlieb and Weis 2018). While higher education in Finland is not subject to tuition, IDS have frequently reported problems with finances and lack of research funding, among other departmental issues, as a hindering factor to their studies (Pyhältö et al. 2012b;Sakurai et al. 2012). Financial support affects doctoral students' retention, persistence, and timely completion of their doctoral degree (Ehrenberg et al. 2007;Zhou and Okahana 2019). Yet financial preconditions for doing doctoral studies were considered a problem by only one-fifth of the participants, while the problems participants did emphasize were rather pedagogical in nature (Pyhältö et al. 2012b). Pyhältö et al. (2012b) attributed these findings to most doctoral students registered for full-time studies trying hard to obtain funding from funds, foundations, and institutions. For some participants, having to strive for or lacking funding made scholarly identity feel invalid when comparing oneself to others with "real" jobs. It, further, caused them to question the legitimacy and relevance of their study within the educational contexts researched. Coupled with uncertainty about career prospects in academia, this may affect their outlook on not only present and future circumstances but also one's validity as a developing scholar. This uncertainty is interesting considering the higher levels of career interest IDS in Finland typically entertain in comparison to their native counterparts (Pyhältö et al. 2019).
The third perceived source of stress was the lack of supportive networks. Influential aspects of social inclusion at the university (e.g., information circulation, knowledge sharing, and project participation) were not always present, and participants reported a sense of loneliness in their doctoral training experience. Doctoral students experiencing stress and loneliness are more likely to face burnout and attrition (Cornér et al. 2017;Pyhältö et al. 2015). Research in Finland indicates that the percentage of doctoral students feeling outside a scholarly community is high (30%), with those in Education feeling the most isolated in terms of membership, perhaps because of not completing their doctorate within a research group (Pyhältö et al. 2009). Moreover, unlike Finnish students, IDS have been found to not strongly associate with peers and other doctoral students, seemingly leaving them without close collegial support in case of supervisory challenges (Sakurai et al. 2012). In the present study, participants seemed to have developed positive relationships within their academic community, including mostly supervisors and other international students, but be preoccupied with how transient these relationships can be. This could attenuate a sense of belonging to the university, since it does not involve a close connection to Finnish counterparts in their academic community, and imbue the interpersonal aspect of scholarly identity with a sense of futility.
Some social networks helped participants with perceived sources of stress in their doctoral training, validated their membership with the university as an institution, and facilitated the exchange of ideas. This supports the suggestion made by Sakurai et al. (2012) to explain a lack of statistically significant association between broader communities and participants' satisfaction with their training or desire to drop out; more than the training, a sense of attachment, friendship, and general well-being may be experiences that relationships with peers and colleagues contribute toward. Scholarly identity negotiation may further involve the acceptance of stress as a shared experience and the knowledge that its negative influences could be moderated by insight, inspiration, and support found in members of the academic community, such as supervisors. In Finland, doctoral students in educational sciences reported supervisory challenges more frequently, perhaps because their background in pedagogy raises their awareness concerning educational practices and communication problems (Pyhältö et al. 2012b). The present findings corroborate the importance of doctoral supervisors for students' well-being; the satisfaction from the developed relationship derived from the expert knowledge and guidance participants felt they received, but also from the positive working relationship (Sidhu et al. 2014). Doctoral supervisors strongly influence how the personal and professional attributes of their students will be nurtured toward becoming contributing members of an academic community and how effectively the supervisory process and challenges are managed toward critical thinking and emancipation (Friedrich-Nel and Mac Kinnon 2019). Moreover, a functional supervisory relationship based on mutual trust, sensitivity to the student's needs, clear communication, constructive feedback, and explicit strategies for the completion of the doctoral degree are conducive to students' well-being, satisfaction with their doctoral training, and timely completion of their doctorate (Cornér et al. 2017;Pyhältö et al. 2015). It is worth mentioning, however, that firmer guidance might be more formative in the early years of IDS' training until they gain domain-specific expertise, strengthen feelings of self-efficacy and self-regulation, and abate feelings of loneliness (e.g., Pyhältö et al. 2012b).
Although stress may be negative, resulting from being unsuccessful in satisfying the personal or environmental demands (Szalma and Hancock 2008), more than half of the participants saw stress as a motivating aspect and necessary for development. It was regarded as an ever-present element of their doctoral training, yet one that facilitated their progress (e.g., setting deadlines for oneself), rather than paralyzed them. The findings show that experiencing stress as eustress and seeing stressors as challenges, rather than negative events, become resources themselves for succeeding in one's doctoral training. The findings are in line with O'Sullivan (2011) who argued that eustress supports studying and working on assignments, thus heightening productivity and successful completion of assignments and exams. Having such an outlook on stress might be encouraging for the overall development and acculturation of IDS as young scholars, yet the need for self-regulation skills as well as balance between demands and resources should be paid attention to by both institutional and individual initiative.
Limitations
The study explored stress and scholarly identity, focusing on IDS in educational sciences in Finland. The number of the interviews, while not high, was sufficient for code saturation (e.g., Baker and Edwards 2012). To see if the findings are reflective of other IDS's experiences, future research should include doctoral students in other countries and different disciplines. Moreover, follow-up interviews with the participants of the present study could yield interesting new insights, since most of them were in the early stage of their doctoral training. Another limitation is that the interviews took place in English, which was a second or foreign language for most of the participants. However, all participants were highly proficient in English and used to discussing research-related matters using English. As doctoral programs vary in implementation across and within universities and countries, detailed practical suggestions may be hard to make. However, it can be suggested that information about access to mental healthcare services be more readily available to doctoral students. Moreover, doctoral programs could create a more supportive environment offering the students activities validating their membership. Finally, universities may need to attend to special challenges posed by the in-between state of doctoral students (i.e., neither students nor researchers) by acknowledging their status in the academic career track as researchers in training in formal documents (e.g., using the term "doctoral researcher").
- Would that be the term you would use for yourself?
2. How do you see yourself as a young researcher?
3. What do you see as the (primary) responsibilities of a doctoral student?
4. What do you think you have learned so far? (as a doctoral student or young researcher in training)
5. What do you think are some of the constraints and resources at your current workplace?
- What about Finnish doctoral education and Finnish higher education institutes?
- How about your current phase/stage of doctoral training?
6. What are your views on life as a researcher?
7. How do you see yourself as a (developing) scholar?
- Where do you draw your motivation from?
- What have you found to be difficult during your doctoral training?
- What have you found to be easier during your doctoral training?
High density polyethylene and zirconium phosphate nanocomposites
Nanocomposite based on high density polyethylene (HDPE) and layered zirconium phosphate organically modified with octadecylamine (ZrPOct) was obtained through melt processing. The ZrPOct was synthesized by precipitation and modified by suspension and sonication procedures. The initial and maximum degradation temperatures (Tonset and Tmax) were increased. A slight decrease of the crystallinity degree was detected. A reduction of the elastic modulus and elongation at break was noticed. The lamellar spacing was increased (3.3 times higher). The storage modulus decreased, and low-field nuclear magnetic resonance (LFNMR) revealed an increase in molecular mobility. The presence of octadecylamine enhanced the entrance of HDPE into the ZrPOct galleries. Several characteristics of HDPE were changed, indicating that intercalation was successful. All results indicated that a partially intercalated and/or exfoliated nanocomposite was achieved.
Introduction
Over the past two decades, great advances have emerged in nanotechnology, which is considered a promising technology of the 21st century. Great potential for innovations that increase economic prosperity and sustainable development is expected [1,2]. A recent study indicates that nanocomposites occur naturally through a synergistic effect, as found in nacre (the outer layer of pearls), consisting of proteins, polysaccharides, and nanometric layers of calcium carbonate (CaCO3). To produce advanced materials, researchers are investigating natural composites and trying to match their level of structural control and properties [3]. The addition of fillers enhances the mechanical, thermal, and barrier properties of the composites [4,5]. Synthetic nanocomposites have been prepared with various polymers, and montmorillonite (MMT) is the most common filler used. In turn, the use of layered zirconium phosphate (ZrP), an inorganic and synthetic filler, in the formation of a nanocomposite results in a material with a higher aspect ratio, purity, and surface energy advantages in relation to MMT [6,7].
The polyethylene family is diversified. High density polyethylene (HDPE) and linear low density polyethylene (LLDPE) are prominent members. Although both are produced with Ziegler-Natta and metallocene catalysts, they are a homopolymer and a copolymer, respectively. HDPE is a homopolymer obtained by coordination polymerization of ethylene gas. Its polymeric chains are linear and constituted by the catenation of ethylene mers. LLDPE is a copolymer obtained by coordination polymerization of ethylene gas with an alpha-olefin (propylene, butene, hexene, etc.). It is considered a branched polymer, since its polymeric chains are constituted by mers of ethylene and mers of the alpha-olefin (propylene being commercially more common). As a consequence of this branching, the properties of LLDPE differ considerably from those presented by HDPE. For instance, the density, degree of crystallinity, melting temperature, and Young's modulus are quite different [8]. High-density polyethylene (HDPE) is a semicrystalline polymer with many advantages: low density, strong tenacity, and high resistance to impact, abrasion, and corrosion. Additionally, inertness to the majority of chemicals, low toxicity, and a long lifetime contribute to its wide industrial application [9]. Nanoparticles of silicalite-1 have been used with HDPE, and the rheological and physical properties were investigated; the authors observed a slight effect on the melting temperature and onset degradation temperature and a decrease in the intensity of the HDPE diffraction peaks [10]. It was found that ultrasonic treatment enhanced the intercalation of HDPE into the lattice layers of clay by increasing the d-spacing up to 50% [11]. The impact strength, modulus, and flexural strength of HDPE/exfoliated graphite nanocomposites were compared to other types of reinforcement (glass fibers and carbon black); polymer nanocomposites from HDPE/exfoliated graphite were equivalent in flexural stiffness and strength to HDPE composites reinforced with glass fibers and carbon black [12]. A synergistic effect introduced by nano-CaCO3 and OMMT nanoparticles in HDPE has also been reported [13,14]; it was inferred that a higher degree of exfoliation for nanosized clay particles is key to enhancing the rheological, mechanical, and flame-retarding properties even when small amounts of clay (less than 1%) are used.
Considering that no scientific article related to a nanocomposite of HDPE and zirconium phosphate modified with a long-chain amine was found, the aim of this work was to investigate the influence of the intercalation of octadecylamine inside the ZrP galleries on the characteristics of HDPE. Through thermal, crystallographic, thermo-mechanical, tensile, and molecular mobility analyses, the formation of an intercalated and/or exfoliated nanocomposite was evaluated.
Synthesis of layered zirconium phosphate
The ZrP was synthesized by the direct precipitation method [15]. A 12 M phosphoric acid (H3PO4) solution and zirconium oxychloride were mixed in the proportion P/Zr = 18. The system was kept under agitation and reflux at 110 °C for 24 hours. After that, the resulting material was centrifuged (3400 rpm for 30 min) and washed successively with deionized water in order to obtain a neutral pH and the absence of chloride [16-18]. The resulting solid was placed in a freezer at -80 °C for 24 hours and then submitted to lyophilization for 4 days.
Modification of the layered zirconium phosphate
The ZrP was modified by intercalation of the amine according to the experimental procedure in [19,20]. A certain amount of α-ZrP and a solution of octadecylamine (2:1 alcohol/water) were mixed at an amine/α-ZrP molar ratio of 1.5. The product was centrifuged and washed successively with alcohol to remove the excess amine. As before, the resulting solid was placed in a freezer at -80 °C for 24 hours and then submitted to lyophilization for 4 days. The final product was labeled ZrPOct.
Preparation of nanocomposites
Nanocomposites of HDPE and layered zirconium phosphates (neat and organically modified with octadecylamine), with a fixed phosphate percentage of 2% w/w, were processed in a counter-rotating twin-screw extruder. The extruder was adjusted to operate at 100 rpm with a temperature profile of 160 °C (input) and 170 °C, 180 °C, and 190 °C (output), conditions recommended by the polymer manufacturer. The extrudate was cooled and granulated. In order to homogenize the nanocomposite, the material was reprocessed under the same conditions. In order to characterize the material, thin plates were pressed in a Carver press at 210 °C, under a load of 5000 kg, for 7 minutes.
Low-field nuclear magnetic resonance (LFNMR)
To measure the relaxation time of the HDPE and nanocomposites, 1H low-field nuclear magnetic resonance (1H LFNMR) analysis was carried out in a Maran Ultra 23 low-field NMR device. The relaxation time (T1) was measured in time intervals of 10 seconds with 20 points, at 27 °C. The results were expressed in terms of domain curves.
Dynamic-mechanical analysis (DMA)
Dynamic-mechanical analysis (DMA) was carried out in a TA Instruments Q800 instrument, using rectangular specimens with dimensions of 8 x 1 x 0.1 cm, scanning from -150 to 100 °C at a heating rate of 2 °C/min and a frequency of 1 Hz, in the single-cantilever mode. The storage modulus (E'), the loss modulus (E'') and the loss tangent (tanδ) were determined.
Wide angle X-ray diffraction (WAXD)
The WAXD was performed in a Rigaku Miniflex diffractometer, employing CuKα radiation with a wavelength of 1.5418 Å and a Ni filter, at a voltage of 30 kV and a current of 15 mA, with 2θ between 2° and 35°. From the diffractograms, the crystallographic planes were identified and the interlamellar spacing was calculated using Bragg's equation.
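As an illustration, the interlamellar spacing can be computed directly from the measured diffraction angles via Bragg's law. The short sketch below assumes a Cu Kα wavelength of 1.5418 Å and a first-order reflection (n = 1); the example angles are taken from values quoted in this work, and the script itself is illustrative rather than part of the original analysis.

```python
import math

WAVELENGTH_A = 1.5418  # Cu K-alpha wavelength in angstroms (n = 1 assumed)

def d_spacing(two_theta_deg: float) -> float:
    """Return the interplanar spacing d (in angstroms) from Bragg's law:
    n * lambda = 2 * d * sin(theta), with n = 1."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_A / (2.0 * math.sin(theta_rad))

# Example: basal reflections reported in this work
for label, two_theta in [("ZrP (001)", 12.0), ("ZrPOct (001)", 4.1)]:
    print(f"{label}: 2theta = {two_theta:.1f} deg -> d = {d_spacing(two_theta):.2f} A")
```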
Thermogravimetry (TGA)
The thermal stability was evaluated in a TA Instruments Q500 thermogravimetric analyzer. The thermogravimetric curves were obtained between 30 and 700 °C, at 10 °C/min, under a nitrogen atmosphere. The initial, maximum, and final degradation temperatures (TONSET, TMAX and TFINAL) were determined, as well as the residue.
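To illustrate how such degradation temperatures can be extracted from a thermogravimetric curve, the hedged sketch below takes TMAX as the peak of the derivative (DTG) curve and estimates an onset temperature at a small fixed mass-loss threshold. The 2% threshold and the function/array names are assumptions for illustration only, not parameters taken from the original analysis.

```python
import numpy as np

def tga_degradation_temperatures(temp_c, mass_pct, onset_loss_pct=2.0):
    """Estimate the onset temperature (at a small mass-loss threshold), the
    temperature of maximum mass-loss rate (DTG peak), and the final residue.

    temp_c:   1D array of temperatures (deg C), monotonically increasing
    mass_pct: 1D array of residual mass (%), same length
    """
    temp_c = np.asarray(temp_c, dtype=float)
    mass_pct = np.asarray(mass_pct, dtype=float)

    # Onset: first temperature at which the sample has lost `onset_loss_pct` %
    lost = mass_pct[0] - mass_pct
    onset_idx = int(np.argmax(lost >= onset_loss_pct))  # 0 if threshold never reached
    t_onset = temp_c[onset_idx]

    # T_max: temperature of maximum mass-loss rate (most negative dm/dT)
    dtg = np.gradient(mass_pct, temp_c)
    t_max = temp_c[int(np.argmin(dtg))]

    residue = mass_pct[-1]
    return t_onset, t_max, residue
```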
Differential scanning calorimetry (DSC)
The calorimetric properties were determined using a TA Instruments Q1000 differential scanning calorimeter (DSC). Three thermal cycles were applied. First, the sample was heated from 40 to 200 °C at 10 °C/min under a nitrogen atmosphere and then held for 2 minutes in order to eliminate the thermal history. Next, it was cooled to 40 °C at 10 °C/min. Finally, a second heating cycle was carried out under the same conditions as the first. The melting temperature (TM) was measured from the curve of the second heating cycle. The crystallization temperature (TC) was determined when possible. The melting enthalpy (ΔHM) was used to calculate the crystallinity degree (XC), considering the melting enthalpy of 100% crystalline HDPE (290 J.g-1) and correcting for the HDPE content.
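The degree of crystallinity follows directly from the measured melting enthalpy. A minimal sketch of the calculation is given below, assuming 290 J/g for 100% crystalline HDPE (as stated above) and a nominal HDPE mass fraction of 0.98 for composites containing 2 wt% phosphate; the enthalpy value in the example call is illustrative only.

```python
DH_100_HDPE = 290.0  # J/g, melting enthalpy of 100% crystalline HDPE

def crystallinity_degree(dh_melting, hdpe_mass_fraction=1.0):
    """Degree of crystallinity Xc (%) corrected for the actual HDPE content:
    Xc = dH_m / (dH_100% * w_HDPE) * 100."""
    return 100.0 * dh_melting / (DH_100_HDPE * hdpe_mass_fraction)

# Illustrative call for a composite with 2 wt% filler (98 wt% HDPE)
xc = crystallinity_degree(dh_melting=180.0, hdpe_mass_fraction=0.98)
print(f"Xc = {xc:.1f} %")
```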
Tensile measurements
The stress-strain test was performed using an Instron model 5569 universal testing system, according to ASTM D 638, with a 10 kN load cell and a testing velocity of 10 mm/min. The parameters assessed were the elastic modulus, the stress and elongation at break, and the stress and elongation at yield. The results were expressed as the mean of five test specimens.
Flowability
The effect of the zirconium phosphate nanoparticles on the HDPE melt flow rate (MFR) was analyzed using a Dynisco plastometer following ASTM D 1238, at 190 °C, with a 2.16 kg load and a melt time of 240 seconds.
Hydrogen low field nuclear magnetic resonance
Hydrogen NMR allows obtaining information on sample organization, heterogeneity, and particle dispersion. The relaxation data are important to understand the changes in the molecular structural organization and molecular dynamics of nanocomposites. Solid-state 1H NMR spectroscopy is sensitive enough to assess the different chain mobilities in polyethylene [21]. Semicrystalline polyethylene is composed of domains with widely different polymer chain mobilities. In the crystalline domain, the chains are highly ordered and can only be reoriented very slowly. In contrast, in noncrystalline domains the polymer chains have high mobility. Table 1 shows the T1H values and the respective % of domain for the samples. For all the materials, no notable displacement of peaks was observed. Figure 1 shows the domain curves of the HDPE and composites. The relaxation curve of the HDPE revealed two domains. At relaxation times shorter than 400,000 s, one of them appeared, attributed to the chain mobility in the amorphous phase, while a second peak was detected at higher relaxation times concerning the rigid region (amorphous chains constricted among HDPE lamellae and the crystalline phase), the latter normally being responsible for controlling the relaxation process. The absence of relaxation times related to the filler in the composite domain curves is an important indication that good dispersion and filler/polymer interaction have occurred, according to Tavares et al. [22] and Mendes et al. [23].
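In the simplest picture, the domain curves above can be rationalized as a superposition of two relaxation components (a mobile amorphous domain and a rigid constrained/crystalline domain). As a rough illustration of how relative domain fractions could be estimated from saturation-recovery data, the hedged sketch below fits a two-component T1 recovery; the model, parameter names, and starting guesses are assumptions for illustration and do not reproduce the inverse-transform domain analysis performed by the instrument software.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_domain_recovery(t, a1, t1_short, a2, t1_long):
    """Saturation-recovery magnetization built from two T1 components."""
    return (a1 * (1.0 - np.exp(-t / t1_short)) +
            a2 * (1.0 - np.exp(-t / t1_long)))

def fit_domains(delay_times, magnetization):
    """Fit the two-component model and return the fractional populations."""
    delay_times = np.asarray(delay_times, dtype=float)
    magnetization = np.asarray(magnetization, dtype=float)
    p0 = [0.5 * magnetization.max(), 0.1 * delay_times.max(),
          0.5 * magnetization.max(), 0.5 * delay_times.max()]
    popt, _ = curve_fit(two_domain_recovery, delay_times, magnetization,
                        p0=p0, maxfev=10000)
    a1, t1_short, a2, t1_long = popt
    total = a1 + a2
    return {"T1_short": t1_short, "fraction_short": a1 / total,
            "T1_long": t1_long, "fraction_long": a2 / total}
```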
Dynamic-mechanical test
The tanδ curves (Figure 2) and Table 2 showed that the addition of ZrP, modified or not, had no influence on the glass transition temperature (Tg) of the polymer. A decrease in the storage and loss moduli was also observed. Octadecylamine or stearylamine [CH3-(CH2)17-NH2] is a long-chain amine produced from the chemical reduction of stearic acid. Its chemical structure possesses a hydrocarbon chain with seventeen methylene groups. The HDPE polymer chain is constituted by the catenation of thousands of ethylene groups (two consecutive methylene groups bonded). The presence of methylene groups in the chemical structures of both should provide strong interaction between HDPE and octadecylamine. It is worth explaining the role of octadecylamine as a modifier of the zirconium phosphate (ZrP) structure. ZrP is a lamellar crystalline inorganic material that has poor interaction with polymers (organic materials). To improve the interaction of ZrP with polyolefins, its lamellar structure was organically modified with octadecylamine through an acid/base reaction. The presence of octadecylamine between the galleries of ZrP resulted in lamellae separation (the d-spacing of ZrP was increased) and enhanced its organophilic characteristics. Such chemical modification facilitates the entrance of the HDPE chains inside the ZrPOct galleries. Due to its chemical structural similarity with the HDPE chains, the octadecylamine acted as a plasticizer for the polyolefin, resulting in a decrease of the HDPE storage modulus (E').
WAXD
Figures 3, 4 and 5 show the diffractograms and Table 3 presents the diffraction angles and d-spacings of the materials. The HDPE diffraction angles occurred at 2θ = 22.21° and 2θ = 24.52°, as reported in the literature [24]. The diffraction angle equivalent to the basal spacing of the ZrP occurred at 2θ = 12°, with an interlamellar distance of 7.32 Å. In HDPE/ZrP, the diffraction angle of ZrP remained constant but the interlamellar spacing decreased (d = 7.25 Å), attributed to the loss of water in the interlamellar layer by Costantino et al. [6]. In the presence of HDPE, the diffraction angle of ZrPOct decreased from 4.1 to 3.9° while the interlamellar spacing increased from 21.55 to 23.88 Å. A slight shift of the HDPE diffraction planes to smaller angles was also observed (from 22.21 to 21.7° and 24.52 to 23.9°). The changes could be attributed to the intercalation of the HDPE along the ZrPOct interlamellar layers. It can be deduced that a partially intercalated and/or exfoliated nanocomposite was achieved.
Thermogravimetry
Table 4 shows the TGA data of the materials. The TONSET, TMAX and TFINAL of HDPE and HDPE/ZrP were very similar, indicating that a microcomposite was produced. It is widely reported that the increase in the thermal stability of polymeric nanocomposites is attributed to the layered filler: the lamellae, intercalated or exfoliated, act as a barrier to the release of volatile products during degradation [15,25]. All thermal properties were higher for HDPE/ZrPOct, indicating that a partially intercalated and/or exfoliated nanocomposite was produced.
Differential scanning calorimetry
The calorimetric data in Table 5 showed that the TC, TM and XC of HDPE and HDPE/ZrP were very close. The Tm and Xc of HDPE/ZrPOct attained slightly lower values. In particular, the HDPE degree of crystallinity was reduced as a function of the octadecylamine linked into the phosphate lamellae. As mentioned, the amine acted as a plasticizing agent for the HDPE chains and retarded the crystallization process. It is again evidenced that a partially intercalated and/or exfoliated nanocomposite was achieved.
Mechanical measurements
Mechanical properties are arranged in Table 6. The elastic modulus increased for HDPE/ZrP, and a decrease was observed for HDPE/ZrPOct. The decrease in the Young's modulus of the produced nanocomposite is probably also associated with the lower degree of crystallinity, as observed in the DSC analysis. A decrease in the stress and elongation at break was presented for HDPE/ZrP and HDPE/ZrPOct. Two types of material were produced as a function of the type of filler. In HDPE/ZrP, the ZrP acted as a reinforcement of HDPE, and it is deduced that a microcomposite was achieved [26]. (Footnote to Table 5: Tc = crystallization temperature; Tm = melting temperature in the 2nd heating; ΔHm = melting enthalpy; Xc = degree of crystallinity.)
For HDPE/ZrPOct, the octadecylamine inside the ZrPOct played a role as a plasticizing agent for HDPE. It was responsible for the decrease in modulus and also the slight increase in elongation at break compared with HDPE/ZrP. It is therefore valid to deduce that a partially intercalated and/or exfoliated nanocomposite was produced for HDPE/ZrPOct.
Melt flow rate
MFR values are presented in Table 7. The MFR showed a tendency to decrease for HDPE/ZrP and HDPE/ZrPOct. As happens in microcomposites, the fillers provide resistance to the flow of the HDPE melt [27]. The lowest value was found for HDPE/ZrPOct, due to the entrance of the HDPE chains inside the ZrPOct lamellae. This leads to the supposition that a partially intercalated and/or exfoliated nanocomposite was reached. The findings corroborate those from the WAXD and mechanical measurements.
Conclusions
Alpha-zirconium phosphate was synthesized and modified with octadecylamine in order to produce a nanocomposite based on HDPE. According to the conventional polymer characterization techniques (DSC, TGA, WAXD, MFR, tensile testing), HDPE/ZrP behaves as a microcomposite. On the contrary, the presence of octadecylamine as an intercalation agent in ZrP allowed the increase of its interlamellar spacing and facilitated the entrance of the HDPE chains along the filler galleries. This produced changes in the HDPE behavior. The decrease of the (001) diffraction angle, elastic modulus, and degree of crystallinity, together with the increase of the interlamellar spacing and thermal stability, leads to the conclusion that a partially intercalated and/or exfoliated nanocomposite was reached.
Figure 2. Tan Delta curves of the materials.
Table 1. NMR data of HDPE and composites.
Table 2. DMA data of HDPE and composites.
Table 3. 2θ values and interlayer distances of the materials.
Table 4. TGA data of the materials.
Table 5. Calorimetric properties of the materials.
Table 6. Mechanical properties of the materials.
Table 7. MFR of the materials.
Exploring the Bottom-Up Growth of Anisotropic Gold Nanoparticles from Substrate-Bound Seeds in Microfluidic Reactors
We developed an unconventional seed-mediated in situ synthetic method, whereby gold nanostars are formed directly on the internal walls of microfluidic reactors. The dense plasmonic substrate coatings were grown in microfluidic channels with different geometries to elucidate the impacts of flow rate and profile on reagent consumption, product morphology, and density. Nanostar growth was found to occur in the flow-limited regime and our results highlight the possibility of creating shape gradients or incorporating multiple morphologies in the same microreactor, which is challenging to achieve with traditional self-assembly. The plasmonic–microfluidic platforms developed herein have implications for a broad range of applications, including cell culture/sorting, catalysis, sensing, and drug/gene delivery.
Vapor-phase deposition of the trichloro(1H,1H,2H,2H-perfluorooctyl)silane onto the silicon master was performed as follows: 4 µL of the silane was placed in a reservoir in a vacuum desiccator with the masters (placed vertically on the side walls) and was placed under vacuum for 20 min using a benchtop vacuum pump. The substrates were rinsed with isopropyl alcohol and completely dried with compressed air. The masters could be re-used ~5 times before the silane coating had to be reapplied.
The channels were prepared using soft lithography by thoroughly mixing PDMS at a standard 10:1 base to curing agent ratio by weight. After mixing, the solution was degassed by centrifuge at 3500 rpm for 2 min. Then, the PDMS mixture was poured onto the silicon master, degassed a second time for ~10 min, and then cured for 1 h at 80 °C.
The PDMS-glass channels were fabricated by simultaneously exposing both the glass substrate and the PDMS channel to 30 s -1 min of air plasma (8 sccm, 100 W, Henniker Plasma HPT200). The channel and substrate were immediately placed in contact after plasma treatment, then placed in a 60 °C oven to facilitate binding for at least 1 h before use. Note that they can also be stored in the oven overnight. For ITO-coated glass, the substrates were functionalized with APTES before the plasma activation and binding process: the substrates were incubated in a 5% w/v APTES ethanolic solution at 60 °C for 5 min. The slides were then rinsed well with ethanol and dried with nitrogen. After the APTES functionalization, ITO and the PDMS channels were air plasma treated and bound in the same way as the PDMS-glass devices.
After the channels were successfully assembled, the devices (with inlet and outlet holes already punched) were placed in the air plasma cleaner for 1 min to increase the hydrophilicity of the internal channel walls. The channels were then pre-wetted with 200 proof ethanol at 250-500 µL/min for ~2 min. When herringbone channels were used, extra care was taken during this step to ensure that bubbles did not remain trapped in the three-dimensional (3D) features. Bubbles found within the channels were released by tapping the channel either by hand or with the back of a pair of tweezers while the wetting ethanol solution was flowing through the channel at 250-500 µL/min. Then, a 5% w/v APTES ethanolic solution (for facilitating substrate binding of the colloidal seeds) was introduced and allowed to flow into the channel for 1 min at 250-500 µL/min. Then, the channels were quickly placed in a 60 °C oven for 8 min with tubing attached and liquid still inside. (N.B., without the tubing the APTES solution will evaporate; if this appears to be happening, the tubing should be plugged with a syringe so that the solution is able to warm to 60 °C without evaporating.) Immediately afterwards, the channels were rinsed with ethanol at 250-500 µL/min for 5 min.
Microfluidic synthesis
Colloidal seeds were fabricated by a previously established protocol, 1 where aqueous NaBH4 (final concentration of 0.64 mM) was added rapidly to a solution containing 0.27 mM HAuCl4 and 100 mM cetyltrimethylammonium chloride (CTAC) under vigorous stirring. The seed solution was flowed through the channels at 50 µL/min for 60 min using a syringe pump (Harvard, Chemyx 400). Here, it is again important for the herringbone channels that bubbles are not trapped in the 3D features, otherwise the growth will not be uniform. The devices were then gently handrinsed with water using a syringe. (N.B., Other brands of syringe pumps were tested, and we observed that these alternatives would significantly heat up over a long period of use, negatively affecting the reproducibility of the synthesis. It is important to verify that the brand of pump used does not generate considerable heat during operation.) Lastly, the tubing at the inlet of the device is replaced following the seeding step with fresh/clean tubing to avoid growth within the tubing prior to introducing the growth solution to the channel.
Next, a 100 mM aqueous solution of ascorbic acid was prepared and quickly added (to a 5 mM final concentration) to a prepared solution containing gold salt (0.75 mM) and the shape-directing reagents HCl (10 mM), AgNO3 (0.15 mM), and laurylsulfobetaine (100 mM). The ascorbic acid solution and growth solution must be quickly mixed immediately after addition so that the gold precursor can be uniformly reduced from Au(III) to Au(I), which is indicated by the change in color from pale yellow to clear. Once the solution turns clear (within a few seconds), the solution was immediately flowed into the seed-functionalized channel at the selected flow rate for 3 min (Figure 1AIV, Supporting Information).
N.B.: The growth solution must be used immediately; otherwise, secondary nucleation of colloidal stars will occur and these products will introduce competition with the selective growth on the substrate, as reported in previous work (ref S1). The growth solution has been tailored such that significant secondary nucleation will not occur for at least 10 min.
Growth was continued for 3 min after which the pump was stopped, and the channels were immediately rinsed with MilliQ water by hand two times using a syringe to remove any residual growth solution. During the flow step, the start of the growth time is recorded when the growth solution reaches the capillary, which can take 15-30 s depending on tubing length. During flow, the solution coming from the outlet was checked by visual inspection to confirm that it is clear immediately upon exiting the device, rather than red/blue, which would indicate significant secondary nucleation in the solution due to insufficient rinsing of the seeds from of the channel or other contamination. The final products appeared dark blue in color.
Gold nanostar characterization
Two different scanning electron microscopes were used for characterizing the morphology and uniformity for the AuNSTs grown on glass, PDMS, and ITO: ZEISS Supra 40VP SEM, 3-10 kV (California NanoSystems Institute) and FEI QUANTA 200 Field Emission Gun (Institute of Materials Science of Barcelona). The PDMS channels were removed from the glass or ITO base using a razor blade, then the substrates were prepared for electron microscopy characterization.
With the first instrument (ZEISS Supra 40VP), the AuNSTs on glass and PDMS were coated with an Ir thin film (~3 nm) using an ion beam sputtering/etching system (South Bay Technology, Model IBS/e). The ITO substrates were imaged as-is without sputtering. Carbon tape was used to secure the sample to the sample holder along with copper tape to reduce charging. However, characterization for the non-conductive glass and PDMS was primarily performed with the second instrument (FEI QUANTA 200) due to its capability for environmental SEM, operating between 5 and 15 kV under low vacuum (60 Pa).
Binding of PDMS Channels to Indium Tin Oxide
The use of an ITO substrate was essential for fast electron microscopy characterization of the fabricated nanostructures, without requiring environmental SEM. Although the PDMS channels readily bind to air or oxygen-plasma-treated glass via a condensation reaction, we observed that PDMS could not be strongly bound to ITO with this same method. Therefore, in order to facilitate binding of the PDMS channel with ITO, we first functionalized the ITO-coated glass with APTES as an adhesion layer. The air plasma treatment of the APTES-functionalized ITO and the PDMS channel resulted in the activation of both surfaces so that strong binding and a liquid-tight seal could be achieved (Figure S1). The control experiments presented in Figure S1 show that devices begin to leak within 5 min after flowing solutions with different viscosities between 150 and 500 µL/min in channels that did not have both an APTES coating and air plasma treatment. In Figure S1C-E, the strength of the PDMS-ITO binding can be further appreciated, where the subsequent removal of the PDMS channel leads to removal of the ITO coating from the underlying glass substrate. This effect is observed when the complete binding protocol is followed.
In order to attach the gold seeds and perform in situ AuNST overgrowth, as described in the previous sections, the channels need to be re-functionalized with APTES. We show that the formation of dense AuNST layers on the ITO was achieved (see Main Text).
Figure S1. A, B: Digital photographs of polydimethylsiloxane (PDMS) channels bound to indium tin oxide (ITO) substrates under different conditions after flowing (A) water (with blue food coloring for contrast) and (B) ethanol (with green food coloring for contrast). Left to right: after (3-aminopropyl)triethoxysilane (APTES) functionalization and air plasma activation, after rinsing with isopropyl alcohol (IPA) and air plasma treatment, and after only the APTES functionalization. Each row represents a different rate of flow tested over a 5 min period. Absent photographs indicate conditions where device failure occurred within the first 30 s of testing. C-E: Digital photographs of the underlying ITO-coated substrate after removal of the channel assembled with different surface treatments.
Because PDMS is naturally hydrophobic, the assembled channels were exposed to air plasma treatment for 1 min to assist in wetting and the removal of trapped air bubbles, which could be especially problematic for the herringbone channels with 3D features (Figure S2). When the second air plasma treatment is not performed, it is common to observe growth defects like those shown in Figure S3.
Figure S3. Brightfield microscope image showing growth defects caused by bubbles trapped in the gap regions (yellow outlined regions).
Environmental Scanning Electron Microscopy Characterization
With the optimized fabrication method and growth solution recipe described in the previous section, environmental SEM revealed branched structures on the PDMS channel and the ITO and glass substrates. In herringbone channels, AuNSTs were present at all areas within the 3D features, namely the "small herringbone," "large herringbone," and "gap" features indicated in Figure S5.
Characterization of Samples Grown with a Fixed Growth Volume
In Figure S6, products grown using a fixed growth solution volume are shown.
Figure S6. A-C: Digital photographs (top) and scanning electron microscopy images (bottom) of gold nanostars grown in flow in the herringbone channels using the same total growth solution volume with different flow rates and growth times.
Figure S7. The difference in extinction at 400 nm between the inlet and the outlet regions of the channels grown at the conditions specified, showing the difference in amount of gold on the substrate at the distinct regions.
Figure S8. UV-visible spectroscopic characterization of channels grown at 125 µL/min for 3 min. A: Comparison of the spectra obtained at the inlet, center, and outlet of the channel, showing the gradient effect where the outlet has lower intensity. B: Repetitions of spectra obtained at the center of channels grown under the previously noted conditions.
Figure S9. A-C: Originally captured scanning electron microscopy images of the (A) inlet, (B) center, and (C) outlet products synthesized in featureless channels. D-F: The corresponding black and white images created in MATLAB: cropped images were converted to black and white, binarized, then the percentage of white pixels was measured.
Finite element analysis was used to simulate the velocity streamlines and flow velocities of contrast particles in a 3D computer aided design (CAD) model of the herringbone channel geometry. The simulations were performed using the Laminar Flow (stationary study) and Particle Tracing for Fluid Flow (time-dependent study) modules of the COMSOL Multiphysics® software package. The Laminar Flow module was used to obtain the velocity and pressure distribution of water inside the device with an initial velocity of 0.00833 m/s at the inlets, matching the 100 µL/min flow condition. Then, a time-dependent study with the particle tracing module was used to visualize the flow profile (particle size: 10 nm, density: 2200 kg/m3). Overall, 10,000 particles were released stepwise (0.1 s) over 0.5 s. Flow velocity streamlines showed vortex-like patterns within grooves of the herringbone channels (Figure S12A-C). The time lapse of the particle tracing shows that, laterally across the channel, the particle flows are directed away from certain herringbone "peaks" depending on the applied flow direction (Figure S12D, E).
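For reference, the image-analysis step described for Figure S9D-F (cropping, binarization, and measurement of the percentage of white pixels) can be reproduced with a few lines of code. The sketch below is a Python analogue of the MATLAB workflow mentioned above, using Otsu thresholding as an assumed binarization criterion; the file name, crop coordinates, and threshold choice are illustrative and do not come from the original scripts.

```python
import numpy as np
from skimage import io, color, filters

def white_pixel_fraction(image_path, crop=None):
    """Binarize a (cropped) SEM image and return the percentage of white pixels,
    which serves as a proxy for surface coverage by gold nanostructures."""
    img = io.imread(image_path)
    if img.ndim == 3:                        # convert color images to grayscale
        img = color.rgb2gray(img)
    if crop is not None:                     # crop = (row0, row1, col0, col1)
        r0, r1, c0, c1 = crop
        img = img[r0:r1, c0:c1]
    threshold = filters.threshold_otsu(img)  # assumed binarization criterion
    binary = img > threshold
    return 100.0 * np.count_nonzero(binary) / binary.size

# Illustrative usage (hypothetical file name and crop window)
# coverage = white_pixel_fraction("sem_inlet.tif", crop=(0, 800, 0, 1000))
```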
Additional Electron Microscopy Characterization of Gold Nanostars
Scanning electron microscopy images of the products obtained at different flow rates in the featureless and herringbone channels are shown in Figures S13 and S14, respectively. For featureless channels, nanostructure formation is observed up to 1 mL/min due to the tendency for device failure at flow rates higher than 1 mL/min. For herringbone channels, the upper limit is 250 µL/min because at this point the surfaces start to present nearly complete coverage with gold.

Figure S13. Scanning electron microscopy images of branched products on indium tin oxide substrates after flowing growth solution for 3 min at the indicated flow rate in devices with featureless channels. Each image in the row for each tested flow rate corresponds to a separate sample. Excessive growth into films is increasingly observed at 500 µL/min and 1 mL/min, and the device begins to fail at higher flow rates.

Figure S14. Scanning electron microscopy images of branched products on indium tin oxide substrates after flowing growth solution for 3 min at the indicated flow rate in herringbone (HB) channels. Each image in the row for each tested flow rate corresponds to a separate sample. For the HB channels, flow rates above 250 µL/min are not shown because this is the point at which the appearance of gold films dominates.

The reduction of gold precursor in the outlet solution is complete when the measured extinction at 400 nm remains constant, corresponding to the onset of interband transitions of metallic gold.
Based on our measurements, the gold solutions are sufficiently aged after 2 h for the performance of our spectroscopic estimation of gold atom concentration.

Figure S16. UV-visible spectra of the growth solution collected at the outlet, aged from 20-120 min, to evaluate the quantity of leftover gold following the microfluidic synthesis.
The dense coverage of AuNSTs on the surface was also observed at lower magnification. From previous work, the consistency of the coverage and density at larger scales is determined by the uniformity and yield of the APTES and seeding steps (Figures S17 and S18).

Figure S17. Additional low-magnification scanning electron microscopy images of gold nanostars synthesized in microfluidic devices with featureless channels.
Retinal degeneration in rpgra mutant zebrafish
Introduction: Pathogenic mutations in RPGR ORF15 , one of the two major human RPGR isoforms, are responsible for most X-linked retinitis pigmentosa cases. Previous studies have shown that RPGR plays a critical role in ciliary protein transport. However, the precise mechanisms of disease triggered by RPGR ORF15 mutations have yet to be clearly defined. There are two homologous genes in zebrafish, rpgra and rpgrb. Zebrafish rpgra has a single transcript homologous to human RPGR ORF15 ; rpgrb has two major transcripts, rpgrb ex1-17 and rpgrb ORF15 , similar to human RPGR ex1-19 and RPGR ORF15 , respectively. rpgrb knockdown in zebrafish resulted in both abnormal development and increased cell death in the dysplastic retina. However, the impact of knocking down rpgra in zebrafish remains undetermined. Here, we constructed a rpgra mutant zebrafish model to investigate the retinal defect and the related molecular mechanism. Methods: We utilized transcription activator-like effector nucleases (TALENs) to generate a rpgra mutant zebrafish. Western blotting was used to determine protein expression, and RT-PCR was used to quantify gene transcription levels. The visual function of embryonic zebrafish was assessed by electroretinography. Immunohistochemistry was used to observe pathological changes in the retina of mutant zebrafish, and transmission electron microscopy was employed to view the subcellular structure of photoreceptor cells. Results: A homozygous rpgra mutant zebrafish with the c.1675_1678delins21 mutation was successfully constructed. Despite normal morphological development of the retina at 5 days post-fertilization, visual dysfunction was observed in the mutant zebrafish. Further histological and immunofluorescence assays indicated that photoreceptors in the rpgra mutant zebrafish retina progressively began to degenerate at 3-6 months. Additionally, mislocalization of cone outer segment proteins (Opn1lw and Gnb3) and the accumulation of vacuole-like structures around the connecting cilium below the OSs were observed in mutant zebrafish. Furthermore, Rab8a, a key regulator of opsin-carrier vesicle trafficking, exhibited decreased expression and evident mislocalization in mutant zebrafish. Discussion: This study generated a novel rpgra mutant zebrafish model, which showed retinal degeneration. Our data suggested that Rpgra is necessary for the ciliary transport of cone-associated proteins, and further investigation is required to determine its function in rods. The rpgra mutant zebrafish constructed in this study may help us gain a better understanding of the molecular mechanism of retinal degeneration caused by RPGR ORF15 mutations and aid the search for effective treatments in the future.
Introduction
Hereditary retinitis pigmentosa (RP) is a condition leading to photoreceptor degeneration that affects approximately 1/3,000 to 1/7,000 people worldwide, commonly resulting in severe visual loss and, eventually, blindness (O'Neal and Luther, 2022; Wright et al., 2010). It is a genetically and clinically heterogeneous progressive disease of the retina (Sahel et al., 2014). X-linked RP (XLRP) is one of the most severe forms. Approximately 70%-90% of XLRP cases are caused by mutations in the RPGR gene, with RPGR mutations accounting for 10%-15% of all RP cases (Gill et al., 2019). RPGR mutations are also associated with other retinal dystrophies, such as cone-rod dystrophy and atrophic macular degeneration, indicating that RPGR is crucial for the maintenance of retinal stability (Ayyagari et al., 2002; Wang et al., 2021).
Various animal models have been utilized to study the function of RPGR. Two dog models (XLPRA1 and XLPRA2) with different mutations in exon ORF15 of the RPGR gene were described in 2002 (Zhang et al., 2002). The XLPRA1 mutant dog (a five-base deletion resulting in a frameshift and an immediate premature stop) had normal retinal function until 6 months, followed by a retinal degeneration that first involved rods. The dog with the XLPRA2 mutation, which caused a long frameshift with 34 additional basic residues, had a more severe degeneration with abnormal retinal development (Beltran et al., 2006, 2012). To date, several mouse models have been characterized, including one carrying a deletion of exons four to six in the Rpgr gene. It demonstrated slow degeneration with initial opsin mislocalization followed by a decreased rhodopsin protein level (Hong et al., 2000). Another mouse model with a 5-bp deletion in exon 8 also displayed slow but progressive age-related retinal degeneration (Hu et al., 2020), while Rpgr exon 1 conditional knockout mice demonstrated faster retinal degeneration compared to Rpgr-KO mice (Huang et al., 2012). A naturally occurring 32-bp deletion in Rpgr ORF15 in rd9 mice caused much slower degeneration with features resembling XLRP caused by mutations in RPGR exon ORF15 (Falasconi et al., 2019).
The zebrafish model has now become a valuable instrument in the investigation of human eye diseases (Lieschke and Currie, 2007; Raghupathy et al., 2013). Two zebrafish genes, rpgra and rpgrb, have been identified as homologous to human RPGR. rpgra is located on chromosome 9 and contains 13 exons that encode a protein of 1,698 amino acids. rpgrb is located on chromosome 11 and has two transcripts: one is rpgrb ORF15 , consisting of 14 exons encoding 1,413 amino acids, and the other is rpgrb ex1-17 , encoding 708 amino acids with 17 exons (Shu et al., 2010). Comparing the genes up- and downstream of rpgr among zebrafish, Fugu, Xenopus, lizard, chicken, and humans, rpgra shares syntenic genes with mammals, while rpgrb shows the same syntenic relationships as Fugu. Bioinformatic alignments revealed that Rpgra and Rpgrb ORF15 are homologous to human RPGR ORF15 , of which Rpgra displays greater amino acid identity with human RPGR in the ORF15 domain, and Rpgrb ORF15 displays greater identity in the RCCL domain (Shu et al., 2010). Previous studies showed that knockdown of rpgrb ORF15 in zebrafish resulted in a reduced length of Kupffer's vesicle (KV) cilia and was associated with ciliary anomalies, including a shortened body axis, kinked tail, hydrocephaly, and edema, but did not affect retinal development (Ghosh et al., 2010). Moreover, the simultaneous knockdown of rpgrb ORF15 and rpgrb ex1-17 expression led to developmental defects, affecting gastrulation, tail, head, and eye development. Developmental abnormalities in the eye included lamination defects, failure to develop photoreceptor outer segments, and a small-eye phenotype, associated with increased cell death throughout the retina, while no significant defect was detected upon inhibition of rpgra expression (Shu et al., 2010). However, Gerner et al. found that morpholino knockdown of rpgra ORF15 caused developmental defects including abnormal body curvature, cerebral abnormalities, underdeveloped eyes, and pronephric cysts (Gerner et al., 2010). To ascertain the function of rpgra in the zebrafish eye, investigate the pathogenic process, and obtain a better understanding of the molecular mechanisms of disease caused by RPGR ORF15 mutations, a stable defect model is urgently needed.
In our study, we constructed a rpgra mutant zebrafish model using transcription activator-like effector nuclease (TALEN) technology. In the mutant line, we found that early retinal function was affected; the length of the photoreceptor outer segment (OS) and the thickness of the outer nuclear layer (ONL) decreased progressively with age. Furthermore, mislocalization of the red opsin protein (Opn1lw), Gnb3, and Rab8a was observed, along with the accumulation of abnormal vacuole-like structures in photoreceptors. These observations indicate a progressive retinal degeneration in rpgra mutant zebrafish and highlight the crucial role played by Rpgra in opsin protein transport, thereby enhancing our understanding of the function of Rpgra in the zebrafish retina and of RPGR ORF15 mutant disease pathogenesis.

Materials and methods
Zebrafish maintenance
The AB strain of zebrafish was kept in a recirculating water system at 26°C-28.5°C under a 14-h light/10-h dark cycle. Embryos were kept in an E3 medium at 30°C. They were fed three times a day with fresh paramecia or brine shrimp. Control wild-type lines in this paper were derived from wild-type immature zygotic embryo culture.
TALEN construction and microinjection
The gene sequence information for zebrafish rpgra (ENSDART00000079095.4) was acquired from Ensembl (http://asia.ensembl.org/index.html) (Cunningham et al., 2022). We used the online tool TAL Effector Nucleotide Targeter 2.0 (https://tale-nt.cac.cornell.edu/) (Doyle et al., 2012) to design TALENs targeting exon 13 of rpgra; the left target sequence was 5′ AACAGAATCTCAATCATCAA 3′ and the right was 5′ GCATCTCCAGGCTGGGCTT 3′. The TALEN plasmids were assembled using the Golden Gate TALEN kit according to the operating manual (Cermak et al., 2011); the 86 library vectors in this kit were available from Addgene. TALEN mRNAs were transcribed in vitro and purified using the T3 mMessage mMachine Kit (Ambion, Austin, TX, United States). The left and right TALEN mRNAs were mixed at a ratio of 1:1 to a final concentration of 100 ng/μL for each arm, and the mixture was then microinjected into the yolk of one-cell stage wild-type zebrafish embryos.
rpgra mutant zebrafish screening
Two days after injection, 10 embryos were collected from each dish of injected eggs, and genomic DNA was extracted. A 496-bp DNA fragment containing the rpgra target site was amplified by PCR using the following primers: forward, ACGTATTTCAGCAGGCTCTG; reverse, GGAGATTGGACCTCTTGAGTG. The efficiency of the TALEN-mediated mutagenesis was determined by restriction enzyme NsiI digestion analysis (Figure 1A). The remaining embryos were raised to sexual maturity (3 months old) and outcrossed with wild-type zebrafish to obtain first-generation heterozygous mutant zebrafish. The F1 zebrafish genotype was also determined by PCR and sequencing; zebrafish with the same genotype were then crossed to obtain the second generation, which contained homozygous mutant zebrafish.
Electroretinography (ERG)
Protocols for the zebrafish larvae ERG recordings were described previously (Fleisch et al., 2008; Han et al., 2018; Lu et al., 2017). In brief, after 30 min of dark adaptation, zebrafish larvae at 5 dpf were paralyzed with Esmeron (0.8 mg/mL in E3 medium; MedChem Express). The larvae were then placed on a wet filter paper over the reference electrode, and the recording electrode was placed on the center of the cornea. ERGs were recorded after 5 min of complete dark adaptation. A single 1-2 s stimulus with 6,000 lux illuminance was used to generate a typical ERG trace. All traces were collected within 20 min, and the average of the top five b-wave amplitudes was regarded as the larva's b-wave amplitude.
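A trivial Python sketch of the b-wave averaging rule just described (the amplitude values are hypothetical):

```python
import numpy as np

# Hypothetical b-wave amplitudes (µV) from traces collected within 20 min
amps = np.array([180, 210, 195, 230, 205, 188, 221, 199])

# The larva's b-wave amplitude = mean of the top five amplitudes
b_wave = np.sort(amps)[-5:].mean()
print(f"b-wave amplitude: {b_wave:.1f} µV")
```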
Histologic analysis
Zebrafish eyes were isolated and fixed with 4% paraformaldehyde (PFA) for 8-12 h at 4°C. The 4% PFA was then removed, the eyes were rinsed in 1x PBS and soaked in 30% sucrose dissolved in 1x PBS at room temperature until they sank to the bottom of the tube, and the eyes were embedded in OCT compound (SAKURA Tissue-Tek ® OCT compound, United States). Embedded tissues were sliced along the vertical meridian of each eyeball into 12-µm-thick sections. Sections containing the whole retina were stained with hematoxylin and eosin (Beyotime, C0105S, China). For each section, digitized images of the retina were captured using an Olympus BX53 microscope. At least eight eyes from each genotype group were included in this analysis.
Immunofluorescence
For immunofluorescence staining, cryosections were rinsed with PDT (PBS solution containing 1% DMSO and 0.1% Triton X-100) for 10 min and blocked with blocking solution (PDT containing 1% BSA and 10% normal goat serum) for 1 h at RT. Primary antibodies (Supplementary Table S1) were prepared in a blocking solution containing 2% normal goat serum, and the slides were incubated overnight at 4°C. The slides were washed three times with PDT and incubated with Alexa Fluor 488 nm or 594 nm secondary antibodies (1:1,000; Molecular Probes ® ) for 1 h at 37°C. DAPI was diluted with PBS to a final concentration of 5 µg/mL and used to label the nuclei. The slides were washed three times with PBS and then mounted under glass coverslips. Fluorescence images were captured using a confocal laser-scanning microscope (FluoView™ FV1000, Olympus Imaging).
Image analysis
For the analysis, we designated a reference region in the dorsal retina located 100 μm-200 μm away from the optic nerve (Figure 3A). This reference region was subsequently utilized for comparative analysis in various contexts. The quantification of the outer nuclear layer (ONL) thickness, photoreceptor layer (outer retina) thickness, and outer segment length in Figures 3, 4 and Supplementary Figure S4 was performed by averaging measurements in our reference region from three sections chosen from each retina (eight retinas from four individual fish per group). All chosen sections had a visible optic nerve. The average thickness or length was assessed using the measurement tool in Photoshop; eight points from the reference region were measured and the results averaged. The thickness of the ONL was measured between the outer plexiform layer and the photoreceptor inner segment. The photoreceptor layer thickness was measured from the outer plexiform layer to the inner surface of the RPE layer. In addition, the lengths of the cone or rod outer segments depicted in Figure 4 were measured and averaged from 10 random photoreceptors. The cones within the reference region were manually counted based on the staining signals of a specific cone opsin.
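A minimal Python sketch of the nested averaging scheme just described (eight points per section, three sections per retina, eight retinas; the measurements are randomly generated stand-ins, not the study data):

```python
import numpy as np

# Hypothetical ONL thickness (µm): shape = (retinas, sections, points)
# 8 retinas x 3 sections x 8 measurement points in the reference region
rng = np.random.default_rng(0)
onl = rng.normal(loc=50.0, scale=2.0, size=(8, 3, 8))

per_section = onl.mean(axis=2)          # average the 8 points within each section
per_retina = per_section.mean(axis=1)   # average the 3 sections per retina
group_mean, group_sd = per_retina.mean(), per_retina.std(ddof=1)
print(f"ONL thickness: {group_mean:.1f} ± {group_sd:.1f} µm (mean ± SD)")
```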
TUNEL staining
TUNEL staining was performed using the TUNEL BrightRed Apoptosis Detection Kit (Vazyme Biotech) according to the manufacturer's instructions. Briefly, cryosections were air-dried at RT and then fixed with 4% paraformaldehyde in PBS for 30 min. The slides were washed twice with PBS for 15 min and incubated with the proteinase K buffer for 10 min. After that, the slides were washed 2-3 times with PBS and incubated with the equilibration buffer for 10-30 min. Then, the retinal sections were incubated in TdT buffer at 4°C overnight. The next day, following DAPI labeling, the slides were mounted under glass coverslips.
Transmission electron microscopy
Zebrafish eyes were isolated and left in fixative (2.5% glutaraldehyde in 0.1 M PBS buffer, pH 7.4) overnight at 4°C. After fixation, the eyes were sent to Servicebio, which carried out most of the subsequent processing, briefly described as follows. After three washes with PBS, the eyes were further fixed in 1% osmium tetroxide for 2 h at room temperature (RT), dehydrated through an ethanol gradient, treated with propylene oxide, and embedded in an epoxy medium. Embedded eyes were sliced into ultrathin sections (100 nm) using a Reichert-Jung ultramicrotome (Leica). Sections were stained with 3% uranyl acetate and 3% lead citrate for 15 min and visualized with a transmission electron microscope system (HT7700, Hitachi).
RT-PCR
The total RNA of zebrafish was extracted using TRIzol (Takara) and quantified by NanoDrop spectrometry (Thermo Scientific, Wilmington, DE). cDNA was generated with HiScript Q RT SuperMix (Vazyme). Real-time PCR was performed using AceQ ® qPCR SYBR ® Green Master Mix (Vazyme) according to the manufacturer's instructions, and relative gene expression was quantified using the StepOnePlus™ Real-Time PCR System (Life Technologies). Gene primers are listed in Supplementary Table S2.
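The paper does not state its exact quantification formula; a common approach consistent with this workflow is the 2^-ΔΔCt method, sketched below in Python with hypothetical reference-gene and Ct values (all numbers are illustrative, not the study's):

```python
import numpy as np

def ddct_fold_change(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    """Relative expression by the 2^-ΔΔCt method (assumed; see lead-in).
    Each argument is a list of replicate threshold-cycle (Ct) values."""
    dct_mut = np.mean(ct_target_mut) - np.mean(ct_ref_mut)   # ΔCt in mutant
    dct_wt = np.mean(ct_target_wt) - np.mean(ct_ref_wt)      # ΔCt in wild type
    return 2.0 ** -(dct_mut - dct_wt)                        # fold change vs. wild type

# Hypothetical Ct values for rpgra normalized to a reference gene
fc = ddct_fold_change([26.1, 26.3, 26.0], [18.2, 18.1, 18.3],
                      [25.0, 25.1, 24.9], [18.2, 18.0, 18.1])
print(f"rpgra fold change vs. WT: {fc:.2f}")   # ~0.5 would match the reported ~50% decrease
```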
Western blot
Zebrafish eyes were isolated and homogenized in a cold RIPA lysis buffer with a protease inhibitor cocktail. Protein concentration was determined using the BCA protein assay kit (Beyotime, China). Proteins were separated on SDS-PAGE and transferred to nitrocellulose membranes. The membranes were blocked for 2 h at room temperature (RT) in 5% skimmed milk dissolved in TBST buffer, and then incubated with the dilution solution of primary antibodies (Supplementary Table S1) overnight at 4°C with gentle agitation. After washing in TBST buffer (20 mM Tris-HCl, 150 mM NaCl, 0.05% Tween 20, and pH 7.6), the membranes were incubated with HRP-conjugated secondary antibodies (1:20,000; Thermo) for 2 h at RT. The membranes were then developed using SuperSignal ® ELISA Femto Maximum Sensitivity Substrate (Thermo) and ChemiDoc XRS + imaging system (Bio-Rad laboratories). Quantitative analysis of protein bands was performed by the Quantity One 4.62 software.
Statistical analysis
All the experiments were independently repeated at least three times. All data are presented as mean ± SD. Statistical analyses were performed with a two-tailed Student's t-test by GraphPad Prism 6.0 Software. Differences between groups were considered statistically significant if p < 0.05. The statistical significance is denoted by asterisks (*, p < 0.05; **, p < 0.01; ***, p < 0.001).
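As a minimal illustration of the two-tailed Student's t-test and asterisk convention described above (the paper's analyses were run in GraphPad Prism; SciPy is assumed here, and the sample measurements are hypothetical):

```python
from scipy import stats

def compare_groups(wt, mut):
    """Two-tailed independent-samples Student's t-test with the paper's asterisk convention."""
    t, p = stats.ttest_ind(wt, mut)            # two-sided by default
    stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    return t, p, stars

# Hypothetical ONL thickness measurements (µm) for WT vs. rpgra-/- retinas
t, p, stars = compare_groups([52.1, 50.8, 53.0, 51.5], [44.2, 45.9, 43.7, 46.1])
print(f"t = {t:.2f}, p = {p:.4f} ({stars})")
```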
Generation of rpgra mutant zebrafish using TALENs
Zebrafish rpgra genomic sequence (ENSDART00000079095.4) was downloaded from the Ensembl database. The target sites were designed with online tools (https://tale-nt.cac.cornell.edu/). In the middle of the TALEN binding sites there is a 17-bp spacer containing an NsiI restriction enzyme cut site for mutant screening (Figure 1A). Through several rounds of crossing and mutation screening, we obtained a homozygous rpgra mutant zebrafish line carrying a deletion-insertion mutation (c.1675_1678delinsTAAGATTGCTTGATGATTGAG) (Figure 1B), which results in a truncated Rpgra protein, p.Met559*. The protein structures of wild-type Rpgra and Rpgra p.Met559* are displayed in Supplementary Figure S1. Real-time PCR analysis showed that rpgra mRNA expression was decreased by 50% in homozygous mutant zebrafish eyes at 2 mpf (months post-fertilization) and 6 mpf (Figure 1C). Considering the existence of rpgrb, the paralogous gene of rpgra, we amplified the CDS sequences of the two rpgrb transcripts using cDNA from mutant zebrafish eyes as a template; sequencing verified that the CDS sequences of the two rpgrb transcripts were not mutated (results not shown). Western blotting of zebrafish eye lysates with an anti-RPGR antibody (Shu et al., 2010) showed that the Rpgra protein was markedly decreased in the mutant line (Figure 1D); moreover, the levels of the two Rpgrb isoform proteins did not change. These results indicated that the rpgra mutation is effective. In the remainder of this study, the homozygous zebrafish are referred to as rpgra −/− .

rpgra −/− zebrafish showed a diminished light response in early development

As reported previously, the loss of RPGR function in human and mouse models causes retinal degeneration with mislocalization of rod and cone opsins, reduced ERG function at early ages, and progressive loss of photoreceptor cells with aging (Huang et al., 2012; Thompson et al., 2012). In zebrafish, knockdown of rpgrb and rpgra produced different phenotypes. To confirm the role of rpgra in the zebrafish retina, we carried out electroretinography (ERG) measurements to check the visual function of mutant zebrafish.
FIGURE 4
Photoreceptor outer segments are affected in rpgra −/− zebrafish. Retinal cryosections from WT and rpgra −/− zebrafish were labeled for rods (A), red cones (B), green cones (C), blue cones (D), and UV cones (E) with specific antibodies at the ages of 1, 3, and 6 months. The yellow lines indicate the thickness of the outer segment layer of the photoreceptors; the white arrows indicate the mistrafficked Opn1lw1 protein. RPE, retinal pigment epithelium; OS, outer segment; IS, inner segment; ONL, outer nuclear layer; INL, inner nuclear layer. Scale bars, 50 μm. The statistical data are presented in (F) for rods and (G-J) for cones. At least three images from three eyes of each group were quantified and analyzed using a two-tailed Student's t-test. The results are shown as mean ± SD. **, p < 0.01; ***, p < 0.001.
The scotopic b-wave amplitudes of rpgra −/− zebrafish were significantly decreased compared to wild-type controls at 5 dpf (days post-fertilization) (Figures 2A-C), suggesting that Rpgra deficiency may impact early-age visual function. We then conducted a histological analysis using hematoxylin and eosin (H&E) staining of 5 dpf zebrafish retina cryosections. Compared with wild-type controls, the retinal lamination displayed no evident difference in heterozygous and homozygous mutant zebrafish (Figures 2D-F, D′-F′). Further, we labeled the outer segments of rods and all four types of cones using antibodies against their respective opsins (Rhodopsin, Opn1lw1, Opn1mw1, Opn1sw2, and Opn1sw1) to identify the rod and cone photoreceptors more specifically. The rpgra −/− zebrafish showed no abnormalities in the morphological development of photoreceptor cells compared with the wild type at 5 dpf (Supplementary Figure S2), suggesting that Rpgra deficiency did not affect the development of the zebrafish retinal tissue structure.
rpgra −/− zebrafish showed progressive retinal degeneration
To further investigate whether the rpgra mutation affects the adult zebrafish retina, H&E staining was performed on retinal sections obtained from both wild-type and rpgra −/− zebrafish at various time points ranging from 1 to 18 months post-fertilization (mpf) (Figure 3). Data were collected from the middle segment of the dorsal retina (Figure 3A). The results indicated a significant reduction in the thickness of the outer nuclear layer (ONL) in rpgra −/− zebrafish compared to wild-type controls after 5 mpf (Figures 3B, C), and the outer segments in the mutant zebrafish retina became disordered with age (Figure 3B, lower panel). In addition, we used TdT-mediated dUTP nick-end labeling (TUNEL) staining on retinal cryosections to investigate the extent of apoptosis. Cell death signals were detected in the rpgra −/− zebrafish retinas but hardly any in the controls (Supplementary Figure S3). These findings indicated the presence of retinal degeneration in rpgra −/− zebrafish. Meanwhile, H&E staining performed on retinal sections of 8-month-old rpgra +/− zebrafish revealed no significant alterations in the retinal structure (Supplementary Figure S4).
The outer segments in the mutant zebrafish retina tended to be shorter than those in the wild type (Figure 3B). To confirm this observation and to distinguish which photoreceptor cell types were affected, we extended our observations by immunofluorescence analysis of retinal cryosections, using specific antibodies (rhodopsin, opn1lw, opn1mw, opn1sw2, and opn1sw1) to label the outer segments of the rods and the four types of cones (red, green, blue, and UV). Data collection regions were the same as above (Figure 3A). Comparing the changes in outer segment length for each photoreceptor cell type, we found that the outer segments of rod cells in rpgra −/− zebrafish became significantly shorter at 3 mpf (Figures 4A-F); for the cone cells, the outer segments of the red and blue cones became shorter at 6 mpf, while the green and UV cones showed no significant change up to 6 mpf (Figures 4B-E, G-J). Similarly, we used specific antibodies to label the rod and red cone outer segments of rpgra +/− zebrafish at 8 mpf and found no significant changes in their lengths (Supplementary Figures S4B, D-F).
Furthermore, we conducted cone counts at 1, 3, and 6 mpf and observed no significant decrease in cone numbers prior to the age of 6 months (Supplementary Figure S5). We hypothesized that the reduction in the thickness of the outer nuclear layer at 6 mpf may be attributed to a decline in rod density. To confirm the abnormality of the rods, we first examined the expression of phototransduction proteins by quantitative PCR. The results showed that the expression levels of phototransduction genes in the rpgra −/− zebrafish retina were significantly downregulated at 6 mpf compared with the wild type (Supplementary Figure S6), and the protein levels of representative rod-specific genes (Gnat1, Grk1, and Rhodopsin) were significantly decreased in the rpgra −/− retina, whereas the protein levels of cone-specific genes (Gnb3 and Gnat2) did not change significantly (Supplementary Figure S6). Taken together, our observations demonstrated that the absence of Rpgra led to progressive retinal degeneration affecting both rods and cones, with rods affected first.
Abnormal ciliary trafficking in rpgra −/− zebrafish retinas
During the above experiments, we observed mislocalization of red opsin in the inner segment, perinuclear space, and outer plexiform layer of 6 mpf rpgra −/− photoreceptors (Figure 4B). To investigate ciliary trafficking of phototransduction components in the rpgra −/− zebrafish retina, we examined the G-protein beta subunit (Gnb3), which localizes to the outer segments of cone photoreceptors, as a marker (Nikonov et al., 2013). Similar mislocalization was observed in the rpgra −/− zebrafish retina (Figure 5A). As reported previously, newly formed disk membranes at the base of the photoreceptor outer segments were notably disorganized, while the structure of the connecting cilia appeared well maintained in Rpgr-KO mice (Hong et al., 2000). To explore the ultrastructural alterations of the photoreceptors in rpgra −/− zebrafish, we performed a transmission electron microscopy assay. Compared with wild-type controls, the disk membranes of photoreceptor outer segments exhibited significant disorganization and loose stacking in 6-month-old rpgra −/− zebrafish (Figures 5B′, C′). Furthermore, some vesicle-like structures were observed to accumulate around the connecting cilium below the OSs (Figure 5E′).
The small GTPase RAB8A plays a direct role in the trafficking of opsin-carrier vesicles, and defects in RAB8A can result in the accumulation of vesicles within photoreceptors (Moritz et al., 2001; Deretic, 2012, 2014). Additionally, RPGR ORF15 interacts with RAB8A to modulate its intracellular localization and function. To investigate whether the expression of Rab8a was affected in rpgra-deficient zebrafish, we examined the Rab8a protein level and its localization using a Rab8 antibody. The results showed that Rab8a was mislocalized throughout the cell body in the rpgra −/− retina, whereas it was localized to the base of the outer segment in controls (Figure 5F). Subsequently, Western blot results indicated a significant decrease in Rab8a protein levels (Figures 5G, H). Abnormal expression of Rab8a might be one of the reasons for the vesicle accumulation and impaired ciliary transport in photoreceptor cells of rpgra −/− zebrafish.
Discussion

Mutations in the RPGR gene are associated with X-linked retinitis pigmentosa. Approximately 75% of all XLRP cases are caused by mutations in RPGR ORF15, which is a highly repetitive and purine-rich region and is thus considered a hotspot for mutations (Iannaccone et al., 2004; Vervoort et al., 2000). Most ORF15 mutations cause truncation of the C-terminal domain of the RPGR protein, and these mutations are associated with slightly milder disease than mutations in the N-terminal RCC1-like domain (Sharon et al., 2003). Mutations in the RCC1-like domain may impact the interaction of RPGR with other proteins, leading to aberrant cellular function (Megaw et al., 2015). However, the function of the RPGR exon ORF15 repeat domain remains unclear. The phenotypes resulting from RPGR mutations display heterogeneity across diverse genetic backgrounds (Huang et al., 2012), necessitating the use of multiple animal models to more comprehensively elucidate the function of ORF15.
In this study, we generated a new rpgra mutant model to provide a new perspective for research into the function of RPGR ORF15 . The mutant zebrafish exhibited visual impairment at 5 dpf, while the retinal structure remained normal until 5 mpf; the length of the rod outer segments was reduced at 3 mpf, accompanied by a slight downregulation of the rhodopsin protein level (data not shown). The thickness of the outer nuclear layer in the mutant zebrafish retina progressively decreased, accompanied by shortening and disarrangement of the rod outer segments, as well as a reduction in the length of the cone outer segments at 6 mpf. However, the number of cones remained unchanged compared with the wild type, suggesting that the reduction of rods may be the primary cause of the thinning of the outer nuclear layer. Increased apoptotic signaling served as an additional indicator of cellular degeneration in the mutant retina (Chang et al., 1993; Li et al., 2014). These alterations bear a striking resemblance to certain zebrafish RP models (Liu et al., 2015; Noel et al., 2020; Yu et al., 2017). However, the retinal phenotype of rpgra mutant zebrafish is relatively mild compared to that of some RPGR ORF15 patients (Huang et al., 2012).
In zebrafish, there are two homologous genes, rpgra and rpgrb, both of which are highly expressed in the embryonic retina, brain, and neural tube and are expressed at the connecting cilia of photoreceptor cells in adult zebrafish. Bioinformatic alignments revealed that rpgra is homologous to the mammalian RPGR ORF15 ; in contrast, rpgrb has two transcripts that are homologous to RPGR ORF15 and RPGR ex1-19 , respectively (Shu et al., 2010). Thus, we suspected that the presence of rpgrb ORF15 might modulate the severity of the rpgra mutant zebrafish phenotype. However, protein assays showed that Rpgra was the most highly expressed in adult zebrafish eyes, followed by Rpgrb ex1-17 , with Rpgrb ORF15 the lowest (Figure 1D). Previous studies showed that knockdown of rpgrb ORF15 expression in zebrafish resulted in a reduced length of Kupffer's vesicle (KV) cilia and was associated with ciliary anomalies, including a shortened body axis, kinked tail, hydrocephaly, and edema, but did not affect retinal development (Ghosh et al., 2010). In contrast, no significant defect was detected upon inhibition of rpgra expression (Shu et al., 2010). This suggests that rpgrb ORF15 plays a more crucial role in the early developmental stages of zebrafish, while rpgra is dominant in the eyes of adult zebrafish. In future work, targeted disruption of rpgrb ORF15 can be performed and combined with the rpgra mutant to investigate the distinct and complementary roles of rpgr ORF15 in zebrafish.
Of the rd9 mouse and XLPRA1 dog models, the phenotype of rpgra mutant zebrafish more closely resembles that of rd9 mice, including the presence of retinal pathology and reduced ERG function at early ages. Our analysis of disease progression in rpgra mutant zebrafish showed that ERG b-wave amplitudes were reduced as early as 5 dpf, consistent with the temporal expression of rpgra. Unfortunately, the ERG test we used was only applicable to zebrafish larvae, so we could not monitor the subsequent progression of visual loss. A further study of changes in the inner retina of rd9 mice showed that, although cones and rods were intact, slow or absent renewal of outer segments may act at the synaptic level, worsening the transfer of information from photoreceptors to inner retinal neurons. It demonstrated that alterations in retinal physiology can be detected before any major morphological change other than rod loss (Falasconi et al., 2019).
In addition to detecting abnormalities of the outer nuclear layer and outer segments, we also found that the red cone opsin (Opn1lw1) and the G-protein beta subunit (Gnb3) were mislocalized to the IS and INL in the rpgra −/− zebrafish retina at 6 months, while Rhodopsin did not appear to be mislocalized (Figures 4B, 5A). This is similar to the phenotype of rd9 mice (Thompson et al., 2012; Zhang et al., 2019), suggesting that zebrafish rpgra may be functionally homologous to mouse Rpgr ORF15 , and that the rpgra mutant zebrafish model constructed in this study is suitable for functional and pathological studies of RPGR ORF15 .
Some other retinal degeneration animal models, such as CC2D2A, RP2, and EYS models, are characterized by opsin mislocalization and subsequent photoreceptor cell loss, and these genes are associated with ciliary transport (Liu et al., 2015; Moritz et al., 2001; Renault et al., 2001). The specific function of RPGR in photoreceptor ciliary transport has been established (Bachmann-Gagescu et al., 2011; Deretic et al., 1995; Moritz et al., 2001; Renault et al., 2001). RPGR ORF15 is localized to the connecting cilium of the photoreceptors and may be involved as cargo in protein transport processes (Khanna et al., 2005). Given this, we used transmission electron microscopy to observe changes in the subcellular structure of photoreceptor membrane discs and connecting cilia; the results showed that the membrane discs of rpgra −/− photoreceptor cells were loosely disorganized, and some vesicle-like structures accumulated around the connecting cilium below the OSs, resembling the phenotype of cc2d2a and whirlin defect models (Bachmann-Gagescu et al., 2011; Yang et al., 2010). Combined with the observation that the internal structure of the connecting cilia is unaffected in Rpgr-KO mouse photoreceptor cells (Hong et al., 2000), we consider that rpgra deletion mainly affects the protein transport process at the connecting cilium.
RPGR is a GTPase regulator whose domain encoded by exons 1-11 is homologous to regulator of chromosome condensation 1 (RCC1), a small GTPase guanine nucleotide exchange factor (GEF). GEFs catalyze the conversion of inactive GDP-bound GTPases to the active GTP-bound form (Renault et al., 2001). Experiments applying different N-terminal regions of RPGR revealed that RPGR preferentially interacts with the GDP-bound form of the GTPase RAB8A and catalyzes the conversion of RAB8A-GDP to RAB8A-GTP. Knockdown of RPGR expression in hTERT-RPE1 cells resulted in reduced retention of RAB8A at the cilia and shortened cilia length (Murga-Zamalloa et al., 2010). In the present study, a significant downregulation and mislocalization of the Rab8a protein was shown in rpgra −/− zebrafish eyes (Figures 5F-H). RAB8A is a major participant in rhodopsin-bearing vesicle trafficking and plays a critical role in the delivery of rhodopsin-containing post-Golgi vesicles to the base of the connecting cilium (Deretic et al., 1995; Moritz et al., 2001). Furthermore, the MICAL3-NINL-CC2D2A complex interacting with RAB8A is required for correct opsin-carrier-vesicle fusion at the periciliary membrane (Bachmann-Gagescu et al., 2015; Ojeda et al., 2017). However, there was no mislocalization of rhodopsin in rpgra −/− zebrafish eyes (Figure 4A). Based on these observations, we hypothesize that in the zebrafish retina, Rpgra or Rab8a is involved in cone and rod protein transport through different mechanisms. Further investigation is needed to determine whether the protein specificity of Rab8a-related vesicle trafficking is directly regulated by Rpgra. In addition, we found a large accumulation of lipid droplets in the retinal pigment epithelium (RPE) layer of the rpgra −/− zebrafish retina (Figure 5C). Although there have been no reports of lipid accumulation in the RPE in animal models or patients with RPGR mutations, macular degeneration has been observed in some RPGR ORF15 patients, and altered RPE integrity has also been noted in rd9 mice (Charng et al., 2016; Falasconi et al., 2019). These findings suggest that RPGR may have an important role in the RPE. More importantly, lipid droplets were already present in zebrafish RPE at 3 months, which is consistent with the timing of rod outer segment degeneration (data not shown). Further research is needed to determine whether the RPE abnormality in rpgra −/− zebrafish is associated with photoreceptor cell degeneration, especially degeneration of the rod outer segments. The function of RPGR ORF15 in the RPE is also worthy of further exploration.
Conclusion
In conclusion, we have successfully established a novel rpgra mutant zebrafish model that exhibits a retinal degenerative phenotype. Our findings confirm the essential role of Rpgra in opsin protein transport from inner segments through the connecting cilium to outer segments in the zebrafish retina. This model provides an opportunity for future investigation into the cellular function of RPGR ORF15 and elucidation of the underlying disease mechanisms, as well as enabling the development of drug candidates for the treatment of conditions caused by RPGR ORF15 mutations.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Ethics statement
The animal study was reviewed and approved by Laboratory Animal Center, Huazhong University of Science and Technology.
High Incidence of Appendiceal Neoplasms in the Elderly: A Critical Concern for Non-Surgical Treatment
Abstract Objective Appendiceal neoplasms (ANs) are rare tumors that are often discovered incidentally during histopathological examinations. The increasing incidence of ANs is a critical issue in the non-operative management of acute appendicitis. This study aimed to document the temporal trends over a 12-year period by analyzing the clinical presentation, imaging findings, and histopathological features of ANs. Subjects and Methods Health records of patients who underwent appendectomy from 2011 to 2022 were examined. Demographic and clinical data, laboratory results, imaging findings, and histopathological features were documented. The characteristics of both ANs and non-neoplastic cases were evaluated. Results A total of 22,304 cases were identified, of which 330 (1.5%) were diagnosed with ANs. The odds ratio for ANs increased with age, with the highest odds ratio observed in patients aged 70 or older. Receiver Operating Characteristic analysis showed that age and appendiceal diameter were significant predictors of ANs. An optimal age cut-off point of 28.5 years was determined, yielding a sensitivity of 72% and a specificity of 64%. For appendiceal diameter, the optimal cut-off was found to be 9.5 mm, exhibiting a sensitivity of 77% and a specificity of 56%. Conclusion Although the incidence of ANs remains relatively low, a steady increase has been observed over the past decade. The increasing rate of ANs raises concerns regarding non-surgical management options. The results of this study highlight the importance of considering ANs as a potential diagnosis in older patients and in patients with an appendix diameter greater than 9.5 mm. These findings may have implications for treatment and management.
Despite the diverse spectrum of ANs, they often present with appendicitis-like symptoms and are often discovered incidentally during histopathological examination [4, 5]. Imaging modalities such as ultrasonography and computed tomography (CT) are helpful in identifying features of ANs, such as an enlarged or thickened appendix, calcifications, or nodules [5, 6]. However, these findings are not specific to neoplasms and may also be seen in benign conditions. On the other hand, imaging features of ANs may be subtle or resemble those of appendicitis [5, 6].
Surgery is the gold standard of treatment for acute appendicitis (AA). However, studies suggest that medical therapy may be a safe alternative to surgery [7]. Avoiding the potential risks of surgery is a major advantage of medical treatment. However, non-surgical treatment options may lead to tumors being missed. Recent studies have shown that the prevalence of ANs can be as high as 28% in patients treated with non-surgical methods who subsequently underwent interval appendectomy [7-11]. These findings highlight the need for caution when considering non-surgical management of AA, particularly in patients who may be at higher risk of ANs. Therefore, defining the clinical, imaging, and histopathological features of ANs and their distribution by age may be helpful in selecting patients for medical treatment.
In this study, we documented temporal trend analyses of appendectomies performed over 12 years and evaluated the prevalence, clinical presentation, imaging, and histopathological features of ANs.
Study Design and Study Population
This was a multicenter retrospective observational cohort study. The digital health records of four tertiary referral hospitals (Erzincan Binali Yildirim University Mengücek Gazi Training and Research Hospital [TARH], Umraniye TARH, Sultan II. Abdulhamid Han TARH, and Eskisehir City Hospital) were examined, and patients who underwent appendectomy for presumed AA from 2011 to 2022 were identified. Histopathological and imaging features, demographics, and clinical data of the cases were collected. Cases with missing medical records, laboratory results, or imaging results were excluded. The slides of cases whose histopathological features were not clearly described in the pathology report were re-evaluated. Among these cases, those whose slides or blocks were not available in the pathology archive were also excluded. Six cases were excluded from the study due to missing data.
Approval for this study was granted by the Local Ethics Committee (approval number: 2023-04/02 dated February 16, 2023), and the research was carried out following the guidelines outlined in the Declaration of Helsinki.Patients provided informed consent for their participation and the publication of their clinical information.
Data Extraction
Data were extracted from the electronic health records, including demographic data (age, gender), clinical presentation, laboratory results (white blood cell [WBC] level), imaging findings, macroscopic features, histopathological features, and diagnosis.Information on the diameter of the appendix (distance between the outer walls of the appendix measured on gross examination) was extracted from pathology reports.
Classification of Cases
The cases were classified into four groups according to their histopathological features. Cases without evidence of inflammation (neutrophilic infiltration, mucosal ulceration, and fibrinopurulent exudate in the appendix lumen) were classified in a negative appendectomy group [12]. Cases with neutrophilic inflammatory infiltrates were classified in the appendicitis group [12]. Cases with a primary or metastatic tumor were classified in the ANs group [3]. Cases with other abnormalities, such as parasites, diverticular disease, endometriosis, polyps, etc., were classified in the unusual findings group.
Afterward, the ANs group was divided into four subgroups according to diagnosis. Subgroup I: low-grade/high-grade appendiceal mucinous neoplasms (LAMN/HAMN); subgroup II: neuroendocrine neoplasms (NENs); subgroup III: primary adenocarcinoma; and subgroup IV: secondary tumor infiltration.
Cases were also grouped according to age at diagnosis, and a total of six age groups were created (0-15 years, 16-39 years, 40-49 years, 50-59 years, 60-69 years, and over 70 years). Cases within the age range of 0-15 years were evaluated in the pediatric group, while the remaining cases were evaluated in the adult group. A diagram illustrating the selection of study participants and the classification of cases is provided in Figure 1.
Statistical Analysis
The statistical analysis was performed using SPSS software (version 25). Differences in age, gender, WBC, and appendiceal diameter among groups were assessed using the χ² or Kruskal-Wallis tests, depending on the data characteristics. The statistical confidence level was set at 0.95 (alpha = 0.05). Odds ratios (ORs) were calculated to evaluate the risk of developing ANs in different age groups. A logistic regression analysis was performed, and ORs along with their corresponding 95% CIs were determined. A receiver operating characteristic (ROC) analysis was conducted to evaluate the diagnostic performance of age and appendiceal diameter for predicting ANs. The sensitivity, specificity, and area under the ROC curve (AUC) were calculated. The optimal cutoff value was determined based on the maximum Youden Index. The Youden Index, a measure of the test's overall performance across all possible cutoff values, was calculated as follows: Youden Index = Sensitivity + Specificity − 1.
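A minimal Python sketch of the ROC/Youden-index cutoff selection described above (the study used SPSS; scikit-learn is assumed here, and the labels and ages are hypothetical, not the study data):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = appendiceal neoplasm, 0 = non-neoplastic; predictor = age
y = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1])
age = np.array([19, 24, 22, 31, 45, 27, 63, 38, 21, 72])

fpr, tpr, thresholds = roc_curve(y, age)      # ROC curve over all candidate cutoffs
auc = roc_auc_score(y, age)                   # area under the ROC curve
youden = tpr - fpr                            # Youden Index = sensitivity + specificity - 1
best = np.argmax(youden)                      # cutoff maximizing the Youden Index
print(f"AUC = {auc:.3f}, optimal cutoff = {thresholds[best]}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```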
A joinpoint regression analysis was conducted to determine the time trend of AN incidence from January 2011 to December 2022. The study period was divided into segments based on changes in the trend of AN incidence. The slope of the trend line between joinpoints was used to calculate the annual percentage change (APC) of AN incidence during that period. In the model, years (2011-2022) were considered independent variables, while the ratio of ANs to all appendectomy cases was included as the dependent variable. A logarithmic transformation was applied to the AN rate during modeling. The optimal model was designed using the least squares method, and the regression parameters were generated using grid search methods.
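As a rough sketch of how the APC can be derived for a single segment (assuming the usual log-linear definition APC = (e^b − 1) × 100, where b is the fitted slope of log-rate on year; the yearly rates below are hypothetical, not the study's):

```python
import numpy as np

# Hypothetical AN rates (%) per year within one joinpoint segment
years = np.array([2011, 2012, 2013, 2014])
rates = np.array([0.53, 0.78, 1.05, 1.40])

# Fit log(rate) = a + b*year by least squares, as in joinpoint regression
b, a = np.polyfit(years, np.log(rates), 1)   # polyfit returns [slope, intercept]
apc = (np.exp(b) - 1) * 100                  # annual percent change for this segment
print(f"APC = {apc:.1f}% per year")
```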
Results
A total of 22,310 patients who underwent an appendectomy between 2011 and 2022 were identified. Six cases were excluded from the study due to missing data. Of the 22,304 cases included in the final analysis, 13,181 (59%) were males and 9,123 (41%) were females, with a mean age of 26 years (range: 0-97 years, SD 16). The distribution of cases based on histopathological features was as follows: 3,473 (16%) cases in the negative appendectomy group, 18,169 (81%) cases in the appendicitis group, 332 (1.5%) cases in the unusual findings group, and 330 (1.5%) cases in the neoplasm group. The baseline demographic and histopathological features of the study population are presented in Table 1.
The overall rate of ANs in the cohort was 1.5% (one neoplasm per 67 appendectomies). Joinpoint analysis showed a significant change in the AN rate during the study period. The AN rate increased more rapidly in 2011-2014, followed by a slower increase from 2014 to 2020, and a possible decrease from 2020 to 2022. Changes in the incidence of ANs over time are presented in Table 2 and Figure 2.
In the pediatric group, the incidence of ANs was relatively low, with one neoplasm per 238 appendectomies (0.42%). NENs were the most common type, accounting for approximately 67% of all cases of ANs. LAMNs were the second most common type, accounting for 20% of the cases. Metastatic tumors were extremely rare, and only 2 cases showed infiltration of hematological malignancies in the subserosal area of the appendix. Representative examples of ANs are presented in Figure 3.
In the adult group, the most common types of neoplasms were LAMN/HAMN (44.8%), followed by NENs (43.5%), secondary tumor infiltration (8.1%), and primary adenocarcinoma (3.6%). We observed that the distribution of subtypes of ANs was strongly related to the age of patients (Table 3). Increasing patient age was correlated with a higher rate of adenocarcinoma and a lower rate of NENs. The majority (90%) of adenocarcinoma cases were detected in individuals over 40 years of age, whereas a significant proportion (75%) of NEN cases were detected in individuals under 40 years of age.
We also found that the age of the patients was strongly related to the rate of ANs. There was a significant increase in the rate of ANs with age, from 0.93% for those aged 16-29 years to 9.05% for those aged 70 years or older (shown in Fig. 4a). The mean age of patients in the ANs group was 46 years. There was a significant difference in the mean age at presentation between the ANs group and the other groups (p < 0.001) (shown in Fig. 4b). We also found that the odds of having an AN were significantly higher in patients aged 30 years and older compared to those younger than 30 years, with an OR of 1.36 (95% CI: 1.04-1.78) for patients aged 30-39 years, an OR of 1.92 (95% CI: 1.43-2.58) for those aged 40-49 years, an OR of 2.42 (95% CI: 1.72-3.41) for those aged 50-59 years, an OR of 4.47 (95% CI: 3.09-6.47) for those aged 60-69 years, and an OR of 7.31 (95% CI: 5.09-10.49) for patients aged 70 years or older. Furthermore, ROC analysis showed that age was a significant predictor of ANs, with an AUC of 0.734 (95% CI: 0.706-0.761, p < 0.001). The optimal cut-off point was found to be 28.5 years, with a sensitivity of 72% and a specificity of 64% (Fig. 4c).
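For illustration, odds ratios and 95% CIs of the kind reported above can be computed from a 2×2 contingency table; the sketch below uses the standard Woolf (log-OR) method with hypothetical counts, not the study data:

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR), Woolf method
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Hypothetical counts: ANs vs. non-neoplastic, aged >=70 vs. a younger reference group
or_, lo, hi = odds_ratio_ci(a=60, b=600, c=90, d=9600)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```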
In 90% of cases with ANs, the presenting complaint was right lower quadrant abdominal pain, and physical examination revealed tenderness or rebound tenderness. Ultrasonography and CT scans were performed in 40% and 60% of the cases, respectively. Preoperative imaging findings favored neoplasms or raised suspicion of neoplasms in only 16% of cases (n = 52). Among these cases, 1 had primary adenocarcinoma, 2 involved hematological malignancies, and 49 had LAMN or HAMN morphology. None of the NENs were detected by preoperative imaging. The mean preoperative WBC count was 11.5 × 10³/μL (range: 3.51-32.0) in cases of ANs and 14.31 × 10³/μL (range: 4-29.3) in non-neoplastic cases, while the standard WBC range in our laboratory is 4.5-11.0 × 10³/μL. We found a statistically significant difference in the preoperative WBC count between ANs and non-neoplastic cases (p < 0.001). Concurrent neutrophilic inflammatory infiltrates were detected in 68% of cases of ANs, and perforation was noted in 8% of cases.
The mean appendix diameter was 12.4 mm (5-36 mm) in ANs and 8.4 mm (4-30 mm) in non-neoplastic cases. The mean appendix diameter was significantly larger in ANs (p = 0.002). A ROC analysis was performed to evaluate the effect of appendix lumen diameter on the prediction of ANs. The optimal cut-off value was determined to be 0.95 cm (9.5 mm), generating an AUC of 0.746 (95% CI: 0.709-0.783, p < 0.001), with a sensitivity of 0.77 and a specificity of 0.56 for distinguishing ANs from non-neoplastic cases (shown in Fig. 5).
Discussion
In the current study, we analyzed the clinical, imaging, and histopathological features of ANs. The overall rate of ANs in our cohort was 1.5%, and the rate increased from 0.53% to 1.81% over the past 12 years. A similar pattern has also been observed in population-based studies, which have reported an increase in the incidence of ANs across different age groups, genders, and histological types [1, 12-15]. Various hypotheses have been proposed to explain the reasons; however, the reasons are not fully understood yet.
The increase in the incidence of ANs has been attributed mainly to changes in the rate of appendectomy. Johansson et al. [14] suggested that the increasing incidence of ANs may be related to the decreasing incidence of appendectomy, based on their hypothesis that removal of the appendix could potentially protect against the development of ANs. However, Singh et al. [13] reported an increase in the incidence of ANs despite no decrease in the rate of appendectomies. Another proposed explanation is that the increased rate of appendectomy may have played a role. As ANs are often discovered as incidental findings during appendectomies, the increased number of these procedures could be associated with an increase in the detection of tumors. However, Orchard et al. [15] noted only a small increase in the rate of appendectomies and stated that "the much larger increase in the incidence of ANs cannot be explained by the increase in appendectomies alone".
Our results highlight a different perspective. We observed that the increase in the ANs rate may be associated with the decrease in the rate of negative appendectomies. Over the course of our study, the rate of negative appendectomy decreased steadily, starting at 17.1% in 2011 and reaching 10.7% in 2022 (Fig. 6). Thus, a reduction in the number of negative appendectomies could lead to a proportional increase in the rate of ANs. This situation has also been highlighted by Singh et al. [13]. Furthermore, according to Johansson et al. [14], the reduction in the rate of negative appendectomy ensures the preservation of the appendix, which allows the possibility of observing any tumor development.
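The arithmetic behind this point is simple: if the number of ANs is roughly constant while fewer negative appendectomies shrink the denominator, the AN rate rises mechanically. A toy calculation with hypothetical counts (not the cohort's actual figures) illustrates the size of the effect:

```python
# Hypothetical: 15 ANs per year among a fixed pool of true appendicitis cases.
an_cases = 15
appendicitis_cases = 850

for year, neg_rate in (("2011", 0.171), ("2022", 0.107)):
    total = appendicitis_cases / (1 - neg_rate)   # total appendectomies performed
    print(f"{year}: negative rate {neg_rate:.1%} -> AN rate {an_cases/total:.2%}")
```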
Another potential explanation for the increasing incidence of ANs may be related to changes in techniques for pathological assessment. Studies suggest that a more extensive examination of specimens, with a greater representation of sections submitted for each case, may be influential in the detection of tumors [4, 16]. In our daily practice, we often submit the entire appendix for pathological examination. Among the subjects included in the study, there were cases where tumors were 2-3 mm in diameter; these tumors were not easily detected through macroscopic examination. However, due to the limited number of studies on this subject and their retrospective nature, it is not possible to make a definitive interpretation regarding the effect of pathological sampling on the incidence of ANs.
On the other hand, the aging population may have contributed to the increase in the incidence of ANs. As the incidence of primary adenocarcinoma and metastatic tumors is higher in the elderly, it seems likely that the
incidence of ANs will increase as the population ages. Our results showed that the potential for detecting a neoplasm increases with the age of the patient, with one neoplasm found in every 12 appendectomies over the age of 60. Our findings also showed that age is a good predictor of the risk of ANs. Patients over 28 years of age had an increased risk of ANs (4.4-fold), and 90% of primary adenocarcinomas were detected in patients over 40 years of age. In line with our findings, studies have reported that the age of the patient is associated with the risk of ANs. The incidence of ANs was reported to be higher in older patients [17], and increasing age has been found to be a risk factor for ANs in non-elective appendectomy [5, 18]. Patients over 40 years of age who underwent appendectomy were more likely to be diagnosed with ANs [19, 20]. Age over 50 years was identified as an independent risk factor for ANs, with ORs of 6.6 (95% CI: 3.0-14.7) and 3.6 (95% CI: 1.1-11.4), respectively [21, 22].
In our cohort, the presenting symptom of most cases of ANs was right lower abdominal pain, accompanied by guarding and/or rebound tenderness. Numerous studies have shown that ANs rarely have distinct clinical features and often present appendicitis-like symptoms [4, 5]. We observed that the mean preoperative WBC count in AN cases was 11.5 × 10³/μL, whereas the accepted standard WBC count in our laboratory ranges from 4.5 to 11.0 × 10³/μL. The mean WBC count showed a significant difference between AN and non-neoplastic cases (11.5 vs. 14.31 × 10³/μL). In agreement with our results, Koç and Çelik [23] showed that the preoperative WBC count in ANs was significantly lower than in non-neoplastic cases (9.3 vs. 12.8 × 10³/μL). Despite the higher WBC count in non-neoplastic cases compared to ANs, studies have shown that the WBC count cannot serve as a reliable diagnostic marker for appendicitis [24]. In our opinion, the reliability of using WBC as a single measure to determine neoplasm risk is not acceptable, as 68% of our cases with ANs showed concurrent neutrophilic infiltration, and cases of AA may exhibit normal WBC counts [25].
Studies indicate that imaging methods provide only limited assistance in diagnosing ANs [5, 6]. However, consideration of the diameter of the appendix may serve as a warning sign for ANs. Studies have reported that the mean normal appendix diameter can range from 5.6 ± 1.3 mm to 8.19 ± 1.6 mm on CT [26, 27]. Traditionally, an appendix diameter greater than 6 mm has been considered the cut-off point for diagnosing appendicitis [28]. In our cohort, the mean appendix diameter was significantly larger in ANs; the mean was 12.4 mm for ANs and 8.4 mm for non-neoplastic cases. Increased appendix diameter has been reported to be an independent risk factor for ANs, with an OR of 1.06 (95% CI: 1.01-1.12) for diameters greater than 10 mm and an OR of 3.2 (95% CI: 1.0-10.3) for diameters of 13 mm and greater, respectively [21, 23]. Furthermore, isolated dilatation in the distal segment of the appendix with a regular proximal segment has been shown to be highly associated with mucinous neoplasms [29].
Non-surgical treatment options have become more popular for AA in recent years. However, there is a concern that non-surgical treatments may lead to tumors being missed. The incidence of tumors was observed to be significantly higher in patients who underwent interval appendectomy than in those who underwent emergency appendectomy (12.6% vs. 1.2%) [11]. A high rate of tumors was also detected in patients who did not undergo interval appendectomy when closely followed up with imaging [10]. Therefore, assessment of risk factors for ANs may be useful in identifying patients for interval appendectomy or follow-up. The findings of this study suggest that surgeons should carefully consider the possibility of ANs in patients over 40 years of age with an appendix diameter of 0.95 cm or greater. Failure to diagnose ANs in these patients may result in tumor growth, stage migration, or adverse patient outcomes. Therefore, it is important for clinicians to be aware of the risk factors associated with ANs and to consider them in their diagnostic approach to appendicitis. Future research should aim to develop effective screening tools and diagnostic algorithms to improve the preoperative detection of ANs.
Our study has some limitations that may have affected the results. First, the retrospective nature of the study and the cohort, which included only patients who underwent an appendectomy for possible appendicitis, may limit the generalizability of our findings. In addition, our data may have been influenced by both the patient population and the treatment choices of surgeons at the study centers. Furthermore, our findings only include data from patients who underwent an appendectomy. Long-term follow-up of patients who received medical treatment might provide more comprehensive information on the risk of missed ANs.
Fig. 1. Diagram of the study design. Flowcharts are shown for the study participants and the classification of cases. Six cases with missing data were excluded from the study. *Collision tumors were detected in 5 cases.
Fig. 4. a Percentage of ANs among all appendectomies by age group. b Box plot of patient age by histopathological classification, showing the median and the 25th and 75th percentile values (horizontal bar, bottom, and top bounds of the box). c ROC curve analysis of patient age for prediction of cases with neoplasm. The area under the curve (AUC) shows the predictive power of patient age. The optimal cut-off value for age was 28.5 years, with a sensitivity of 72% and a specificity of 64%. ROC, receiver operating characteristic; AUC, area under the curve; CI, confidence interval.
Fig. 5. Box plot of the appendix diameter of ANs and non-neoplastic cases (including the negative appendectomy, appendicitis, and unusual findings groups), showing the median and the 25th and 75th percentile values (horizontal bar, bottom, and top bounds of the box) (a). ROC curve analysis of appendiceal diameter for the prediction of ANs; the area under the curve (AUC) indicates the predictive power of appendiceal diameter. The optimal cut-off value for diameter was 9.5 mm, achieving a sensitivity of 0.77 and a specificity of 0.56 (b).
Fig. 6. The temporal pattern of the ratio of negative appendectomy cases and appendicitis cases relative to the total number of appendectomies performed from 2011 to 2022.
Table 1. Baseline demographic characteristics of patients undergoing appendectomy, stratified by histopathological findings and age groups. *The χ² statistic is significant at the 0.05 level. **The Kruskal-Wallis test is significant at the 0.05 level.
Table 2. Temporal trends of cases according to the histopathological findings, 2011-2022.
Fig. 2. Joinpoint regression model with trends in the percentage of ANs from 2011 to 2022.
Table 3. The distribution and rate of ANs based on age groups.
Second quantization model for surface plasmon polariton in metallic nano wires
An effective Hamiltonian model is proposed in the second quantization representation for a system of surface plasmons and photons (polaritons) in metallic nanowires. The dispersion relation curves of the surface plasmon polariton were calculated by means of the Bogoliubov diagonalization method. The surface plasmon-photon vertices are considered. The conditions for surface plasmon excitation, the existence of plasmon radiative modes, and possible applications of metallic nanowires are also discussed.
Introduction
In the last ten years, the new field of plasmonics has developed, addressing the interesting features of nanostructured metals excited by electromagnetic radiation: the plasmon resonance.
The plasmon is a quasiparticle resulting from the quantization of plasma oscillations. Plasmons are collective oscillations of the free electron gas density. A plasmon can couple with a photon to create another quasiparticle called a plasmon polariton. Surface plasmons are plasmons confined to surfaces that interact strongly with light, resulting in a polariton. They occur at the interface of a vacuum and a material with a small positive imaginary and a large negative real dielectric constant, usually a metal or a doped dielectric.
Since for all these surface plasmon polariton (SPP) systems the downscaling of the cross section is not limited by the light wavelength, they represent a promising alternative to dielectric optical waveguides. Thus, there is a substantial interest in the fundamental properties of SPP propagation in nanoscale structured matter, which is determined by the respective dispersion relations.
Nowadays, the experimental determination of the dispersion relation in confined metallic films with nanoscopic cross sections (nanowires) is a challenging task, as methods such as attenuated total internal reflection spectroscopy cannot be applied easily because of the small size of the nanowires. An alternative experimental technique, conventional extinction spectroscopy, was proposed in [5], allowing the measurement of plasmon resonances in metallic nanowires of finite length. Absorption and scattering in a nanowire give rise to an extinction band, the maximum of which is used to define the frequency of the resonance of the SPP mode.
In the article [17], the experimental determination of the dispersion relation for Ag and Au nanowires was reported. Interestingly, it was shown that the multipolar plasmon resonances of metal nanowires can be described in terms of standing plasmon waves, allowing one to deduce the dispersion relation from optical extinction measurements. The proposed model is supported by additional investigations with a modified experimental setup.
The focus here is on metallic nanowire systems, which have already attracted great interest as SPP waveguides and can produce a strong evanescent field near their surface. An intrinsic problem restricting the applications of surface plasmons is energy loss. To overcome this obstacle, hybrid surface plasmon modes supported by a composite waveguide of metal, spacer and dielectric have been introduced.
In previous work [19], a simple second quantization Hamiltonian for the SPP in planar geometry was proposed.
In this research, we studied the quantum theory of surface plasmons and plasmon polaritons in metallic nanowires. The Drude model was reviewed first and then applied to metallic nanowires with cylindrical symmetry. Using Mie theory for small-radius metallic nanowires, the dispersion relation of surface plasmon polaritons was found. Based on those results, a simple two-branch surface plasmon polariton model was considered for metallic nanowires, with a definition of the plasmon-photon effective correlation constant. The SPP dispersion relation was found by applying the Bogoliubov diagonalization technique to the second quantization model Hamiltonian for the system of surface plasmons and photons in metallic nanowires. The comparison of the theoretical results obtained from our model with experimental data gives good agreement.
Drude model for single mode surface plasmon polaritons
In this part we present a simple semi-classical, Drude-like model for the single-mode surface plasmon polariton in a metallic nanoparticle.

In the semi-classical theory, the dielectric response of the surface plasmon is characterized by the Drude formula, the generalized Lorentz model for metals,

$$\varepsilon(\omega) = \varepsilon_\infty - \frac{\omega_P^2}{\omega^2 + i\gamma_P\omega},$$

where $\gamma_P$ is the damping constant, $\omega_{SP} = \eta\,\omega_P$ is the surface plasmon frequency with the parameter $\eta$ defined by the dimensions and geometry of the metallic nanoparticle, $\omega_P = \sqrt{ne^2/\varepsilon_0 m^*}$ is the bulk plasmon frequency, $\varepsilon_\infty$ and $\varepsilon_0$ are the high-frequency and static dielectric constants, $n$ is the density and $m^*$ is the effective mass of electrons in the metal. Considering $\gamma_P \ll \omega_{SP}$, i.e., a damping constant much smaller than the surface plasmon frequency, the standard Drude model reduces to

$$\varepsilon(\omega) \approx \varepsilon_\infty - \frac{\omega_P^2}{\omega^2}.$$

As in the case of the Lorentz model, we can separate the real and imaginary parts of the dielectric constant by analyzing the function around $\omega \approx \omega_{SP}$,

$$\mathrm{Re}\,\varepsilon(\omega) = \varepsilon_\infty - \frac{\omega_P^2}{\omega^2 + \gamma_P^2}, \qquad (3)$$

and for the absorption part

$$\mathrm{Im}\,\varepsilon(\omega) = \frac{\omega_P^2\,\gamma_P}{\omega\left(\omega^2 + \gamma_P^2\right)}.$$

It is noted here that the asymmetric absorption peak around $\omega_{SP}$ is the common behavior of plasmon absorption. For the case of a metallic half space with planar geometry, we have $\eta_{planar} = 1/\sqrt{1+\varepsilon_d}$, or $\eta_{planar} = 1/\sqrt{2}$ in vacuum ($\varepsilon_d = \varepsilon_0 = 1$). The dispersion law of the surface plasmon polariton, $\Omega(k)$, is defined from the boundary conditions.
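As a quick numerical check of this Drude response, the short Python sketch below evaluates the real and imaginary parts of ε(ω) at the planar surface plasmon frequency; the values of γ_P and ε_∞ are illustrative assumptions, and all frequencies are measured in units of ω_P:

```python
import numpy as np

eps_inf = 1.0      # assumed high-frequency dielectric constant
gamma_p = 0.01     # assumed damping, units of omega_P (gamma_P << omega_SP)

w = np.linspace(0.2, 2.0, 1000)                  # frequency in units of omega_P
eps = eps_inf - 1.0 / (w**2 + 1j * gamma_p * w)  # eps_inf - wP^2/(w^2 + i*gP*w)

w_sp = 1.0 / np.sqrt(eps_inf + 1.0)              # planar metal/vacuum: eta = 1/sqrt(2)
i_sp = np.argmin(np.abs(w - w_sp))
print(f"Re eps({w_sp:.3f} wP) = {eps.real[i_sp]:+.3f}")  # ~ -1 at the SP resonance
print(f"Im eps({w_sp:.3f} wP) = {eps.imag[i_sp]:+.4f}")  # small for weak damping
```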
Drude-like model for surface plasmon polaritons in nanowires
To calculate the group velocity $v_g = d\omega/dk$ as a function of the metallic nanowire radius $R$, a model consisting of a metallic cylinder with dielectric constant $\varepsilon_m$ surrounded by a dielectric medium of dielectric constant $\varepsilon_d$ was used (see figure 1).
For the special case of the TM mode ($H_z = 0$) and the fundamental mode with no winding ($m = 0$), continuity of the remaining tangential components $E_z$ and $H_\varphi$ at the boundary leads to the equation for the SPP dispersion relation. In this case the surface plasmon propagation is governed by the dispersion relation for the fundamental transverse magnetic mode, which is given by [16]

$$\frac{\varepsilon_m}{k_{m\perp}}\,\frac{J_0'(k_{m\perp}R)}{J_0(k_{m\perp}R)} = \frac{\varepsilon_d}{k_{d\perp}}\,\frac{H_0^{(1)\prime}(k_{d\perp}R)}{H_0^{(1)}(k_{d\perp}R)}, \qquad (6)$$

where $k_i = \sqrt{\varepsilon_i}\,\omega/c$, $k_i^2 = k_{i\perp}^2 + k_{i\parallel}^2$, and $J_m$, $H_m$ are Bessel and Hankel functions of the first kind, respectively. By numerically solving equation (6), we can calculate the group velocity as a function of $R$ for a given frequency $\omega$ and given $\varepsilon_d$ and $\varepsilon_m$. In the limit $k_{i\perp} = \sqrt{k_i^2 - k^2} \to ik$, the Bessel and Hankel functions are replaced by the modified Bessel functions $I_m$ and $K_m$, respectively. Taking the case of the metal/vacuum interface ($\varepsilon_d = 1$) with the Drude model for $\varepsilon_m$, the frequency $\omega_k$ follows from this boundary condition. In the Drude-like model, the SPP dispersion relation for metallic nanowires is

$$\omega_k = \frac{\omega_P}{\sqrt{\varepsilon_\infty + \varepsilon_d\,\dfrac{I_0(kR)\,K_1(kR)}{I_1(kR)\,K_0(kR)}}}, \qquad (10)$$

obtained by using the properties of the modified Bessel functions, and it is plotted in the figure for the cases $\varepsilon_\infty = 1$ and $\varepsilon_\infty = 3.7$ (Ag). For most metals $|\varepsilon_m|/\varepsilon_d \gg 1$; using the small-argument expansion $K_0(x) \approx -(\ln x + a)$, where $a = \gamma_E - \ln 2$ and $\gamma_E = 0.577$ is Euler's constant, and denoting $k = nk_0$, the dispersion relation can be expressed through a function $C(k_0R)$. For $k_0R \to 0$, the phase velocity $v_{ph} = c/n \to 0$, and the group velocity $v_{gr} = c/\left[\,d(n\omega)/d\omega\,\right] \to 0$.
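Relation (10) is straightforward to evaluate with the modified Bessel functions available in scipy. The sketch below assumes the closed form reconstructed above, with all parameters in reduced units:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

def omega_spp(k, R, eps_inf=1.0, eps_d=1.0, omega_p=1.0):
    """Nonretarded m = 0 SPP dispersion on a metal cylinder of radius R,
    from wP^2/w^2 = eps_inf + eps_d * I0(kR)K1(kR) / (I1(kR)K0(kR))."""
    x = k * R
    ratio = (i0(x) * k1(x)) / (i1(x) * k0(x))
    return omega_p / np.sqrt(eps_inf + eps_d * ratio)

k = np.linspace(0.05, 5.0, 200)            # wave vector in units of 1/R
w = omega_spp(k, R=1.0)
print(w[[0, 99, 199]])                     # approaches omega_p/sqrt(2) at large kR
```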
In the case of small-radius nanowires, $kR \leq 1$, we obtain the approximate dispersion relation (14) for the SPP. In figure 3, we plot the dispersion relations for the SPP obtained from equation (10) (solid curve) and from equation (14) (dashed line). For guidance, the photon line (thick) is also plotted. The dispersion relation from (14) is valid only for small nanowires.
Second quantization quantum Hamiltonian model for surface plasmon polaritons in metallic nano wires
In analogy with the cases of the exciton polariton and the phonon polariton, we consider a model Hamiltonian in second quantization form for the surface plasmon polariton in metallic nanowires with a single mode,

$$H = \sum_k \left[\hbar\omega_{\gamma k}\,a_k^{+}a_k + \hbar\omega_{P1}\,b_k^{+}b_k + \hbar g_k\left(a_k^{+}b_k + b_k^{+}a_k\right)\right],$$

where $a_k$ ($a_k^{+}$) and $b_k$ ($b_k^{+}$) are the annihilation (and creation) operators of the photon and plasmon, respectively, with momentum $k$, $\omega_{P1}$ is the surface plasmon energy with $m = 1$, and $\omega_{\gamma k} = ck/\sqrt{\varepsilon_d}$ is the photon dispersion law. We denote by $g_k$ the plasmon-photon transition vertex (or coupling constant). This vertex is absent in traditional plasmon theory, $g_{Bk} = 0$, because the bulk plasmon is a longitudinal excitation while the photon is a transverse excitation. For the case of surface plasmons, due to the boundary conditions that the electromagnetic waves must satisfy at the interface, the plasmon-photon transition vertex need not be zero, and it is the main parameter of our theory.
Bogoliubov transformation and dispersion relation
We use the Bogoliubov transformation technique, taken from the theory of superconductivity, to diagonalize the plasmon polariton Hamiltonian,

$$H = \sum_{i,k} \hbar\,\Omega_{ik}\,\gamma_{ik}^{+}\gamma_{ik},$$

where $\gamma_{ik}$ (and $\gamma_{ik}^{+}$) are the annihilation (and creation) operators of the surface plasmon polariton (SPP) with momentum $k$, and $i$ is the branch number: $i = L = 1$ for the lower and $i = U = 2$ for the upper branch. The transformations, with the unitarity condition $u_k^2 + v_k^2 = 1$, are

$$\gamma_{1k} = u_k a_k + v_k b_k, \qquad \gamma_{2k} = -v_k a_k + u_k b_k.$$

Using the commutation relations for the annihilation and creation operators, $[a_k, a_k^{+}] = 1$, $[b_k, b_k^{+}] = 1$, $[\gamma_{ik}, \gamma_{ik}^{+}] = 1$, with all other commutators equal to zero, by a standard calculation as in [4] we obtain the SPP dispersion relation for the lower branch,

$$\Omega_{Lk} = \frac{\omega_{P1} + \omega_{\gamma k}}{2} - \sqrt{\left(\frac{\omega_{P1} - \omega_{\gamma k}}{2}\right)^2 + g_k^2},$$

and for the upper branch,

$$\Omega_{Uk} = \frac{\omega_{P1} + \omega_{\gamma k}}{2} + \sqrt{\left(\frac{\omega_{P1} - \omega_{\gamma k}}{2}\right)^2 + g_k^2}.$$

Note that in the case of planar boundary geometry the upper branch lies in the energy gap, where damping is too high, so only the lower branch exists, while in the case of metallic nano-spherical geometry both branches can exist. The surface plasmon polariton dispersion relation $\Omega_k$, which depends on the wave vector $k$ and the coupling constant $g_k = 0.3\,\omega_P^{*}$, is presented in figure 4.
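The two branches above are simply the eigenvalues of the 2×2 one-mode plasmon-photon block. A compact numerical sketch in reduced units, with illustrative parameter values, is:

```python
import numpy as np

def polariton_branches(w_pl, w_ph, g):
    """Eigenvalues of the 2x2 plasmon-photon block [[w_ph, g], [g, w_pl]]:
    returns the (lower, upper) polariton branches."""
    avg = 0.5 * (w_pl + w_ph)
    split = np.sqrt((0.5 * (w_pl - w_ph))**2 + g**2)
    return avg - split, avg + split

k = np.linspace(0.01, 3.0, 300)
w_ph = k                                   # photon line, eps_d = 1, c = 1
w_pl = 0.707                               # illustrative surface plasmon energy
lower, upper = polariton_branches(w_pl, w_ph, g=0.3 * w_pl)
print(lower.max(), upper.min())            # anticrossing gap around w_pl
```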
Note that the simple two-band model of the surface plasmon polariton with an effective $g_k = \mathrm{const}$ might be best in the most important bottom region but may fail in the long-wave limit $k = 0$.
Coupling constant $g_k$ with $k$-dependence

As mentioned above, the simple two-band model of the surface plasmon polariton might be best in the most important neck region but may fail in the long-wave limit $k = 0$. In this part, we propose to overcome this problem by investigating the $k$-dependence of the plasmon-photon coupling constant.
Assuming that the Drude dispersion relation and the lower SPP branch of our quantum model, with $\omega_{\gamma k} \leq \omega_{P1}$, are equal, $\omega_{DW} = \Omega_{Lk}$, we obtain the equation for the plasmon-photon coupling constant $g_{Wk}$. The solution of this equation is

$$g_{Wk} = \sqrt{\left(\omega_{P1} - \omega_{DW}\right)\left(\omega_{\gamma \tilde{k}} - \omega_{DW}\right)},$$

where for the SPP lower branch $\tilde{k} = k$ if $k < k_{P1}$ and $\tilde{k} = k_{P1}$ if $k \geq k_{P1}$. The value of the plasmon-photon coupling constant $g_k$ as a function of the wave vector $k$ at the metal/vacuum interface ($\varepsilon_d = 1$) is plotted in figure 5, and it will be taken as the parameter of our model.
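Given the lower-branch eigenvalue identity $(\Omega - \omega_{\gamma k})(\Omega - \omega_{P1}) = g_k^2$, the coupling that forces the lower branch through a target Drude frequency follows directly. The sketch below assumes that reconstructed identity and uses illustrative values:

```python
import numpy as np

def g_from_drude(w_dw, w_pl, w_ph):
    """Coupling g_k that makes the lower polariton branch equal w_dw,
    from (w_dw - w_ph)(w_dw - w_pl) = g^2; the product is positive when
    w_dw lies below both w_ph and w_pl, as it does for the lower branch."""
    prod = (w_pl - w_dw) * (w_ph - w_dw)
    return np.sqrt(np.maximum(prod, 0.0))

print(g_from_drude(w_dw=0.5, w_pl=0.707, w_ph=0.6))  # ~0.14 in reduced units
```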
With this plasmon-photon coupling constant, the dispersion relation obtained from our model is presented in figure 6.
The two branches of the surface plasmon polariton of metallic nanowires (solid and dashed curves) are plotted in figure 7, together with the proposed coupling constant $g_k$ (dot-dashed curve).
Note the existence of the upper branch. This branch might play an important role in some physical phenomena.
Discussion
In this work, we reviewed and studied several semiclassical and quantum models of surface plasmons and surface plasmon polaritons in cylindrical symmetry. We proposed a simple two-branch model Hamiltonian for the surface plasmon polariton in second quantization representation for metallic nanowires. The main parameter of the model, the surface plasmon-photon transition vertex (the effective plasmon-photon coupling constant), was obtained. We compared the two cases of planar and cylindrical geometries. For the case of planar geometry, special conditions are required for SPP excitation, while no such conditions are needed for the case of cylindrical geometry. The existence of the upper surface plasmon polariton branch, with its photon-like behavior at high frequency, may play an important role in explaining some experimental results.
The Study of Functional Outcome of Lumbar Spine Disorders Treated with Laminectomy: The Surgical Management
Central spinal stenosis denotes involvement of the area between the facet joints, which includes the dura and its contents. The causes of stenosis here are a protruding disc, bulging annulus, osteophyte formation or thickened ligamentum flavum. Central canal stenosis clinically presents as claudication, and lateral canal stenosis presents as radiculopathy. The lateral recess, also referred to as Lee's entrance zone, begins at the lateral border of the dura and extends to the medial border of the pedicle; this is where the nerve root exits. The lateral canal is divided into the entrance zone, mid zone and exit zone. The causes of stenosis here are lateral disc herniation, thickened ligamentum flavum extending into the foramen, facet arthritis or spondylolisthesis. Laminectomy is the procedure of choice, especially in the elderly. The present study was done to find out the functional outcome of lumbar spine disorders treated with laminectomy.
Introduction
Park et al. [12] did a retrospective comparative study looking at the SPORT study results to determine the effect of multilevel stenosis on surgical and conservative treatment outcomes. Patients with multiple levels of stenosis had somewhat less severe pain at baseline on the SF-36 bodily pain scale compared to those with one or two levels. Patients with single-level stenosis were less likely to present with neurogenic claudication (p < 0.001) and more likely to report dermatomal pain radiation. Other baseline symptoms were similar across groups. When comparing surgical to conservative treatments for one-, two- and three-level isolated stenosis, there was a significant surgical treatment effect in most outcome measures within each subgroup at each time point. The only significant difference in treatment effects between subgroups was at two years for patient satisfaction with symptoms.
Laminectomy is the procedure of choice, especially in the elderly. Central spinal stenosis denotes involvement of the area between the facet joints, which includes the dura and its contents; the causes of stenosis here are a protruding disc, bulging annulus, osteophyte formation or thickened ligamentum flavum. Central canal stenosis clinically presents as claudication, and lateral canal stenosis presents as radiculopathy. The lateral recess, also referred to as Lee's entrance zone, begins at the lateral border of the dura and extends to the medial border of the pedicle; this is where the nerve root exits. The lateral canal is divided into the entrance zone, mid zone and exit zone. The causes of stenosis here are lateral disc herniation, thickened ligamentum flavum extending into the foramen, facet arthritis or spondylolisthesis [1-10].
Weinstein et al. [11] combined the randomized and observational cohorts of patients with spinal stenosis (SpS); those treated surgically showed significantly greater improvement in pain, function, satisfaction, and self-rated progress over four years compared to patients treated non-operatively. Results in both groups were stable between two and four years.
Patients with single-level stenosis had a smaller difference in satisfaction between surgery and conservative treatment, that is, a smaller treatment effect than the other two groups. This study provides Level III therapeutic evidence that patients with spinal stenosis without associated degenerative spondylolisthesis or scoliosis can be managed nonoperatively irrespective of the number of levels involved. The effect of surgical intervention was not influenced by the number of stenotic levels.
Amundsen et al. [13] did a case-control, comparative study of 100 patients with symptomatic spinal stenosis. Atlas et al. [14] studied the long-term outcomes of surgical and nonsurgical management of lumbar canal stenosis over 8 to 10 years of follow-up in a prospective observational cohort study. Of 148 eligible consenting patients initially enrolled, 105 were alive after 10 years (67.7% survival rate). Among surviving patients, long-term follow-up between 8 and 10 years was available for 97 of 123 (79%) patients (including 11 patients who died before the 10-year follow-up but completed an 8- or 9-year survey); 56 of 63 (89%) initially treated surgically and 41 of 60 (68%) initially treated nonsurgically. Patients undergoing surgery had worse baseline symptoms and functional status than those initially treated nonsurgically. Outcomes at 1 and 4 years favored initial surgical treatment. After 8 to 10 years, a similar percentage of surgical and nonsurgical patients reported that their low back pain was improved (53% vs. 50%, p = 0.8), that their predominant symptom (either back or leg pain) was improved (54% vs. 42%, p = 0.3), and that they were satisfied with their current status (55% vs. 49%, p = 0.5). These treatment group findings persisted after adjustment for other determinants of outcome in multivariate models. However, patients initially treated surgically reported less severe leg pain symptoms and greater improvement in back-specific functional status. The present study was done to find out the functional outcome of lumbar spine disorders treated with laminectomy.
Aims and Objectives
To study the functional outcome of lumbar spine disorders treated with laminectomy.
Materials and Methods
This study was done in the Department of Orthopedics, Srinivas Institute of Medical Sciences, Mangalore. Thirty patients who were treated with laminectomy procedures were selected randomly, and their functional scores were studied.
Exclusion criteria: 1. Old fracture of the spine.
Statistical analysis was performed using SPSS software (2015).
Discussion
Mariconda et al. [15] reported an incompletely randomized, prospective study of 44 patients comparing single- or multilevel laminectomy in patients with mild to moderate leg pain to patients treated with medical/interventional therapy. Twenty-two patients were assigned to each group, although only 32 of the 44 patients were truly randomly assigned. The mean functional status at one year was improved in both groups. Conservative treatment consisted of bed rest, use of a semirigid orthosis, physical therapy and an appropriate exercise program. Outcomes were assessed using the Beaujon Scoring System. At four years, good results were seen in 68% of the surgical group and 33% of the medical/interventional group. Only 2.6% of patients experienced an increase in their spondylolisthesis. There was a reoperation rate of 9% and a crossover rate of 9%. Arinzon et al. [14] performed a prognostic case-control study investigating the effect of decompression for lumbar spinal stenosis in elderly diabetic patients.
Arinzon et al. [16] did a retrospective, prognostic study of the effects of age on decompressive surgery for lumbar spinal stenosis. 283 patients were grouped according to age: one group was aged 65-74 years and the second group was over 75 years old. Follow-up was up to 42 months, with a minimum of nine months. Within both treatment groups there was a significant (p < 0.0001) subjective improvement in low back and radicular pain as well as in the ability to perform daily activities. When compared to preoperative levels, the scores for pain while performing daily activities were significantly improved (p < 0.001) in both treatment groups. The authors concluded that the overall postoperative complication rate was similar between the groups and that age is not a contraindication for surgical decompression of lumbar spinal stenosis. Both groups were equally likely to suffer minor perioperative complications.
The study included 62 diabetic patients and 62 gender- and age-matched nondiabetic controls. The mean follow-up was 40.3 months. Comorbidities were assessed, and outcomes were measured using the visual analog scale (VAS), basic activities of daily living (BADL) and walking distance. The authors concluded that decompression for symptomatic spinal stenosis is beneficial in elderly diabetic patients. However, the results are related to successful pain reduction, physical and mental health status, severity of clinical presentation, insulin treatment and duration of diabetes. The benefits in diabetic patients are lower than in nondiabetic patients with regard to symptom relief, satisfaction, BADL function and rate of complications.
Conclusion
In this study the functional outcome was better for a period of 6 months after surgery.
Persistent vs Recurrent Cushing's Disease Diagnosed Four Weeks Postpartum
Background Cushing's disease (CD) recurrence in pregnancy is thought to be associated with estradiol fluctuations during gestation. CD recurrence in the immediate postpartum period in a patient with a documented dormant disease during pregnancy has never been reported. Case Report. A 30-year-old woman with CD had improvement of her symptoms after transsphenoidal resection (TSA) of her pituitary lesion. She conceived unexpectedly 3 months postsurgery and had no symptoms or biochemical evidence of recurrence during pregnancy. After delivering a healthy boy, she developed CD 4 weeks postpartum and underwent a repeat TSA. Despite repeat TSA, she continued to have elevated cortisol levels that were not well controlled with medical management. She eventually had a bilateral adrenalectomy. Discussion. CD recurrence may be higher in the peripartum period, but the link between pregnancy and CD recurrence and/or persistence is not well studied. Potential mechanisms of CD recurrence in the postpartum period are discussed below. Conclusion We describe the first report of recurrent CD that was quiescent during pregnancy and diagnosed in the immediate postpartum period. Understanding the risk and mechanisms of CD recurrence in pregnancy allows us to counsel these otherwise healthy, reproductive-age women in the context of additional family planning.
Introduction
Despite a relatively high prevalence of Cushing's syndrome (CS) in women of reproductive age, it is rare for pregnancy to occur in patients with active disease [1]. Hypercortisolism leads to infertility through impairment of the hypothalamic gonadal axis. Additionally, while Cushing's disease (CD) is the leading etiology of CS in nonpregnant adults, it is less common in pregnancy, accounting for only 30-40% of the CS cases in pregnant women [2]. It has been suggested that in CD there is hypersecretion of both cortisol and androgens, impairing fertility to a greater extent, while in CS of an adrenal origin, hypersecretion is almost exclusively of cortisol with minimal androgen production [3]. Regardless of the cause, active CS in pregnancy is associated with a higher maternal and fetal morbidity, hence, prompt diagnosis and treatment are essential.
Pregnancy is considered a physiological state of hypercortisolism, and the peripartum period is a common time for women to develop CD [3,4]. A recent study reported that 27% of reproductive-age women with CD had onset associated with pregnancy [4]. The high rate of pregnancy-associated CD suggests that the stress of pregnancy and peripartum pituitary corticotroph hyperstimulation may promote or accelerate pituitary tumorigenesis [4-6]. During pregnancy, the circulating levels of corticotropin-releasing hormone (CRH) in the plasma increase exponentially as a result of CRH production by the placenta, decidua, and fetal membranes rather than by the hypothalamus. Unbound circulating placental CRH stimulates pituitary ACTH secretion and causes maternal plasma ACTH levels to rise [4]. A review of the literature reveals many studies of CD onset during the peripartum period, but CD recurrence in the peripartum period has only been reported a handful of times [7-10]. Of these, most cases recurred during pregnancy. CD recurrence in the immediate postpartum period has only been reported once [7]. Below, we report for the first time a case of CD recurrence that occurred 4 weeks postpartum, with a documented dormant disease throughout pregnancy.
Case Presentation
A 30-year-old woman initially presented with prediabetes, weight gain, dorsal hump, abdominal striae, depression, lower extremity weakness, and oligomenorrhea, with a recent miscarriage 10 months earlier. Diagnostic tests were consistent with CD. Results included the following: three elevated midnight salivary cortisols: 0.33, 1.38, and 1.10 μg/dL (<0.010-0.090); 1 mg dexamethasone suppression test (DST) with cortisol 14 μg/dL (<1.8); elevated 24 hr urine cortisol (UFC) measuring 825 μg/24 hr (6-42); ACTH 35 pg/mL (7.2-63.3). MRI of the pituitary gland revealed a left 4 mm focal lesion (Figure 1(a)). After transsphenoidal resection (TSA), day 1, 2, and 3 morning cortisol values were 18, 5, and 2 μg/dL, respectively. Pathology did not show a definitive pituitary neoplasm. She was rapidly titrated off hydrocortisone (HC) by six weeks postresection. Her symptoms steadily improved, including improved energy levels, improved mood, and resolution of striae. She resumed normal menses and conceived unexpectedly around 3 months post-TSA. Hormonal evaluation completed a few weeks prior to her pregnancy indicated no recurrence: morning ACTH level, 27.8 pg/mL; UFC, 5 μg/24 hr; midnight salivary cortisol, 0.085 and 0.014 μg/dL. Her postop MRI at that time did not show a definitive adenoma (Figure 1(b)). During pregnancy, she had a normal oral glucose tolerance test at 20 weeks and no other sequelae of CD. Every 8 weeks, she had 24-hour urine cortisol measurements. Of these, the highest was 93 μg/24 hr at 17 weeks, and none were in the range of CD (Table 1). Towards the end of her 2nd trimester, she started to complain of severe fatigue. Given her low 24 hr urine cortisol level of 15 μg/24 hr at 36 weeks gestation, she was started on HC. She underwent a cesarean section at 40 weeks gestation for oligohydramnios and subsequently delivered a healthy baby boy weighing 7.6 pounds, with APGAR scores at 1 and 5 minutes of 9 and 9. HC was discontinued immediately after delivery. Around four weeks postpartum she developed symptoms suggestive of CD. Diagnostic tests showed elevated midnight salivary cortisols of 0.206 and 0.723 μg/dL and a 24-hour urine cortisol of 400 μg/24 hr. Pituitary MRI showed a 3 mm adenoma in the left posterior region of the gland, which was thought to represent a recurrent tumor (Figure 1(c)). A discrete lesion was found and resected during repeat TSA. Pathology confirmed a corticotroph adenoma with MIB-1 < 3%. On postoperative days 1, 2, and 3, the cortisol levels were 26, 10, and 2.8 μg/dL, respectively. She was tapered off HC within one month. Her symptoms improved only slightly, and she continued to report weight gain, muscle weakness, and fatigue. Three months after repeat TSA, biochemical data showed 1 out of 2 midnight salivary cortisols elevated at 0.124 μg/dL and an elevated urine cortisol of 76 μg/24 hr. Pituitary MRI demonstrated a 3 × 5 mm left enhancement, concerning for residual or enlarged persistent tumor. Subsequent lab work continued to show a biochemical excess of cortisol, and the patient was started on metyrapone but reported no significant improvement of her symptoms and only mild improvement of the excess cortisol. After a multidisciplinary discussion, the patient made the decision to pursue bilateral adrenalectomy, as she refused further medical management and opted against radiation given the risk of hypogonadism.
Discussion
The symptoms and signs of Cushing's syndrome overlap with those seen in normal pregnancy, making diagnosis of Cushing's disease during pregnancy challenging [1]. Potential mechanisms of gestational hypercortisolemia include increased systemic cortisol resistance during pregnancy, decreased sensitivity of plasma ACTH to negative feedback causing an altered pituitary ACTH setpoint, and noncircadian secretion of placental CRH during pregnancy causing stimulation of the maternal HPA axis [5]. Consequently, both urinary excretion of cortisol and late-night salivary cortisol undergo a gradual increase during normal pregnancy, beginning at the 11th week of gestation [2]. Cushing's disease is suggested by 24-hour urinary free cortisol levels greater than 3-fold the upper limit of normal [2]. It has also been suggested that nocturnal salivary cortisol be used to diagnose Cushing's disease using the following trimester-specific thresholds: first trimester, 0.25 μg/dL; second trimester, 0.26 μg/dL; third trimester, 0.33 μg/dL [11]. By these criteria, our patient had no signs or biochemical evidence of CD during pregnancy but developed CD 4 weeks postpartum.
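As a concrete reading of those cut-offs, the small sketch below flags a nocturnal salivary cortisol value against the trimester-specific thresholds quoted above; the function name and structure are illustrative only, and this is of course not a validated diagnostic tool:

```python
def suggests_cushings(salivary_cortisol_ug_dl: float, trimester: int) -> bool:
    """Return True if a nocturnal salivary cortisol exceeds the
    trimester-specific threshold quoted in the text (ug/dL)."""
    thresholds = {1: 0.25, 2: 0.26, 3: 0.33}
    return salivary_cortisol_ug_dl > thresholds[trimester]

print(suggests_cushings(0.30, 2))  # True: exceeds the 2nd-trimester cut-off
print(suggests_cushings(0.30, 3))  # False: below the 3rd-trimester cut-off
```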
A recent study by Tang et al. proposed that there may be a higher risk of developing CD in the peripartum period, but it did not test for CD during pregnancy and therefore was not able to say definitively when CD onset occurred in relation to pregnancy [4]. Previous literature suggests that there may be a higher risk of ACTH-secreting pituitary adenomas following pregnancy, as there is a significant surge of ACTH and cortisol at the time of labor. This increased stimulation of the pituitary corticotrophs in the immediate postpartum period may promote tumorigenesis [6]. It has also been suggested that the hormonal milieu during pregnancy may cause accelerated growth of otherwise dormant or small, slow-growing pituitary corticotroph adenomas [4,5]. However, the underlying mechanisms of CD development in the postpartum period have yet to be clarified. We highlight the need for more research to investigate not only the development but also the risk of recurrence of CD in the postpartum period. Such research would be helpful for family planning.
Conclusion
Hypothalamic-pituitary-adrenal axis activation during pregnancy and the immediate postpartum period may result in higher rates of CD recurrence in the postpartum period, as seen in our patient. In general, more testing for CS in all reproductive-age females with symptoms suggesting CS, especially during and after childbirth, is necessary. Such testing can also help us determine when CD occurred in relation to pregnancy, so that we can further understand the link between pregnancy and CD occurrence, recurrence, and/or persistence. Learning about the potential mechanisms of CD development and recurrence in pregnancy will help us to counsel these reproductive-age women who desire pregnancy.
Data Availability
The data used to support the findings of this study are included within the article.
Additional Points
Note: Peripartum refers to the period immediately before, during, or after pregnancy, and postpartum refers to any period after pregnancy up to 1 year after delivery. Disclosure: This case report is a follow-up to an abstract that was presented in ENDO 2020 Abstracts. https://doi.org/10.1210/jendso/bvaa046.2128.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Effect of a Novel Curcumin Formulation on Adaptogenic and Endogenous Anti-oxidant/Oxidative Stress in a Chronic Mild Unpredictable Stress Model in Rats
This study investigates the antioxidant effect of a curcumin formulation (UltraSol CurcuWin) in stress-induced rats. Fisher 344 N male rats aged 6-8 weeks were chosen for the study. 30 animals were divided into 5 groups: I, II, III, IV and V. Group I was taken as the control group and was administered only the vehicle. All other groups were administered various stressors every 24 hours over a period of 3 weeks. Group II, which was administered stressors and vehicle, was used as a positive control. Curcumin powder 95% was administered to Group III at a dose of 100 mg/kg/day. UltraSol CurcuWin was administered perorally at 100 mg/kg/day and 200 mg/kg/day to Groups IV and V, respectively. The antioxidant effect was evaluated based on behavioral studies and tissue necropsy studies. The data revealed that UltraSol CurcuWin provided significant protection against loss of body weight (p<0.01) and ameliorated anxiety (p<0.01) and depression behavior (p<0.01). Tissue necropsy studies indicated that stressed animals had significantly lower levels of glutathione (GSH) in brain (p<0.01), heart (p<0.01) and liver (p<0.05); lipid peroxide (LPO) was significantly higher in brain (p<0.05), heart (p<0.01), liver (p<0.05) and kidney (p<0.01); the activity of catalase decreased in the brain (p<0.05), heart (p<0.01) and kidney (p<0.01); and the levels of superoxide dismutase (SOD) were significantly lower (p<0.01) in brain, heart and kidney. UltraSol CurcuWin 20% treatment ameliorated these stress-induced changes. There was no significant difference between the plasma cortisol levels of unstressed rats and those administered high-dose UltraSol CurcuWin 20% (Group V). There was no effect on protein content between the control and treatment groups. Histology studies on the hypothalamus indicated no stress-related or treatment-related changes. In conclusion, UltraSol CurcuWin showed a potential role in endocrine function and also demonstrated the ability to alleviate stress-induced changes. UltraSol CurcuWin can act as a potent adaptogen.
Introduction
Curcumin is a versatile compound with a plethora of pharmacological activities. Curcumin has been shown to exhibit antioxidant, anti-inflammatory [1-4], antimicrobial, and anticarcinogenic [5-9] activities. Additionally, the hepato- and nephro-protective [10-12], thrombosis-suppressing [13], myocardial infarction-protective [14-16], hypoglycemic [17-20], and antirheumatic [21] effects of curcumin are also well established. Various animal models [22,23] and human studies [24-27] have shown that curcumin is extremely safe even at very high doses. For example, three different phase I clinical trials indicated that curcumin, when taken at doses as high as 12 g per day, is well tolerated [25-27]. Similarly, the efficacy of curcumin in various diseases including cancer has been well established [28]. Several clinical studies dealing with the efficacy of curcumin in humans can also be cited [29,30]. The pharmacological safety and efficacy of curcumin make it a potential compound for the treatment and prevention of a wide variety of human diseases. Different strategies have been pursued to improve the absorption of curcumin, including nanocrystals, emulsions, liposomes, self-assemblies and nanogels [55]. In animals, coadministration of curcumin with an extract obtained from black pepper has been shown to increase the absorption (AUC) of curcumin by 1.5-fold, whereas a complex of curcumin with phospholipids increased absorption by 3.4-fold [56], and a formulation of curcumin with a micellar surfactant (polysorbate) has been shown to increase the absorption of curcumin in mice 9.0-fold [57]. A microemulsion system of curcumin, which consists of Capryol 90 (oil), Cremophor RH40 (surfactant), and Transcutol P aqueous solution (co-surfactant), has been shown to increase the relative absorption in rats by 22.6-fold [58]. Polylactic-co-glycolic acid (PLGA) and PLGA-polyethylene glycol (PEG) (PLGA-PEG) blend nanoparticles increased curcumin absorption by 15.6- and 55.4-fold, respectively, compared to an aqueous suspension of curcumin in rats [59].
Food-grade formulations to enhance the absorption of curcumin have been studied in human clinical trials [60][61]. A proprietary formulation of curcumin has been developed retaining and utilizing more components of the raw turmeric root which are usually eliminated during extraction. The combination of curcuminoids and volatile oils of turmeric rhizome (CTR) resulted in a 6.9-fold increase in human absorption of curcumin [60]. The inclusion of curcumin in a lipophilic matrix (Phytosomes, Curcumin: Soy Lecithin: Microcrystalline Cellulose 1:2:2, CP) has been shown to increase the relative human absorption of curcumin by 19.2-fold [61].
A formulation made by mixing curcumin with glycerin, gum ghatti, and water, followed by wet milling and dispersion by high-pressure homogenization, has been shown to increase curcumin appearance in the blood by 27.6-fold [62]. A novel curcumin formulation, made water soluble by dispersing curcumin and antioxidants (tocopherol and ascorbyl palmitate) on a water-soluble carrier such as polyvinyl pyrrolidone, has been shown to have greater antidepressant action compared to conventional curcumin [63]. One study conducted at our research center demonstrated that a combination of a hydrophilic carrier, cellulosic derivatives and natural antioxidants significantly increases curcuminoid appearance in the blood in comparison to unformulated standard curcumin [64].
Although the molecular mechanisms of action of curcumin are not fully understood, curcumin has proven to be a safe agent for the treatment of various ailments. Our formulation has demonstrated significantly improved solubility in vitro and is therefore expected to have greater bioavailability. Under routine conditions, curcumin has proved to be a remedy for various disorders that arise due to oxidative stress; a formulation with improved bioavailability should therefore eliminate or decrease stress-induced changes. In this study we investigated the effectiveness of our curcumin formulation (UltraSol CurcuWin) on adaptogenic and endogenous antioxidant/oxidative stress responses in a chronic mild unpredictable stress model in rats and compared it with curcumin powder.
Instrumentation
A spectrophotometer (25, Thermo Fisher Scientific, USA) was used for all estimations. Microscopic examination was performed using a Motic DMB1-2MP microscope (China). Data analysis was performed using GraphPad Prism version 5.0 software for deriving statistical parameters.
Management of animals
The institutional animal ethics committee of Sri Ramachandra University, Chennai, approved the study protocol. The study was performed at CEFT, which is an approved laboratory vide registration number 189/PO/bc/1999/CPCSEA. All ethical practices laid down in the guidelines for animal care were followed during the study. Fisher 344 N male rats aged 6-8 weeks were used in this study, with the experiment lasting for a period of 3 weeks. The animals were procured from the breeding stock of the National Institute of Nutrition, Hyderabad, India. Upon procurement, the rats were kept at the Center for Toxicology and Developmental Research, Sri Ramachandra University, Chennai. Animals were randomized based on the stratified body weight method, divided into 5 groups of 6 animals each (I, II, III, IV and V), and acclimatized for 5 days. The animals were marked with a black permanent marker at the base of the tail. Standard rodent feed purchased from M/s Provimi Animal Nutrition India was provided ad libitum. Milli-Q RiOs filtered water was provided ad libitum.
The animal room was well ventilated with 12-15 air
exchanges/hour and maintained at a temperature range of 19-23 °C and humidity ranging from 30-70% throughout the study. The animal room had automatic 12-h light and dark cycles. Animals were housed in polypropylene cages with de-dusted and autoclaved husk as bedding material. Caging and bedding materials were changed on alternate days.
Administration of compounds and Induction of stress
Group I was taken as the control group and was administered the vehicle. All other groups were administered various stressors every 24 hours over a period of 3 weeks. Group II, which was administered stressors and vehicle, was used as a positive control. Curcumin powder 95% was administered perorally at 100 mg/kg/day to Group III. UltraSol CurcuWin 20% was administered perorally at 100 mg/kg/day and 200 mg/kg/day to Groups IV and V, respectively [65,66]. Test/reference compounds were formulated freshly prior to administration every day using 0.05% w/v CMC as the vehicle. Test/reference items were administered 30-45 minutes before the induction of stress by oral gavage using a ball-tipped 18G needle and a polypropylene syringe. The dose volume was fixed at 10 mL/kg body weight. Stressors were administered using the procedure described by Wu et al. [67]. Accordingly, except for the 24-hour stressors, all stressors were administered once daily in the morning between 9.30 and 12.30. A complete schedule of the different stressors administered during the 21-day study is given in Table 1. The different stressors were administered at an interval of at least 7 days. With the exception of the cold swim stress, which was administered 3 times during the study, all other stressors were administered at least twice during the study period.
Measurement of anxiety in elevated plus maze
Anxiety behavior was assessed using the elevated plus maze test following the Lister method [68]. The elevated plus maze was made of wood painted black, and consisted of two opposite open arms (50 × 10 × 0.5 cm), an open platform (10 × 10 cm) in the center, and two opposite closed arms (50 × 10 × 40 cm). The entire maze was elevated 50 cm from the floor in a dimly lit room (20 lux). The rat was placed on the central platform facing one of the open arms. The time spent in and the number of entries into the open and closed arms were recorded for a period of three minutes. An entry was counted when all four paws were placed in one arm. The maze was cleaned following each trial to remove any residue or odors.
Measurement of depression in open field exploratory test
Depression was assessed using the open field test following Lister [69]. The open field maze was made of black wood and consisted of a floor (96 × 96 cm) with 50 cm walls. The apparatus was illuminated with a low-intensity diffuse light (45 W) situated 45 cm above floor level. The box floor was painted with white lines (6 mm) to form 16 equal squares. During a three-minute observation period, with the rat placed at one corner of the apparatus facing the wall, the number of squares crossed (ambulation), rearing, grooming, immobility period, urination and fecal pellets were recorded. The maze was cleaned following each trial to remove any residue or odors.
Histology studies
All animals were euthanized at the end of the experimental period using anesthetic ether in closed chamber. Adrenal glands, Pituitary gland and brain was collected and fixed in 10% neutral buffered formalin for 48 hours. The tissues were processed for paraffin embedment, sectioned and then stained with Hematoxylin and Eosin [70] for microscopic evaluation.
Biochemical Studies
After completion of the study, the enzymatic antioxidant levels in the brain, heart, liver and kidney were estimated using standard protocols. The antioxidant markers superoxide dismutase (Kakkar et al. [71]), catalase (Beers & Sizer [72]), lipid peroxidation (Ohkawa et al. [73]) and reduced glutathione (Jollow et al. [74]) were estimated, along with total protein (Lowry et al. [75]), using methods available in the literature. All methods were standardized in the laboratory before testing the study samples. The stress hormone cortisol was estimated by an ELISA method [76]. Values were expressed as mean ± SEM (n = 6 animals/group); mean differences between the groups were analyzed using one-way ANOVA followed by Tukey's multiple comparison test in GraphPad Prism 5.0.
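The paper's statistics were run in GraphPad Prism; an equivalent open-source sketch of one-way ANOVA followed by Tukey's test, using simulated values standing in for one marker rather than the study data, would look like this:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Simulated SOD activities for three of the five groups (n = 6 each):
groups = {"control": rng.normal(10, 1, 6),
          "stress": rng.normal(6, 1, 6),
          "treated": rng.normal(9, 1, 6)}

f, p = stats.f_oneway(*groups.values())               # one-way ANOVA
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey's post hoc test
```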
Results and Discussion
Behavioral studies

Body weight: A significant decrease (p < 0.01) in body weight was observed in stress-induced rats compared to unstressed rats from day 10 until the completion of the study. UltraSol CurcuWin treatment significantly ameliorated these changes in comparison to vehicle-treated stressed rats [day 10 (p < 0.05), 15 (p < 0.01), 20 (p < 0.01) and 22 (p < 0.01)]. The values were found to be comparable with Curcumin Powder 95%. The drop in body weight on day 22 in all groups is due to fasting. The results are given in Table 2 and Figure 1.

Open field exploratory behavior: A significant increase (p < 0.01) in immobility period and decreases in ambulation (p < 0.01), rearing (p < 0.01) and grooming (p < 0.05) were observed in vehicle-treated stressed rats compared to unstressed rats. Low- and high-dose UltraSol CurcuWin treatment significantly and dose-dependently increased (p < 0.05 and 0.01, respectively) ambulatory behavior in comparison to vehicle-treated stressed rats. However, treatment with Curcumin Powder 95% and low- and high-dose UltraSol CurcuWin produced a non-significant decrease in immobility period and increases in rearing and grooming behavior compared to the vehicle-treated stressed rats. The results are given in Table 3; the elevated plus maze results are given in Table 4.
Biochemical studies
Reduced glutathione: Reduced glutathione content was found to be significantly decreased in brain (p < 0.01), heart (p < 0.01) and liver (p < 0.05) tissues of stressed rats compared to unstressed rats. Treatment with UltraSol CurcuWin significantly increased reduced glutathione content in brain [p < 0.05 at both the low and high dose] and heart [p < 0.01 and 0.05 at the low and high dose, respectively] tissues compared to vehicle-treated stressed rats. However, a non-significant increase in liver glutathione content was observed in UltraSol CurcuWin-treated rats in comparison to vehicle-treated stressed rats. The values were found to be comparable with Curcumin Powder 95%. The results are given in Table 5.
Lipid peroxide (LPO):
A significant increase in LPO content was observed in brain, heart, liver and kidney (p<0.05, 0.01, 0.01 and 0.01, respectively) tissues of stressed rats when compared to unstressed rats. In comparison to vehicle-treated stressed rats, UltraSol CurcuWin treatment significantly decreased these alterations in heart [p<0.01 for UltraSol CurcuWin low and high doses], liver [p<0.05 and 0.01 for UltraSol CurcuWin low and high doses, respectively] and kidney [p<0.05 and 0.01 for UltraSol CurcuWin high and low doses, respectively] tissues. The protective effect of UltraSol CurcuWin was found to be better than Curcumin Powder 95% in heart, liver and kidney tissues. The results are given in (Table 6) & (Figure 4).

Superoxide dismutase (SOD) activity: Superoxide dismutase activity was found to be significantly decreased (p<0.01) in brain, heart, liver and kidney tissues of stressed rats when compared to unstressed rats. In comparison to vehicle-treated stressed rats, UltraSol CurcuWin treatment significantly increased SOD activity in brain [p<0.05 for UltraSol CurcuWin low and high doses], heart [p<0.05 for UltraSol CurcuWin high dose] and kidney [p<0.01 for UltraSol CurcuWin low and high doses] tissues. However, a non-significant increase in SOD activity was observed in liver tissues with UltraSol CurcuWin treatment when compared to vehicle-treated stressed rats. Further, UltraSol CurcuWin increased SOD activity better than Curcumin Powder 95% in brain and kidney tissues. The results are given in (Table 8) & (Figure 6).
Cortisol: A significant increase (p<0.01) in plasma cortisol level was observed in stressed rats when compared to unstressed rats. High-dose OAHT (B) treatment significantly decreased (p<0.01) the cortisol level in comparison to vehicle-treated stressed rats. The values were found to be comparable with those of the reference drug, OAHT (A), and OAHT (B) decreased plasma cortisol levels better than OAHT (A) in stressed animals. The results are given in (Table 9) & (Figure 7). Total protein: The levels of total protein quantified using the Lowry method [75] indicated no significant differences in the organs of treated and untreated groups. This indicates that turmeric is safe for administration and that DNA is not damaged and protein synthesis is not impaired by the administration of curcumin and/or by the presence of stress.
Histopathology Studies
Histopathological examination (Figure 8) of the three organs (hypothalamus, pituitary and adrenal glands) from the stressed rats and from the Curcumin Powder 95% and UltraSol CurcuWin low- and high-dose treated rats revealed no stress-related or treatment-related changes; the tissues remained apparently normal in comparison to the unstressed rats. The absence of pathological alterations may be due to the low intensity and short duration of the stress administered to the animals.
Summary and Conclusion
Each cell in the human body maintains a condition of homeostasis between oxidant and antioxidant species [76]. Up to 1-3% of the pulmonary intake of oxygen by humans is converted into ROS [77]. Under conditions of normal metabolism, the continuous formation of ROS and other free radicals is important for normal physiological functions such as the generation of ATP, various catabolic and anabolic processes, and the accompanying cellular redox cycles. However, stress conditions are known to cause the excessive release of free radicals due to endogenous biological or exogenous environmental factors, such as chemical exposure, pollution, or radiation. The overall physiological impact of these factors and the adaptation ability of the body determine the variations in growth, development, productivity, and health status of the animals [78][79][80]. Strong and sustained exposure to stress [79,81,82] may result in a higher negative energy balance and may ultimately lead to a reduction in adaptation mechanisms, an increase in susceptibility to infection by pathogens, a decline in productivity, and finally a huge economic loss [79,83,84]. ROS can attack the lipids of cell membranes and the DNA and protein content of cells, with lipid peroxidation of cellular membranes, calcium influx, and mitochondrial swelling and lysis [85,86].
In several studies, curcumin showed markedly poor bioavailability. While pharmacological actions are usually dose dependent, efficacy can also be impaired by poor absorption and distribution characteristics. Our test formulation of curcumin aims to increase bioavailability and thus the overall pharmacological effect. In several animal models curcumin has been demonstrated to exert potent anti-inflammatory, anti-tumor and hypolipidemic properties. In view of these properties, we investigated the effectiveness of curcumin as an antioxidant under stress conditions [87,88].
Stress can be acute, episodic or chronic depending on its occurrence, duration and treatment approaches. Symptoms such as emotional distress, anger or irritability and depression are most commonly attributed to acute stress. Episodic stress is mostly due to disorderliness and lack of proper planning, leading to chaos and misery. Chronic stress arises from traumatic, prolonged experiences and can end in suicide, violence, heart attack, stroke and, perhaps, even cancer. Keeping in view the various types of stress and their impact on physiological functions, we designed the protocol to monitor behavioral, biochemical and histological effects. From (Table 2), it is clearly evident that animals subjected to stress eventually demonstrated a loss of body weight. In comparison to the unstressed group, the group treated with Curcumin powder showed relatively less body weight gain. Animals treated with our curcumin test formulation displayed significant and comparable levels of recovery from stress. From the behavioral studies (Tables 3 & 4) it can be inferred that the curcumin test formulation ameliorated anxiety and depressive behavior and thus curcumin can act as a potent adaptogen.
Curcumin has strong free radical scavenging activity, and it is evident from this study that it can protect biological systems against oxidative stress. Comparative analysis of the curcumin formulations with Curcumin powder revealed that the content of reduced glutathione did not vary significantly in the brain. In the heart and kidney, the amount of reduced glutathione was relatively low in the animals treated with the curcumin test formulation. In comparison with the animals treated with Curcumin powder, there was a markedly lower level of reduced glutathione in the liver of animals treated with the test formulation. This can be attributed to the hepatic metabolism of curcumin.
Comparison of the LPO content indicated that the curcumin test formulation was highly effective in lowering LPO in the heart and liver. The LPO content in the brain of rats treated with either the test formulation or the powder did not show a significant difference. Except in the liver, catalase activity was significantly higher in the animals treated with the test formulation than in the animals treated with Curcumin powder. The animals treated with the curcumin test formulation also showed greater SOD activity than those treated with Curcumin powder.
In response to stress, cortisol is released into the body as a homeostatic mechanism. A comparison of the cortisol levels in the positive control and treated groups indicated that curcumin is effective in reducing cortisol levels. There was no significant difference between the cortisol levels of animals treated with the high dose of the test formulation and those of the normal control group. Therefore, curcumin has a potential role in endocrine function and the ability to alleviate stress-induced changes. Histological studies indicated no stress-related or treatment-related changes; the tissues remained apparently normal in comparison with the unstressed rats.
As expected, the stress-induced alterations in biochemical and histological markers were substantially attenuated by treatment. Moreover, our formulation proved to be effective even under chronic stress conditions. Therefore, curcumin can act as a potent antioxidant and as an adaptogen.
Gender as a moderating variable in online misinformation acceptance during COVID-19
Misinformation remained a critical consideration during the COVID-19 pandemic and further cultivated fears, leading to strong unrest among the public globally. This study clarifies certain misconceptions related to the pandemic by investigating whether factors such as altruism, entertainment, information-sharing, information-seeking, and comprehensibility influence the acceptance and sharing of COVID-19 misinformation in the UAE culture, with gender as a moderating factor. An online (Google) survey was administered to a sample of 200 university students and analyzed using PLS-SEM software to determine the effects of the constructed factors. The findings indicated that the entertainment, information-sharing, and information-seeking factors influence the sharing of COVID-19 misinformation, while comprehensibility influences the acceptance of misinformation. Interestingly, gender was found to have no impact on any of the constructed factors, suggesting that other moderating factors (e.g., age) need to be considered in future research. Generally, online users need to learn how to verify the online information that they receive and share on social media, especially regarding health concerns.
Introduction
The fabrication of information for either political or financial gain goes back deep in the history of human communication. Rosetti and Matthews [1] sketched a timeline of the "information disorder" that starts with a propaganda campaign waged by Octavian to discredit Mark Antony in 44 BCE by claiming that Antony belonged to Egypt, not Rome. Recently, information communication technology (ICT), including social media, has greatly facilitated the dissemination of fake news [2,3], making it highly prevalent and turning it into a global concern [4][5][6].

Social media users are constantly exposed to a hard-to-control tide of information that originates primarily from independent and amateur content creators. Such users can spread this content and multiply its reach by sharing and liking it. This repetitious act of sharing unverified content turns social media into venues for propagating false information and fake news [7]. Social media are also a powerful tool for the spread of copious amounts of uncensored media [8], which increases the spread of misinformation and manipulates the public's worldview [9][10][11].

Duffy et al. [12] indicate that fake news is intentionally created media that mimics legitimate news and is portrayed deceptively to make readers perceive it as legitimate, and it has become more prevalent in the digital world. Misinformation, by contrast, is false information that is spread regardless of whether there is intent to mislead. The latter implies that some officials, media producers, and citizens spread incorrect or misleading information to a large audience in order to further their goals [7]. The spread of misinformation during the COVID-19 pandemic has affected almost every aspect of our lives. Several reports concerning COVID-19 were proliferating, making it difficult to differentiate between real and fake news [13]. As the worldwide quest for a cure for COVID-19 proceeded, the spread of misinformation on social media became worse, undermining the efforts made by governments and healthcare professionals worldwide [14].
It is not clear, however, why users create and share misinformation during crises [15], which calls attention to examining and understanding the origins and causes of such behavior and its exponential rise on social media. With the outbreak of COVID-19, misinformation started to circulate vehemently online, which was termed an infodemic by the World Health Organization [16]. According to Awan et al. [17], the sharing of COVID-19-related misinformation was at its peak in the early months of the pandemic. For instance, Li et al. [18] reported that about one-quarter of the most-viewed YouTube videos on COVID-19 presented false information. A few studies focused on examining the reasons why users spread misinformation during the COVID-19 pandemic [19,20]. Others identified specific motivations for spreading misinformation during the pandemic, such as entertainment and self-promotion [21]; altruism, or sharing without expecting anything in return [19]; and promotion of one's own perceptions rather than seeking scientific knowledge [22].

This active sharing of false health news on social media during the pandemic aroused both public and governmental concerns. Misinformation about medical conditions has been found to pose possible harm to public health. Evidence from the past has revealed that misinformation about medical conditions is common. Nevertheless, social media, which allow users to share information freely, have accelerated the spread of misinformation in the health ecosystem [15]. The dissemination of misinformation has made the authorities aware of the harm that it can do to a nation's political equilibrium. For example, the UAE government, like some other countries, was aware of the influence of social media on its citizens, perhaps because of the 2011 Arab uprisings.

Thus, using the theories of electronic word-of-mouth (eWOM) and uses and gratifications, we created a hybrid holistic model to understand the predictors of sharing misinformation about COVID-19 on social media. We also incorporated an element of altruism to expand this hybrid model. To comprehend such an impact, we contend that findings from previous research on news-sharing can be extended to examine the dissemination of misinformation [23]. Given the scarcity of literature attempting to explain the spread of misinformation, this study builds on the work of Ma and Chan [23] and Thompson et al. [24], who studied and evaluated fake news sharing using those news-sharing factors. More significantly, the study aims to examine the impact of gender in a traditionally patriarchal culture on the acceptance of misinformation among social media users, in order to understand the importance of raising consciousness and attention about the impact of gender on the sharing and acceptance of misinformation on social media.
Hypotheses and the Research model
The models of technology acceptance suggest that specific attributes of innovations predict their adoptability in any social system. This study combines the literature on the models of technology acceptance and the electronic word-of-mouth model to identify the major factors underlying the acceptability of misinformation (Fig. 1). Accordingly, the primary variable is the users' perceptions of the uses of technology that determine its acceptance (Davis, 1989). The adoption of technology has already been addressed in a variety of contexts, including e-learning [25] and e-governments [26,27]. However, the sharing and acceptance of misinformation on social media has not received much attention in the same literature.

The eWOM model addresses the opinions, ideas, experiences, and preferences shared online by consumers. This user-generated content may provide readers with either real or false news. Several studies analyzed specific aspects of electronic word-of-mouth publications, e.g., message comprehensibility and users' willingness to share different electronic word-of-mouth messages [28,29]. We also relied on the models of technology acceptance to assess why social media users accept online misinformation. Although the literature on technology adoption focuses more on technology than messaging, it adds value by providing methods for assessing and measuring the factors that determine users' acceptance of technology. Fig. 1 illustrates the factors that could influence the sharing and acceptance of misinformation based on the literature on the models of technology adoption and electronic word-of-mouth. The uses and gratifications theory (U&G) was also integrated into the model, as we postulated an association between altruism, entertainment, information-seeking, and information-sharing on social media and the sharing of COVID-19 misinformation.

The U&G theory suggests that people actively choose media to fulfil their needs [30]. For audience activity, the theory predicts how the media affect individuals according to the gratifications they seek from the services offered by a medium. Accordingly, an individual's motivation to gratify their needs influences their choice of a specific medium, how that medium is used, and how its information is interpreted [31]. The U&G concept has been applied in developing several research models [32]. For example, research shows gender-based differences in what individuals seek to gratify by using information and communication technology. Nasar et al. [33] reported that women used cellphones to satisfy more safety needs. Women also used the Internet more for interpersonal communication and social gratifications, while men used it more for leisure gratifications [34]. The current study expands and combines those models to investigate how gender influences the spread and acceptance of misinformation about COVID-19 on social media in the United Arab Emirates.
Comprehensibility
Sasseen et al. [35] reported that traffic from social media platforms to news sites accessed from the US increased by 57%. Almost a decade later, a survey conducted by the Pew Research Center (2021) showed that 48% of American adults get news from social media. New smart devices have facilitated users' access to, search for, and retrieval of news, which has created new patterns of news consumption in the past two decades. Newman et al. [36] reported that the use of online news in Britain started to level off in 2009. Whether users consume news on social media networks incidentally or intentionally, the patterns of news dissemination and consumption have changed significantly worldwide, especially among the tech-savvy and digitally oriented audience.

During COVID-19, individuals sought more information to cope with the uncertainty that surrounded the spread, causes, and prevention of the viral infection. This led them to rely more heavily on social media to gather and share information about the pandemic [37,38]. Slovenian users were found to process more information about the pandemic on social media and to acquire more knowledge the more they believed that social media news comprised "all essential facts about COVID-19" [39]. Ahmed and Rasul [40] found that those who relied on social media more frequently for news were more prone to believe COVID-19 misinformation and share it on social media.
Altruism
Altruism is defined as "the voluntary dissemination of information without expectation of reward" [41]. Sharing content with others without expecting anything in return is an example of altruistic behavior. An altruistic social media user voluntarily seeks ways to help others by sharing self-perceived beneficial information and news with those in need without asking for anything in return. Research about the sharing of knowledge, information, and news online has demonstrated several aspects of online altruistic actions. For example, research has found that online altruism is positively connected with the voluntary collection and sharing of information. Such social media altruists seek to enlighten others and foster social cohesion without expecting money or rewards [23,41].

The act of online altruism might come with a price, however. Duffy et al. [12] indicated that people who share information to help others do not always check the authenticity of that information or make sure it does not contain inappropriate safety advice. Therefore, it is logical to infer that such acts of informative online kindness, especially during times of crisis, facilitate the spread of fake news. Those who are more altruistic in nature may have spread misinformation about COVID-19 even though their motivation was to provide guidance and help others.
Entertainment
Using social media as a form of recreation and therapy has become common practice. People can satisfy their craving for entertainment when they use social media to kill time, engage in pleasurable pursuits, and escape from their mundane lives. Studies have found that Facebook is primarily used for fun and recreation [42]. Kim et al. [43] found that the "like" button on Facebook is used to express opinions on a wide range of topics. They concluded that Facebook usage is positively associated with leisure time. However, some found that people generally do not like to share news online, suggesting that such behavior is unrelated to using social media for personal amusement [44].

Social media users might share news to alleviate boredom and fill their leisure time. Choi [45] reported that American adults "get enjoyment and feel pleasure from expressing their thoughts about news content" (p. 254). Social media users tend to fill their leisure time finding and sharing useful information with others [46] and might enjoy sharing information on social media sites because they want to share it with others in a social setting [47]. However, the literature does not show a consistent impact of the entertainment motivation on sharing news on social media. Baek et al. [42] did not find a significant correlation between sharing news on Facebook and the pass-time gratification. Thompson et al. [24] also concluded that socializing and entertainment gratifications did not have a significant effect on sharing news online. We expect, however, an association between entertainment gratification and sharing news on social media because of the curfew imposed by the authorities in the UAE during the pandemic. People had to involuntarily deal with social isolation and seek information about the virus to dispel insecurities and seek guidance. This could encourage some individuals to upload unverified information to pass time and seek entertainment.
Information-sharing
People are motivated to share information for different purposes [24]. The history of research into how information is shared is extensive [47]. McGonagle [5] found that the use of social media accelerated the spread of fake news online. Tandoc Jr. et al. [48] suggested that the ease with which news can now be shared via social media is to blame, because anyone can take part in the creation and dissemination of information. Chen et al. [49] stated that the pleasure that information-sharers derive from their actions is significantly correlated with the prevalence of misinformation. People are more likely to share misinformation for the sake of education than for the sake of pleasure. Due to the sheer volume of COVID-19-related content already available on social media, we suggest that the spread of misinformation is inevitable if users do not take the time to verify the accuracy of the information before sharing it. During the pandemic, when everyone wanted to be a reporter, double-checking information before disseminating it was highly unlikely.
Information-seeking
Seeking information is the process of trying to obtain relevant and timely information through various sources. During COVID-19, news articles shared on social media platforms were among the sources used by users seeking news and information about the pandemic. Lampos et al. [14] stated that as the number of COVID-19 cases worldwide increased, the number of misleading or false stories also increased. This implies that many users turned to the Internet for advice on combating the virus, despite the spread of misinformation. Online users who tended to seek information about the pandemic experienced emotional distress because of misinformation and the exaggeration of risks [50]. This emotional distress, combined with individuals' natural tendency to reduce ambiguity during crises, may have driven them to seek and share more information on social media. Research also shows that the sharing of news on social media and the search for more information go hand in hand [49]. This could explain why social media users shared such an amount of unverified information during the pandemic.
Acceptance and sharing of Covid-19 misinformation
The technology acceptance model (TAM) identifies the characteristics that enhance the adoptability of technology in different contexts. The model differentiates between individuals' intentions to use technology and their actual adoption of it [51]. Factors such as the perceived usefulness of technology and its perceived efficacy are expected to facilitate its acceptance among prospective users. Applying the same perspective to messaging, this study suggests that social media users would share with their social networks the electronic word-of-mouth about COVID-19 that they accept and perceive as beneficial. The study therefore assumed the following hypotheses:
H1. The perceived comprehensibility of social media news has a significant impact on users' acceptance of COVID-19 misinformation.

H2. Altruism gratification has a significant impact on sharing COVID-19 misinformation.

H3. Entertainment gratification has a significant effect on the spread of COVID-19 misinformation.

H4. Information-sharing gratification has a significant impact on sharing COVID-19 misinformation.

H5. Information-seeking gratification has a significant impact on sharing misinformation about COVID-19.

H6. Acceptance of misinformation about COVID-19 is associated with sharing news on social media.
Gender as a moderator
Previous research has found a gender gap in news consumption, with men consuming more news than women [52][53][54]. The same gender gap in the use of new communication technologies has also been detected [55,56]. This gender gap is attributed to the widespread belief that technology is male-dominated and that men are more proficient users of technology [56]; social and cultural norms also allow men more access to technology [54,57].

However, with new online communication technology, women are as likely as men to have easy access to and greater anonymity on social networks. Aman and Jayroe [58] argued that the anonymity provided by the Internet has empowered women in the patriarchal society of Saudi Arabia. Balfaqeeh [59] suggested that online users are less likely to be held accountable for their activities when they hide behind a mask of anonymity, such as when they engage in trolling, fury, stalking, or deception. Accordingly, more than half of Saudi Arabia's bloggers are women, and they write mostly about issues affecting women. Celebrity women bloggers in Saudi Arabia, such as Farah's Sowaleep, Saudi Eve, and Thought in the Kingdom of Lunacy, challenge the authority of men [7]. Thanks to a petition started by women campaigners online, Saudi women can now legally drive.

Studies have found that the gender gap in adopting new communication technology is narrowing as online communication technology has become more affordable and accessible globally [60,61]. Research also shows that women tend to use social media more frequently, and they engage in discussing family activity and maintaining their relationships on social media platforms [62]. This accumulated literature about gender differences in consuming news, using technology, and seeking gratifications through social media suggests that gender would moderate the acceptance and sharing of misinformation about COVID-19 (Fig. 1).
Participants
After the UAE University Students Research Evaluation Committee (Ref: ERSC_2022_703) granted approval to conduct this study in the fall of 2021-2022, we attached a consent form and information sheet to our questionnaire and shared it with students, who were assured of their anonymity. Respondents were also instructed that they were free to withdraw at any time, and they did not receive any remuneration for their participation. The survey was emailed to respondents and shared on the university's Facebook and WhatsApp groups to increase response rates (Table 1). Out of 200 responses, 176 completed questionnaires were received (an 88% completion rate).
Pilot study
Twenty students were randomly selected from the target population for this pilot study to double-check the items' wording and length. The findings were incorporated into the main study, and additional data were also gathered from the pilot study. The 19-item instrument achieved a satisfactory level of validity and discriminant reliability. The Cronbach's alpha test was applied to assess internal reliability. As shown in Table 3, the reliability coefficients for the items measuring each construct were greater than 0.70, indicating a satisfactory level of reliability.
Table 1
The sample demographic characteristics.
Data analysis
Data were analyzed by applying a two-step assessment approach that incorporates the structural model and the measurement model, using the partial least squares structural equation modelling method (PLS-SEM). Given the research aim (prediction), PLS-SEM is considered an appropriate approach. It has the ability to model composites and factors, making it a formidable statistical tool for research on new technology. It was also chosen because it can easily manage exploratory research with complex models [67]. It is also an ideal option for conducting research that aims to advance an existing theory. PLS-SEM analyzes the entire model rather than breaking it up into pieces [68]. Hair et al. [67] clearly indicate that the PLS-SEM method is very appealing in social sciences research, as it enables researchers to estimate complex models with many constructs, indicator variables, and structural paths without imposing distributional assumptions on the data.
Table 2
The adapted items and their sources.
Convergent validity
The Cronbach's alpha, which is used to assess construct reliability, ranges in value from 0.753 to 0.911, exceeding the threshold of 0.7 (Table 3). Hair et al. [69] indicate that the measurement model's construct validity should be evaluated through convergent and discriminant validity as well as construct reliability (comprising composite reliability, Dijkstra-Henseler's rho (ρA), and Cronbach's alpha). The findings demonstrated that the CR has values between 0.784 and 0.927, which are higher than the recommended value of 0.7 [70]. Alternatively, the Dijkstra-Henseler's rho (ρA) reliability coefficient can be used to assess and report construct reliability [71]. The reliability coefficient ρA, like Cronbach's alpha and composite reliability, should show values of 0.7 in exploratory research and values greater than 0.80 or 0.90 at more advanced stages. The reliability coefficient ρA of each measurement construct is higher than 0.70.

These findings support construct reliability and, in conclusion, all the constructs were adequately error-free. The results also show that each factor loading exceeded the proposed value of 0.7. To assess convergent validity, the average variance extracted (AVE) and factor loadings must be examined [69,72]. As shown in Table 3, the AVE values fall between 0.533 and 0.810, which is greater than the 0.5 threshold level. Based on these findings, convergent validity was effectively established for all constructs.
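As an illustration of these reliability and convergent-validity checks, the sketch below computes Cronbach's alpha, composite reliability, and AVE. The loadings and item responses are hypothetical placeholders rather than the study's items; the formulas are the standard ones referenced above.

```python
# Minimal sketch of the convergent-validity metrics discussed above, assuming
# standardized outer loadings and raw item responses; all numbers are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix for one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings."""
    return (loadings ** 2).mean()

# Hypothetical outer loadings for one construct (e.g., "information-seeking").
loadings = np.array([0.78, 0.81, 0.74, 0.69])
print("CR  =", round(composite_reliability(loadings), 3))       # should exceed 0.70
print("AVE =", round(average_variance_extracted(loadings), 3))  # should exceed 0.50

# Hypothetical 5-point Likert responses for the same construct (176 respondents).
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(176, 1))
items = np.clip(base + rng.integers(-1, 2, size=(176, 4)), 1, 5)
print("alpha =", round(cronbach_alpha(items.astype(float)), 3))  # should exceed 0.70
```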
Discriminant validity
Table 4 shows that the results of the Fornell-Larcker criterion satisfy the prerequisites of validity testing, because the square root of each construct's AVE is higher than its correlations with all other constructs [69,73,74]. The Fornell-Larcker criterion and the Heterotrait-Monotrait ratio (HTMT) are the two parameters advised for assessing discriminant validity [75]. The HTMT results demonstrate that the value for each construct pair remains below the threshold of 0.85 [70]. These findings establish discriminant validity. Accordingly, no issues were found while assessing the model's validity and reliability.
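The Fornell-Larcker check itself reduces to a simple comparison: the square root of each construct's AVE must exceed that construct's correlations with every other construct. The construct labels, AVE values, and correlation matrix below are hypothetical stand-ins, not the values reported in Table 4.

```python
# Sketch of the Fornell-Larcker discriminant-validity check with placeholder values.
import numpy as np

constructs = ["COMP", "ALT", "ENT", "SHR", "SEEK"]
ave = np.array([0.61, 0.72, 0.58, 0.66, 0.70])   # hypothetical AVEs per construct
corr = np.array([                                 # hypothetical latent correlations
    [1.00, 0.32, 0.28, 0.41, 0.45],
    [0.32, 1.00, 0.35, 0.30, 0.27],
    [0.28, 0.35, 1.00, 0.38, 0.33],
    [0.41, 0.30, 0.38, 1.00, 0.52],
    [0.45, 0.27, 0.33, 0.52, 1.00],
])

sqrt_ave = np.sqrt(ave)
for i, name in enumerate(constructs):
    others = np.delete(corr[i], i)                # correlations with the other constructs
    passed = sqrt_ave[i] > np.abs(others).max()
    print(f"{name}: sqrt(AVE) = {sqrt_ave[i]:.2f}, "
          f"max |corr| = {np.abs(others).max():.2f}, pass = {passed}")
```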
Hypotheses testing and coefficient of determination
The explained variance (R² value) for each endogenous construct and the significance of each path in the research model were evaluated. Fig. 2 and Table 5 show the standardized path coefficients and their significance. The combined assessment of the research hypotheses was conducted using structural equation modelling (SEM).

Four out of the six research hypotheses were substantiated by the data (Table 5). All the constructs from earlier research were confirmed in the model (comprehensibility, altruism, entertainment, information-sharing, information-seeking). The R² values for accepting and sharing misinformation on COVID-19 ranged from 0.378 to 0.395, indicating that these constructs have moderate predictive power [76]. The statistical testing accordingly supported hypotheses H1, H3, H4, and H5, but not H2 and H6.

Finally, the subsequent analysis of the moderating factor (gender) on the comprehensibility, altruism, entertainment, information-sharing, and information-seeking constructs is reported in Table 6. A moderator effect describes how a factor affects the direction or strength of the correlation between the dependent and independent variables. The results demonstrated that none of the five moderation hypotheses were supported, indicating that gender did not influence the associations involving the five constructs.
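The logic of this moderator test can be illustrated with a plain-regression analogue: if gender moderated the effect of, say, information-seeking on misinformation sharing, the interaction term would be significant. The sketch below uses simulated data and ordinary least squares; it is a simplified stand-in, not the PLS-SEM moderation procedure actually used in the study.

```python
# Simplified moderation illustration: test whether gender changes the slope of
# information-seeking on sharing via an interaction term. Data are simulated so
# that no moderation exists, mirroring the non-significant result in Table 6.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 176
df = pd.DataFrame({
    "info_seeking": rng.normal(3.5, 0.8, n),   # hypothetical 5-point scale scores
    "gender": rng.integers(0, 2, n),            # 0 = male, 1 = female
})
# Outcome depends on information-seeking but the slope is the same for both genders.
df["sharing"] = 1.0 + 0.6 * df["info_seeking"] + rng.normal(0, 0.7, n)

model = smf.ols("sharing ~ info_seeking * gender", data=df).fit()
print(model.summary().tables[1])   # the interaction row should be non-significant
```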
Table 4
The discriminant validity tests.
Discussion
Our research on COVID-19 focused on how different motivations for sharing and seeking information contributed to the spread of misinformation. A model based on the uses and gratifications and electronic word-of-mouth theories was developed to predict and explain the acceptance and sharing of COVID-19 misinformation through social media. The findings showed that those who perceived social media as a sufficient and comprehensive source of news tended to accept COVID-19 misinformation on social media. Excessive exposure to information and news about COVID-19 on social media might have caused information overload for users, leading them to be less motivated to verify it [77]. Users who primarily relied on social media for news about the pandemic were more receptive to unverified information because it was, to them, the best and most up-to-date information available at a time filled with ambiguity and concern over public health. Over time, they received an overabundance of news about the pandemic, leading them to develop unique prior knowledge about it. According to Kožuh and Čakš [39], higher prior knowledge about COVID-19 leads to a higher level of trust in social media news. This trust might also explain the higher levels of acceptance of misinformation among those who heavily relied on social media for news. The findings also showed that social media users in the UAE circulated COVID-19 misinformation out of a desire for entertainment. Not only did those social media users share news to inform their social networks about the pandemic, but they were also motivated by their need for entertainment. Research indicates that using social media habitually or as a hobby leads to fatigue, which in turn increases the likelihood that users may share false information to dispel such fatigue and boredom [44]. During the pandemic, users were concerned about their health, education, and employment. This multi-faceted concern, coupled with the curfew imposed on them, led them to consume and share news from social media to entertain themselves and stay informed. This might have also been enhanced by the widespread sharing of humorous content and memes during the pandemic, which were found to provide a coping mechanism for the pandemic, especially for users suffering from anxiety [78,79]. This also replicates the correlation found between the pass-time gratification and users' intentions to share news on Facebook [24].
Our assumption is consistent with research showing that sharing news on social media helps maintain peace and harmony in people's lives. In the UAE, people tend to help one another when they hear of a potentially dangerous situation, regardless of the veracity of the report. People do this because they care about others and are motivated by the potential emotional impact and significance of the news for others [12]. During this time of heightened anxiety (the pandemic), people may also have been more likely to distribute misinformation by sharing unproven preventative measures. This supports H4, which predicted that information-sharing is another factor explaining the spread of COVID-19 misinformation, and conforms with the results of Lampos et al. [14] indicating an increase in misinformation and news headlines alongside the worldwide increase in COVID-19 cases.

Our research showed that curiosity about COVID-19 contributed to the proliferation of misinformation about the disease. This factor was the fourth most effective element in circulating misinformation during COVID-19. This finding supports H5, suggesting that many people rely on possibly false information found online about how to deal with the virus. According to Ma et al. [23], the primary benefit of social media use is the enhanced access to relevant information. The need to know everything drives people to read and share misinformation on social media, as research suggests [12].
We expected that UAE social media users would share misinformation about the pandemic for altruistic reasons, but the data did not support this hypothesis. Although previous research shows that online users are driven by a desire to help others by sharing information voluntarily [23,41], our findings did not show a similar impact for altruism. This might be explained by the collectivistic UAE culture, which values modesty and low self-enhancement. The UAE is considered a collectivist society, where harmony is valued more than competition. In such a society, people are less likely to be vocal about how they help others, as modesty best explains their low self-enhancement [80].

Furthermore, the empirical examination of the impact of gender on misinformation acceptance in the UAE is still lacking in the Arab region, although the impact of gender on spreading misinformation has received considerable attention in the West (e.g., the 2018 US election). We found that combining these models produces a robust assessment of the prevalence of misinformation about COVID-19 and its correlation with other factors.

We took a quantitative approach to investigate the relationships between potential factors and the consumption of social media misinformation, first using a web-based survey and then confirmatory factor analysis via structural equation modelling with SmartPLS. The moderating variable (gender) contributed minimally to the spread of misinformation about COVID-19. The results indicated that gender plays a relatively minor role in determining whether COVID-19 misinformation is accepted, suggesting that the sharing of misinformation about the pandemic is not predicted by users' gender.
Theoretical implication
Our results suggest that people's beliefs about spreading COVID-19 misinformation revolve around the four factors of comprehensibility, entertainment, information-sharing, and information-seeking, but not altruism. Several practical implications follow. Our analysis found that information-seeking was the most significant factor in predicting the spread of COVID-19 misinformation. Users are encouraged to be cautious when sharing online information, especially when such information includes instructions for keeping oneself and others safe. Our findings suggest that respondents were spreading COVID-19 misinformation, which has the potential to spread misconceptions and endanger people's health. This caution is necessary because some users may opt for treatments they read about online despite the lack of medical evidence for their effectiveness [44]. Whether or not users realized it, their consumption and dissemination of COVID-19-related information contributed directly to the spread of misinformation and its detrimental impacts on society.

Our research highlighted several factors that led users to spread COVID-19 misinformation on social media. While other research has highlighted the importance of being aware of misinformation, this study confirms its function in mitigating the negative consequences of spreading misinformation during the pandemic. Furthermore, healthcare professionals and governments worldwide, and in particular in the UAE, must communicate with the public and control the flow of information during such epidemics. It is crucial that authentic information be distributed through both online and offline channels. This will reduce the prevalence of Internet hoaxes claiming to offer effective therapies and prevention measures.
Mitigation suggestions based on the study
Based on the study's findings, some suggestions can be provided on actions to mitigate the spread of fake news. These can be directed toward policy makers, governments, and other relevant stakeholders:

(1) Increase media literacy by promoting media literacy programmes and initiatives that educate online users about the importance of verifying information before accepting and sharing it, e.g., providing resources and tools to help users critically evaluate news and information on social media platforms.

(2) Provide accurate and up-to-date information: local governments, health organisations, and reliable news sources should actively disseminate accurate and verified information about any issue through official channels. By ensuring the availability of reliable information, users are less likely to rely solely on social media for news.

(3) Foster critical thinking by encouraging users to question and critically evaluate the information they encounter on social media. Promote a culture of fact-checking and encourage users to seek information from multiple reliable sources before accepting and sharing it. For example, raise awareness about the importance of verifying information before sharing it, and encourage users to fact-check claims, look for corroborating evidence, and consult reliable sources before spreading any information.

(4) Encourage responsible sharing by reminding users to consider the potential impact of their actions when sharing information. Encourage them to think critically about the potential consequences of spreading misinformation and to refrain from sharing unverified or sensationalised content.

(5) Collaborate with social media platforms by working with them to develop and implement measures that curb the spread of misinformation. This can include algorithms that prioritise reliable sources, warning labels on potentially misleading content, and penalties for repeat offenders.

Overall, it is important that these suggestions be implemented through a comprehensive and multifaceted approach involving various stakeholders, including local governments, social media platforms, news organisations, educators, and individuals themselves.
Limitations and future research
While we believe that our research adds to the existing body of literature, we also acknowledge that it has certain limits. To start with, we concentrated on the spread of COVID-19 misinformation while focusing on the UAE culture. The findings may, however, be generalized to other countries whose cultures are similar to that of the UAE; future research may choose to broaden this study's scope to a different setting. Second, while altruism gratification did not influence the spread of misinformation, future research may examine other aspects such as social media weariness, self-disclosure, and online trust [39]. Third, we could not definitively show that gender reduces the effect of providing misinformation; therefore, others may try to replicate our study by including additional demographic factors (such as age and wealth) in their own models. Our independent variables were shown to have sufficient and significant predictive power, even though our sample was small. Future research can increase the sample size to be more representative of the population.
Conclusion
Among the constructs driving the spread of misinformation during COVID-19 (comprehensibility, entertainment, information-sharing, and information-seeking), our analysis found that information-seeking was the strongest predictor among the surveyed students. This study did not, however, find any conclusive evidence that altruism was significantly associated with the spread of misinformation. Given our findings and the escalating health concerns caused by the spread of misinformation during COVID-19, users need to verify the accuracy of the material that they read and (re)post on social media. To do this, one must verify the source's credibility [39], read beyond the story's headlines, investigate further to confirm the story's accuracy (e.g., checking its dates, authorship, data, and statistics), avoid falling for fabricated images (e.g., checking their authenticity), seek alternative viewpoints (e.g., consulting other sources), and if all else fails, consult experts. Based on our research, gender is neither a direct nor an indirect factor in the adoption or spread of misinformation regarding COVID-19.
For the Ethics statement, the ethics approval number is (ERSC_2022_703).
Table 3
Convergent validity results which assure acceptable values.
Table 5
Path analyses and testing of hypotheses.
Table 6
Moderator analysis results.
HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning
In this work we propose a HyperTransformer, a Transformer-based model for supervised and semi-supervised few-shot learning that generates weights of a convolutional neural network (CNN) directly from support samples. Since the dependence of a small generated CNN model on a specific task is encoded by a high-capacity Transformer model, we effectively decouple the complexity of the large task space from the complexity of individual tasks. Our method is particularly effective for small target CNN architectures where learning a fixed universal task-independent embedding is not optimal and better performance is attained when the information about the task can modulate all model parameters. For larger models we discover that generating the last layer alone allows us to produce competitive or better results than those obtained with state-of-the-art methods while being end-to-end differentiable.
Introduction
In few-shot learning, a conventional machine learning paradigm of fitting a parametric model to training data is taken to a limit of extreme data scarcity where entire categories are introduced with just one or few examples. A generic approach to solving this problem uses training data to identify parameters φ of a solver a_φ that, given a small batch of examples for a particular task (called a support set), can solve this task on unseen data (called a query set).

One broad family of few-shot image classification methods, frequently referred to as metric-based learning, relies on pretraining an embedding e_φ(·) and then using some distance in the embedding space to label query samples based on their closeness to known labeled support samples. These methods proved effective on numerous benchmarks (see Tian et al. (2020) for review and references), however the capabilities of the solver are limited by the capacity of the architecture itself, as these methods try to build a universal embedding function.
On the other hand, optimization-based methods such as the seminal MAML algorithm (Finn et al., 2017) can fine-tune the embedding e_φ by performing additional SGD updates on all parameters φ of the model producing it. This partially addresses the constraints of metric-based methods by learning a new embedding for each new task. However, in many of these methods, all the knowledge extracted during training on different tasks and describing the solver a_φ still has to "fit" into the same number of parameters as the model itself. Such a limitation becomes more severe as the target models get smaller, while the richness of the task set increases.
In this paper we propose a new few-shot learning approach that allows us to decouple the complexity of the task space from the complexity of individual tasks. The main idea is to use the Transformer model (Vaswani et al., 2017) that given a few-shot task episode, generates an entire inference model by producing all model weights in a single pass. This allows us to encode the intricacies of the available training data inside the Transformer model, while producing specialized tiny models for a given individual task. Reducing the size of the generated model and moving the computational overhead to the Transformer-based weight generator, we can lower the cost of the inference on new images. This can reduce the overall computation cost in cases where the tasks change infrequently and hence the weight generator is only used sporadically. Note that here we follow the inductive inference paradigm with test samples processed one-by-one (by the generated inference model) and do not target other settings like, for example, transductive inference that consider relationships between test samples.
We start by observing that the self-attention mechanism is well suited to be an underlying mechanism for a few-shot CNN weight generator. In contrast with earlier CNN-based (Zhao et al., 2020) or BiLSTM-based (Ravi & Larochelle, 2017) approaches, the vanilla Transformer model (i.e., without attention masking or positional encodings) is invariant to sample permutations and can handle unbalanced datasets with a varying number of samples per category. Furthermore, we demonstrate that a single-layer self-attention model can replicate a simplified gradient-descent-based learning algorithm. Using a Transformer to generate the logits layer on top of a conventionally end-to-end learned embedding, we achieve competitive results on several common few-shot learning benchmarks. For smaller generated CNN models, our approach shows significantly better performance than MAML++ (Antoniou et al., 2019) and RFS (Tian et al., 2020), while also closely matching the performance of many state-of-the-art methods for larger CNN models. Varying Transformer parameters, we demonstrate that this high performance can be attributed to the additional capacity of the Transformer model, which decouples its complexity from that of the generated CNN. While this additional capacity proves to be very advantageous for smaller generated models, larger CNNs can accommodate sufficiently complex representations and our approach does not provide a clear advantage compared to other methods in this case.
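As a toy illustration of this idea (and not the authors' released implementation), the sketch below uses a small Transformer encoder to turn a support set into the weights of a logits layer that then classifies query embeddings. The token construction, layer sizes, and stand-in embeddings are all illustrative assumptions.

```python
# Toy sketch of Transformer-based generation of a final logits layer from a support set.
# Sizes, the feature extractor, and the token layout are simplified stand-ins.
import torch
import torch.nn as nn

class LogitsLayerGenerator(nn.Module):
    def __init__(self, embed_dim=64, n_classes=5, d_model=128, n_layers=2):
        super().__init__()
        self.n_classes, self.embed_dim = n_classes, embed_dim
        # Support tokens = [sample embedding ; one-hot label]; weight tokens are learned queries.
        self.in_proj = nn.Linear(embed_dim + n_classes, d_model)
        self.weight_tokens = nn.Parameter(torch.randn(n_classes, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out_proj = nn.Linear(d_model, embed_dim)  # each weight token -> one row of W

    def forward(self, support_emb, support_labels):
        # support_emb: (n_support, embed_dim); support_labels: (n_support,) class indices
        onehot = torch.nn.functional.one_hot(support_labels, self.n_classes).float()
        tokens = self.in_proj(torch.cat([support_emb, onehot], dim=-1))
        seq = torch.cat([tokens, self.weight_tokens], dim=0).unsqueeze(0)  # (1, n_sup + n_cls, d)
        out = self.encoder(seq)[0, -self.n_classes:]      # read out the weight tokens
        return self.out_proj(out)                         # (n_classes, embed_dim) = logits layer W

# Usage: generate the layer from a 5-way 5-shot episode and classify query embeddings.
gen = LogitsLayerGenerator()
support_emb = torch.randn(25, 64)                         # stand-in CNN embeddings
support_labels = torch.arange(5).repeat_interleave(5)
W = gen(support_emb, support_labels)
query_emb = torch.randn(10, 64)
logits = query_emb @ W.t()                                # (10, 5); trained end-to-end on query loss
```

In this setup the weight generator, not the generated layer, carries the task-set knowledge, which is the decoupling of task-space complexity from per-task model size described above.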
We can additionally extend our method to support unlabeled samples by appending a special input token that encodes unknown classes to all unlabeled examples. In our experiments outlined in Section 5.3, we observe that adding unlabeled samples can significantly improve model performance. Interestingly, the full benefit of using additional data is only realized if the Transformers use two or more layers. This result is consistent with the basic mechanism described in Section 4.2, where we show that a Transformer model with at least two layers can encode a nearest-neighbor-style algorithm that associates unlabeled samples with similar labeled examples. In essence, by training the weight generator to produce CNN models with the best possible performance on a query set, we teach the Transformer to utilize unlabeled samples without having to manually introduce additional optimization objectives.

We also explore the capability of our approach to generate all weights of the CNN model, adjusting both the logits layer and all intermediate layers producing the sample embedding. We show that by generating all layers we can improve both the training and test accuracies of CNN models below a certain size. Above this model size threshold, however, generation of the logits layer alone on top of an episode-agnostic embedding appears to be sufficient for reaching peak performance (see Figure 3). This threshold is expected to depend on the variability and the complexity of the training tasks.
Another important advantage of our method is that it allows end-to-end learning without relying on complex nested gradient optimization and other meta-learning approaches where the number of unroll steps is large. Our optimization is done in a single loop of updates to the Transformer (and feature extractor) parameters. The code for the paper can be found at https://github.com/google-research/google-research/tree/master/hypertransformer.
Related work
Few-shot learning received a lot of attention from the deep learning community and while there are hundreds of few-shot learning methods, several common themes emerged in the past years. Here we outline several existing approaches and show how they relate to our method.
Metric-Based Learning. One family of approaches involves mapping input samples into an embedding space and then using some nearest neighbor algorithm to label query samples based on the distances from their embeddings to embeddings of labeled support samples. The metric used to compute the distances can either be the same for all tasks, or can be task-dependent. This family of methods includes, for example, such methods as Siamese networks (Koch et al., 2015), Matching Networks (Vinyals et al., 2016), Prototypical Networks (Snell et al., 2017), Relation Networks (Sung et al., 2018) and TADAM (Oreshkin et al., 2018). It has recently been argued (Tian et al., 2020) that methods based on building a powerful sample representation can frequently outperform numerous other approaches including many optimization-based methods. However, such approaches essentially amount to the "one-model solves all" approach and thus require larger models than needed to solve individual tasks.
Optimization-Based Learning. An alternative approach that can adapt the embedding to a new task is to incorporate optimization within the learning process. A variety of such methods are based on the approach called Model-Agnostic Meta-Learning, or MAML (Finn et al., 2017). The core idea of MAML is learning initial model parameters θ_0 that produce good models for each episode after being adjusted with one or more gradient descent updates minimizing the corresponding episode classification loss. This approach was later refined (Antoniou et al., 2019) and built upon, giving rise to Reptile (Nichol et al., 2018), LEO (Rusu et al., 2019) and others. One limitation of various MAML-inspired methods is that the knowledge about the set of training tasks T_train is distilled into parameters φ = θ_0 that have the same dimensionality as the model parameters. Therefore, for a very lightweight model f(x; θ) the capacity of the solver a_φ producing model weights from the support set is still limited by the size of θ. Methods that use parameterized preconditioners that otherwise do not impact the model f(x; θ) can alleviate this issue, but as with MAML, such methods can be difficult to train (Antoniou et al., 2019).
Weight Modulation and Generation. The idea of using a task specification to directly generate or modulate model weights has been previously explored in the generalized supervised learning context (Requeima et al., 2019; Ratzlaff & Li, 2019), few-shot learning (Guo & Cheung, 2020) and in specific language models (Pilault et al., 2021; Mahabadi et al., 2021; Tay et al., 2021; Ye & Ren, 2021). Some few-shot learning methods described above also employ this approach and use task-specific generation or modulation of the weights of the final classification model. For example, in LGM-Net (Li et al., 2019b) the matching network approach is used to generate a few layers on top of a task-agnostic embedding. Another approach, abbreviated as LEO (Rusu et al., 2019), utilized a similar weight generation method to generate initial model weights from the training dataset in a few-shot learning setting, much like what is proposed in this article. However, in Rusu et al. (2019), the generated weights were also refined using several SGD steps similar to how it is done in MAML. Here we explore a similar idea, but, largely inspired by the HYPERNETWORK approach (Ha et al., 2017), we instead propose to directly generate an entire task-specific CNN model. Unlike LEO, we do not rely on pre-computed embeddings for images and generate the model in a single step without additional SGD steps, which simplifies and stabilizes training.
Transformers in Computer Vision and Few-Shot Learning. Transformer models (Vaswani et al., 2017), originally proposed for NLP applications, have since become a useful tool in practically every field of deep learning. In computer vision, Transformers have recently seen an explosion of applications ranging from state-of-the-art classification results (Dosovitskiy et al., 2021; Touvron et al., 2021) to object detection (Carion et al., 2020; Zhu et al., 2021), segmentation (Ye et al., 2019), image super-resolution (Yang et al., 2020), image generation (Chen et al., 2021) and many others. There are also several notable applications in few-shot image classification. For example, in Liu et al. (2021), the Transformer model was used for generating universal representations in the multi-domain few-shot learning scenario. And closely related to our approach, in Ye et al. (2020), the authors proposed to accomplish embedding adaptation with the help of Transformer models. Unlike our method that generates an entire end-to-end image classification model, this approach applied a task-dependent perturbation to an embedding generated by an independent task-agnostic feature extractor. In Gidaris & Komodakis (2018), a simplified attention-based model was used for the final layer generation.
Analytical Framework
Here we establish a general framework that includes few-shot learning as a special case, but allows us to extend it to cases when more information is available beyond a few supervised samples, e.g., using additional unlabeled data.
Learning from Generalized Task Descriptions
Consider a set of tasks {t | t ∈ T}, each of which is associated with a loss L(f; t) that quantifies the correctness of any model f attempting to solve t. A task can be associated with classification, regression, learning a reinforcement learning policy, or any other kind of problem. Along with the loss, each task is also characterized by a task description τ(t) that is sufficient for communicating this task and finding the optimal model that solves it. This task description can include any available information about t, like labeled and unlabeled samples, image metadata, textual descriptions, etc.
The weight generation algorithm can then be viewed as a method of using a set of training tasks T_train for discovering a particular solver a_φ that, given τ(t) for a task t similar to those present in the training set, produces an optimal model f* = a_φ(τ) ∈ F minimizing L(f*, t). In this paper, we learn a_φ by performing gradient-descent optimization of

min_φ E_{t∼p(t)} [ L( a_φ(τ(t)), t ) ],    (1)

with p(t) being the distribution of training tasks from T_train.
Special Case of Few-Shot Learning
Few-shot learning is a special case of the framework described above. In few-shot learning, the loss L_t of a task t is defined by a labeled query set Q(t). The task description τ(t) is then specified via a support set of examples. In a classical "n-way-k-shot" setting, each training task t ∈ T_train is sampled by first randomly choosing n distinct classes C_t from a large training dataset and then sampling examples without replacement from these classes to generate τ(t) and Q(t). The support set τ(t) in this setting always contains k labeled samples for each of the n chosen classes. The quality of a particular few-shot learning algorithm is evaluated using a separate test space of tasks T_test. By forming T_test from classes unseen at training time, we can evaluate generalization of the trained solver a_φ by computing accuracies of models a_φ(τ(t)) for t ∈ T_test. The best algorithms are expected to capture the structure present in the training set, extrapolating it to novel, but related, tasks.
Equation 1 describes the general framework for learning to solve tasks given their descriptions τ(t). When τ is given by supervised samples, we recover classic few-shot learning. But the freedom in the definition of τ permits us, for example, to extend the problem to a semi-supervised regime (Ren et al., 2018), assuming that each τ(t) contains both labeled and unlabeled examples.

Figure 1. A diagram of our model showing the generation of two CNN layers: Transformer-based weight generators receive image embeddings s_{φ_s}(·) and activation embeddings h_{φ_ℓ}(·) along with corresponding labels c_i, and produce CNN layer weights (θ_1 and θ_2). After being generated, the CNN model is used to compute the loss on the query set. The gradients of this loss are then used to adjust the weights of the entire weight generation model (φ_s, φ_ℓ, Transformer weights).

The approach relying on solving Equation 1 can be contrasted with classical approaches that typically have to modify their algorithms and optimization objectives in response to any additional type of information supplied in the task specification τ. For example, if τ contains unlabeled examples, representation-based approaches could use unlabeled samples to make more accurate estimates of embedding centroids for each class, effectively trying to infer the distribution of samples in Q(t). Optimization-based methods like MAML would have to introduce new optimization objectives on unlabeled samples in addition to the cross-entropy loss on labeled samples. In contrast, our algorithm is able to learn from τ directly.
An empirical solution of Equation 1 for a_φ(τ) represented by a deep neural network can be obtained by solving this optimization problem directly. In this section, we describe the design of the model a_φ(τ) that we call a HYPERTRANSFORMER (HT). Choosing the Transformer as the core component of HT, we make it possible for a_φ to process any complex multi-modal task description τ(t), assuming that it can be encoded as an unordered set of Transformer tokens.
Few-Shot Learning Model
A solver a_φ is the core of a few-shot learning algorithm since it encodes the knowledge of the training task distribution within its weights φ. We choose a_φ to be a Transformer-based model (see Fig. 1) that takes a task description τ containing the information about labeled and unlabeled support-set samples as input and produces weights for some or all layers {θ_ℓ | ℓ ∈ [1, L]} of the generated CNN model. Layer weights that are not generated are instead learned end-to-end together with HT weights as ordinary task-agnostic variables. In other words, these learned layers are modified during the training phase and remain static during the evaluation phase (i.e., not dependent on the support set). In our experiments, generated CNN models contain a set of convolutional layers and a final fully-connected logits layer. Here θ_ℓ are the parameters of the ℓ-th layer and L is the total number of layers including the final logits layer (with index L). The weights are generated layer-by-layer starting from the first layer: θ_1(τ) → θ_2(θ_1; τ) → · · · → θ_L(θ_{1,...,L−1}; τ). Here we use θ_{a,...,b} as a short notation for (θ_a, θ_{a+1}, . . . , θ_b).
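The layer-by-layer generation just described can be summarized in a short sketch. The following Python (PyTorch-style) pseudocode is ours, not the authors' implementation; image_extractor, act_extractors, layer_generators and apply_layer are hypothetical stand-ins for the shared image feature extractor, the per-layer activation extractors, the per-layer Transformer generators, and the forward pass of a single generated layer.

```python
# Illustrative sketch (ours) of layer-by-layer weight generation from the support set.
import torch

def generate_cnn_weights(support_x, label_tokens, image_extractor,
                         act_extractors, layer_generators, apply_layer):
    """Produce theta_1, ..., theta_L one layer at a time."""
    image_emb = image_extractor(support_x)        # shared per-sample image embeddings
    weights, activations = [], support_x          # z^1_i := x_i
    for extract, generate in zip(act_extractors, layer_generators):
        act_emb = extract(activations)            # activation embeddings for this layer
        tokens = torch.cat([label_tokens, image_emb, act_emb], dim=-1)
        theta_l = generate(tokens)                # per-layer Transformer -> layer weights
        weights.append(theta_l)
        activations = apply_layer(activations, theta_l)  # inputs for the next layer
    return weights
```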
Image and activation embeddings. The weights for layer ℓ are either: (a) simply learned as a task-agnostic trainable variable, or (b) generated by the Transformer, which receives a concatenation of image embeddings, activation embeddings and support-sample labels c_i. The activation embeddings at layer ℓ are produced by a convolutional feature extractor h_{φ_ℓ}(z_i^ℓ) applied to the activations of the previous layer, z_i^ℓ := f_{ℓ−1}(x_i; θ_{1,...,ℓ−1}) for ℓ > 1 and z_i^1 := x_i. The intuition behind using the activation embeddings is that the choice of the layer weights should primarily depend on the inputs received by this layer.
The image embeddings are produced by a separate trainable convolutional neural network s_{φ_s}(x_i) that is shared by all the layers. Their purpose is to modulate each layer's weight generator with a global high-level view of the sample that, unlike the activation embedding, is independent of the generated weights and is shared between generators.
Encoding and decoding Transformer inputs and outputs. In the majority of our experiments, the input samples were encoded by concatenating the image and activation embeddings with trainable label embeddings ξ(c), where ξ : {1, . . . , n} → R^d. Here n is the number of classes per episode and d is a chosen size of the label encoding. Note that the class embeddings do not contain semantic information, but rather act as placeholders to differentiate between distinct classes. In addition to supervised few-shot learning, we also considered a semi-supervised scenario when some of the support samples are provided without the associated class information. Such unlabeled samples were fed into the Transformer using the same general encoding approach, but we used an auxiliary learned "unlabeled" token ξ̃ in place of the label encoding ξ(c) to indicate the fact that the class of the sample is unknown.
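As an illustration, the token construction for support samples could look roughly as follows; this sketch is ours, and the module and parameter names are hypothetical. Labeled samples receive a learned class placeholder ξ(c), while unlabeled samples share a single learned "unlabeled" embedding.

```python
# Hypothetical sketch (ours) of encoding support samples as Transformer tokens.
import torch
import torch.nn as nn

class SampleTokenizer(nn.Module):
    def __init__(self, n_classes: int, label_dim: int = 32):
        super().__init__()
        self.class_emb = nn.Embedding(n_classes, label_dim)               # xi(c) placeholders
        self.unlabeled_emb = nn.Parameter(torch.randn(label_dim) * 0.02)  # shared "unlabeled" token

    def forward(self, image_emb, act_emb, labels):
        # labels: LongTensor, with -1 marking unlabeled support samples
        label_part = torch.where(
            labels.unsqueeze(-1) >= 0,
            self.class_emb(labels.clamp(min=0)),
            self.unlabeled_emb.expand(labels.shape[0], -1))
        return torch.cat([label_part, image_emb, act_emb], dim=-1)
```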
Along with the input samples, the sequence passed to the Transformer was also populated with special learnable placeholder tokens, each associated with a particular slice of the to-be-generated weight tensor. Each such token was a learnable d-dimensional vector padded with zeros to the size of the input sample token. After the entire input sequence was processed by the Transformer, we read out model outputs associated with the weight slice placeholder tokens and assembled output weight slices into the final weight tensors (see Fig. 2).
In our experiments we considered two different ways of encoding k × k × n_input × n_output convolutional kernels: (a) "output allocation" generates n_output tokens with weight slices of size k² × n_input and (b) "spatial allocation" generates k² weight slices of size n_input × n_output. We show comparison results in Supplementary Materials.
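Both allocations are essentially different reshapes of the generated slices back into a convolutional kernel. The helper below is our own illustration; it assumes PyTorch's (out, in, k, k) kernel layout, and the ordering of elements within a slice is an assumption.

```python
# Illustrative helper (ours) for re-assembling generated weight slices into a kernel.
import torch

def assemble_kernel(slices, k, n_in, n_out, allocation="output"):
    if allocation == "output":      # n_out slices, each of size k*k*n_in
        assert slices.shape == (n_out, k * k * n_in)
        return slices.view(n_out, n_in, k, k)
    if allocation == "spatial":     # k*k slices, each of size n_in*n_out
        assert slices.shape == (k * k, n_in * n_out)
        return slices.view(k, k, n_in, n_out).permute(3, 2, 0, 1)
    raise ValueError(f"unknown allocation: {allocation}")
```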
Training the model. The weight generation model uses the support set to produce the weights of some or all CNN model layers. Then, the cross-entropy loss is computed for the query set samples that are passed through the generated CNN model. The weight generation parameters φ (including the Transformer model and image/activation feature extractor weights) are learned by optimizing this loss function using stochastic gradient descent.
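A single meta-training step then proceeds roughly as sketched below (ours); hypertransformer and functional_cnn are hypothetical stand-ins for the weight generator and for a CNN that runs with externally supplied weights.

```python
# Sketch (ours) of one meta-training step: generate weights, score the query set,
# and update only the weight-generator parameters phi.
import torch.nn.functional as F

def training_step(hypertransformer, functional_cnn, optimizer, episode):
    support_x, support_y, query_x, query_y = episode
    weights = hypertransformer(support_x, support_y)   # a_phi(tau(t))
    logits = functional_cnn(query_x, weights)          # CNN run with generated weights
    loss = F.cross_entropy(logits, query_y)
    optimizer.zero_grad()
    loss.backward()                                    # gradients flow into phi
    optimizer.step()
    return loss.item()
```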
Reasoning Behind the Self-Attention Mechanism
The choice of self-attention mechanism for the weight generator is not random. One reason behind this choice is that the output produced by generator with the basic self-attention is by design invariant to input permutations, i.e., permutations of samples in the training dataset. This also makes it suitable for processing unbalanced batches and batches with a variable number of samples (see Sec. 5.3). Now we show that the calculation performed by a self-attention model with properly chosen parameters can mimic basic few-shot learning algorithms further motivating its utility.
Supervised learning. Self-attention in its rudimentary form can implement a method similar to cosine-similarity-based sample weighting encoded in the logits layer with

W_{ℓ,·} ∝ (1/n) Σ_{m=1}^{n} ( y_ℓ^(m) − 1/|C| ) e^(m),

which can also be viewed as a result of applying a single gradient descent step on the cross-entropy loss (see Appendix A). Here n is the total number of support-set samples {x^(m) | m ∈ [1, n]} and e^(m), y^(m) are the embedding vector and the one-hot label corresponding to x^(m).
The approach can be outlined (see more details in Appendix A) as follows. The self-attention operation receives encoded input samples I_k = (ξ(c_k), e_k) and weight placeholders (µ(i), 0) as its input. If each weight slice W_{i,·} represented by a particular token (µ(i), 0) produces a query Q_i that only attends to keys K_k corresponding to samples I_k with labels c_k matching i, and the values of these samples are set to their embeddings e_k, then the self-attention operation will essentially average the embeddings of all samples assigned label i, thus matching the first term in W.
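For a balanced n-way-k-shot support set, the weights this attention pattern produces reduce to the per-class mean embedding minus the overall mean embedding, which is proportional to the one-gradient-step update discussed above. The small sketch below (ours) computes this closed form directly from the support embeddings.

```python
# Sketch (ours) of the closed-form logits weights that the attention pattern mimics:
# per-class mean embedding minus the overall mean (balanced support set assumed).
import torch
import torch.nn.functional as F

def logits_from_embeddings(emb, labels, n_classes):
    onehot = F.one_hot(labels, n_classes).float()              # (n, C)
    counts = onehot.sum(0).clamp(min=1).unsqueeze(-1)          # (C, 1)
    class_mean = (onehot.t() @ emb) / counts                   # (C, d)
    return class_mean - emb.mean(0, keepdim=True)              # (C, d)
```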
Table 1. Test accuracies compared to MAML++ (Antoniou et al., 2019) and RFS (Tian et al., 2020); results marked with † were taken from the corresponding papers. HT generally outperforms both MAML++ and RFS for smaller models. Accuracy confidence intervals: OMNIGLOT between 0.1% and 0.3%; MINIIMAGENET and TIEREDIMAGENET between 0.2% and 0.5%.

Semi-supervised learning. A similar self-attention mechanism can also be designed to produce logits layer weights when the support set contains some unlabeled samples. The proposed mechanism first propagates classes of labeled samples to similar unlabeled samples. This can be achieved by a single self-attention layer choosing the queries and the keys of the samples to be proportional to their embeddings. The attention map for sample i would then be defined by a softmax of e_i · e_j, or in other words would be proportional to exp(e_i · e_j). Choosing sample values to be proportional to the class tokens, we can then propagate a class of a labeled sample e_j to a nearby unlabeled sample with embedding e_i, for which e_i · e_j is sufficiently large. If the self-attention module is "residual", i.e., the output of the self-attention operation is added to the original input, as is done in the Transformer model, then this additive update would essentially "mark" an unlabeled sample by the propagated class (albeit this term might have a small norm). The second self-attention layer can then be designed similarly to the supervised case. If label embeddings are orthogonal, then even a small component of a class embedding propagated to an unlabeled sample can be sufficient for a weight slice to attend to it, thus adding its embedding to the final weight (resulting in the averaging of embeddings of both labeled and proper unlabeled examples).
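The label-propagation step of this construction can also be written out explicitly; the toy sketch below (ours) computes the soft class assignment that such a first attention layer would pass on to unlabeled samples.

```python
# Toy sketch (ours) of label propagation with embedding-based attention.
import torch

def propagate_labels(unlabeled_emb, labeled_emb, labeled_onehot):
    attn = torch.softmax(unlabeled_emb @ labeled_emb.t(), dim=-1)  # (n_u, n_l)
    return attn @ labeled_onehot                                   # soft labels, (n_u, C)
```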
Experiments
In this section, we present HYPERTRANSFORMER (HT) experimental results and discuss the implications of our empirical findings.
Datasets and Setup
Datasets. For our experiments, we chose several of the most widely used few-shot datasets including OMNIGLOT, MINIIMAGENET and TIEREDIMAGENET. MINIIMAGENET contains a relatively small set of labels and is arguably the simplest to overfit to. Because of this, and since in many recent publications MINIIMAGENET was replaced with the larger TIEREDIMAGENET dataset, we conduct many of our experiments and ablation studies using OMNIGLOT and TIEREDIMAGENET.
Models. HYPERTRANSFORMER can in principle generate arbitrarily large weight tensors by producing low-dimensional embeddings that can then be fed into another trainable model to generate the entire weight tensors. In this work, however, we limit our experiments to HT models that generate weight tensor slices encoding individual output channels directly. For the target models we focus on 4-layer CNN architectures identical to those used in MAML++ and numerous other papers. More precisely, we used a sequence of four 3 × 3 convolutional layers with the same number of output channels followed by batch normalization (BN) layers, nonlinearities and max-pooling stride-2 layers. All BN variables were learned and not generated; experiments with generated BN variables did not show much difference compared with this simpler approach. Generating larger architectures such as RESNET and WIDERESNET will be the subject of our future work.
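For concreteness, a stand-in for such a target CNN could be written as follows. This is our own sketch; the global pooling and flattening before the logits layer are assumptions rather than details taken from the paper.

```python
# Stand-in (ours) for the 4-layer target CNN: 3x3 convs, BN, ReLU, stride-2 max-pooling,
# followed by a final logits layer. The pooling/flattening head is an assumption.
import torch.nn as nn

def make_target_cnn(channels, n_classes, in_channels=3):
    layers, c_in = [], in_channels
    for _ in range(4):
        layers += [nn.Conv2d(c_in, channels, 3, padding=1),
                   nn.BatchNorm2d(channels), nn.ReLU(), nn.MaxPool2d(2)]
        c_in = channels
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, n_classes)]
    return nn.Sequential(*layers)
```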
Supervised Results with Logits Layer Generation
As discussed in Section 4.2, using a simple self-attention mechanism to generate the CNN logits layer can be a basis of a simple few-shot learning algorithm. Motivated by this observation, in our first experiments we compared the proposed HT approach with MAML++ and RFS (Tian et al., 2020) on the OMNIGLOT, MINIIMAGENET and TIEREDIMAGENET datasets (see Table 1), with HT limited to generating only the final fully-connected logits layer.
In our experiments, the dimensionality of the activation embedding was chosen to be the same as the number of model channels and the image embedding had a dimension of 32 regardless of the model size. The image feature extractor was a simple 4-layer convolutional model with batch normalization and stride-2 3 × 3 convolutional kernels. The activation feature extractors were two-layer convolutional models with outputs of both layers averaged over the spatial dimensions and concatenated to produce the final activation embedding. For all tasks except 5-shot MINIIMAGENET, our Transformer had 3 layers, used a simple sequence of encoder layers (Figure 2) and used the "output allocation" of weight slices (Section 4.1). Experiments with the encoder-decoder Transformer architecture can be found in Appendix E.

Table 2. Comparison with other few-shot learning methods; results marked with citations (e.g., Fei et al., 2021) were taken from the corresponding papers. We also include results for CNNs with fewer channels ("-32" for 32-channel models, etc.).

The 5-shot MINIIMAGENET and TIEREDIMAGENET results presented in Table 1 were obtained with a simplified Transformer model that had 1 layer and did not have the final fully-connected layer and nonlinearity. This proved necessary for reducing model overfitting on this smaller dataset. Other model parameters are described in detail in Appendix C.
Results obtained with our method in a few-shot setting (see Table 1) are frequently better than MAML++ and RFS results, especially on smaller models, which can be attributed to parameter disentanglement between the weight generator and the CNN model. While the improvement over MAML++ and RFS gets smaller with the growing size of the generated CNN, our results for large models appear to be comparable to those obtained with MAML++, RFS and numerous other methods (see Table 2). Discussion of additional comparisons to LGM-Net (Li et al., 2019b) and LEO (Rusu et al., 2019) using a different setup (which is why they could not be included in Table 2) and showing an almost identical performance can be found in Appendix D.
While the learned HT model could perform a relatively simple calculation on high-dimensional sample embeddings, our brief analysis of the parameter space (see Appendix E) shows that using simpler 1-layer Transformers leads to a modest decrease of the test accuracy and a greater drop in the training accuracy for smaller models. However, in our experiments with the 5-shot MINIIMAGENET dataset, which is generally more prone to overfitting, we observed that increasing the Transformer model complexity improves the model training accuracy (on episodes that only use classes seen at the training time), while the test accuracy, which relies on classes unseen at the training time, generally degrades. We also observed that the results in Table 1 could be improved even further by increasing the embedding sizes (see Appendix E), but we did not pursue an exhaustive optimization in the parameter space.
Note that overfitting, characterized by a good performance on tasks composed of seen categories but poor generalization to unseen categories, may still have practical applications for personalization. Specifically, if the actual task relies on classes seen at the training time, we can generate an accurate model customized to a particular task in a single pass without having to perform any SGD steps to fine-tune the model. This is useful if, for example, the client model needs to be adjusted to a particular set of known classes most widely used by this client. We also anticipate that with more complex data augmentations and additional synthetic tasks, more complex Transformer-based models can further improve their performance on the test set.
Semi-Supervised Results
In our approach, the weight generation model is trained by optimizing the query-set loss, and therefore any additional information about the task, including unlabeled samples, can be provided as part of the task description τ to the weight generator without having to alter the optimization objective. This allows us to tackle a semi-supervised few-shot learning problem without making any substantial changes to the model or the training approach. In our implementation, we simply added unlabeled samples into the support set and marked them with an auxiliary learned "unlabeled" token ξ̃ in place of the label encoding ξ(c).

Table 3. Test accuracy on TIEREDIMAGENET of supervised 1-shot and 5-shot models and semi-supervised 1-shot models with u additional unlabeled samples per class. The weight-generation Transformer uses 3 encoder layers for supervised tasks and L_T encoder layers in the semi-supervised experiments. Notice the performance improvement of semi-supervised learning over the 1-shot supervised results: accuracy grows with the number of unlabeled samples, and the maximum accuracy is reached when the encoder has at least two layers.

Setup (u, L_T):   1-shot   5-shot   (2, 3)   (4, 1)   (4, 2)   (4, 3)   (9, 3)
Accuracy (%):     56.0     69.9     58.3     56.6     59.9     59.9     61.5

Figure 3. 5-shot-20-way OMNIGLOT training/test accuracies as a function of the CNN model complexity: only the final logits layer being generated (logits), all layers being generated (all), training the model on all available samples for a random set of few classes (oracle). A model that generates CNN weights by memorizing all samples (being able to determine their classes) and also memorizing optimal trained weights for any selection of classes would reach the oracle accuracy, but would not generalize.
Since OMNIGLOT is typically characterized by very high accuracies in the 97%-99% range, we conducted all our experiments with TIEREDIMAGENET. As shown in Table 3, adding unlabeled samples results in a substantial increase of the final test accuracy. Furthermore, notice that the model achieves its best performance when the number of Transformer layers is greater than one. This is consistent with the basic mechanism discussed in Section 4.2 that required two self-attention layers to function.
It is worth noticing that adding more unlabeled samples into the support set makes our model more difficult to train, and it gets stuck producing CNNs with essentially random outputs. Our solution was to introduce unlabeled samples incrementally during training. This was implemented by masking out some unlabeled samples at the beginning of training and then gradually reducing the masking probability over time.
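A minimal sketch of such a schedule is given below (ours); the annealing length and the linear form are illustrative rather than values taken from the paper.

```python
# Illustrative sketch (ours) of gradually unmasking unlabeled support samples.
import torch

def mask_unlabeled(unlabeled_tokens, step, anneal_steps=50_000):
    p_drop = max(0.0, 1.0 - step / anneal_steps)   # linear anneal: an assumption
    keep = torch.rand(unlabeled_tokens.shape[0]) >= p_drop
    return unlabeled_tokens[keep]
```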
Generating Additional Model Layers
We demonstrated that the HT model can outperform MAML++ on common few-shot learning datasets by generating just the last logits layer of the CNN model. But is it advantageous to generate additional CNN layers, ultimately fully utilizing the capability of the HT model?
We explored this question by comparing the performance of models in which all, or only some, of the convolutional layers were generated, while others were learned (typically all or the first few convolutional layers of the CNN). We observed a significant performance improvement for models that generated all convolutional layers in addition to the CNN logits layer, but only for CNN models below a particular size. For the OMNIGLOT dataset, we saw that both training and test accuracies for 4-channel and 6-channel CNNs increased with the number of generated layers (see Fig. 3 and Table 4 in the Appendix), and using more complex Transformer models with 2 or more encoder layers improved both training and test accuracies of fully-generated CNN models of this size (see Appendix E). However, as the size of the model increased and reached 8 channels, generating the last logits layer alone proved to be sufficient for getting the best results on OMNIGLOT and TIEREDIMAGENET. By separately training an "oracle" CNN model using all available data for a random set of n classes, we observed a gap between the training accuracy of the generated model and the oracle model (see Fig. 3), indicating that the Transformer does not fully capture the dependence of the optimal CNN model weights on the support set samples. A hypothetical weight generator reaching maximum training accuracy could, in principle, memorize all training images, associate them with corresponding classes, and then generate an optimal CNN model for a particular set of classes in the episode, matching the "oracle" model performance.
We visualized the distribution of the weights generated by HT for different episodes by using UMAP (McInnes et al., 2018) embeddings of the generated weights for a 6-channel CNN model (see Fig. 4). We highlighted some of the classes present in the evaluation set, and while the general structure may be hard to interpret, the distribution of the highlighted classes is somewhat clustered, indicating the importance of semantic information for the generated CNN weights. More details can be found in Appendix G.
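The visualization itself only requires flattening the generated kernels per episode and embedding them; a sketch using the umap-learn package (ours, with an illustrative weight-collection interface) is shown below.

```python
# Sketch (ours) of embedding per-episode generated weights with UMAP.
import numpy as np
import umap

def embed_generated_weights(per_episode_weights):
    # per_episode_weights: list of flattened weight vectors, one entry per episode
    X = np.stack([np.asarray(w).ravel() for w in per_episode_weights])
    return umap.UMAP(n_components=2).fit_transform(X)   # (n_episodes, 2)
```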
The positive effect of generating convolutional layers can also be observed in shallow models with large convolutional kernels and large strides where the model performance can be much more sensitive to a proper choice of model weights. For example, in a 16-channel model with two convolutional kernels of size 9 and the stride of 4, the overall test accuracy for a model generating only the final convolutional layer was about 1% lower than the accuracies of the models generating at least one additional convolutional filter. We also speculate that as the complexity of the task increases, generating some or all intermediate network layers should become more important for achieving optimal performance. Verifying this hypothesis and understanding the "boundary" in the model space between two regimes where a static backbone is sufficient or not will be the subject of our future work.
Figure 4. UMAP embedding of weights for each convolutional layer of a 6-channel CNN generated by HT for 1,242 different episodes from TIEREDIMAGENET (panels: Layer 1, Layer 2, Layer 3, Layer 4, All layers; highlighted classes: 8, 84, 60). Each point corresponds to a 2d embedding of the combined weights for a given layer (or concatenated for all layers) generated for a given episode. We color some of the points according to the classes contained in the episodes. For highlighted classes, the generated weights appear to be correlated between episodes where these classes are present. We selected classes specifically to demonstrate this correlation. For most of the other classes, this correlation was minor. Note that since there are 5 classes in each episode, the coloring for some of the episodes might be ambiguous. See Appendix G for more classes and samples for each class.
Conclusions
In this work, we proposed HyperTransformer (HT), a novel Transformer-based model that generates all weights of a CNN model directly from a few-shot support set. This approach allowed us to use a high-capacity model for encoding task-dependent variations in the weights of a smaller model. We demonstrated that, even when generating the last logits layer alone, the Transformer-based weight generator beats or matches the performance of multiple traditional learning methods on several few-shot benchmarks and surpasses MAML++ and RFS performance on smaller models. More importantly, we showed that HT can be straightforwardly extended to handle more complex problems like semi-supervised tasks with unlabeled samples present in the support set. Our experiments demonstrated a considerable few-shot performance improvement in the presence of unlabeled data. Finally, we explored the impact of the Transformer-encoded model diversity in CNN models of different sizes. We used HT to generate some or all convolutional kernels and biases and showed that for sufficiently small models, adjusting all model parameters further improves their few-shot learning performance.
A. Example of a Self-Attention Mechanism for Supervised Learning
Self-attention in its rudimentary form can implement a cosine-similarity-based sample weighting, which can also be viewed as a simple 1-step MAML-like learning algorithm. This can be seen by considering a simple classification model f(x; θ) = s(W e(x; φ) + b) with θ = (W, b, φ), where e(x; φ) is the embedding and s(·) is the softmax function. The MAML algorithm identifies such initial weights θ_0 that, given any task T, just a few gradient descent steps with respect to the loss L_T starting at θ_0 bring the model towards a task-specific local optimum of L_T.
Notice that if any label assignment in the training tasks is equally likely, it is natural for f(x; θ_0) to not prefer any particular label over the others. Guided by this, let us choose W_0 and b_0 that are label-independent. Substituting θ = θ_0 + δθ into f(x; θ), we obtain, to first order,

f_ℓ(x; θ_0 + δθ) ≈ f_ℓ(x; θ_0) + s′_ℓ(·) ( δW_{ℓ,·} e(x; φ_0) + δb_ℓ ) + (label-independent terms),

where ℓ is the label index and δθ = (δW, δb, δφ). We see that the lowest-order label-dependent correction to f(x; θ_0) is given simply by s′_ℓ(·)(δW_{ℓ,·} e(x; φ_0) + δb_ℓ). In other words, to lowest order, the model only adjusts the final logits layer to adapt the pretrained embedding e(x; φ_0) to a new task. It is then easy to calculate that for a simple softmax cross-entropy loss, a single step of gradient descent results in the following logits weight and bias updates:

δW_{ℓ,·} = (γ/n) Σ_{m=1}^{n} ( y_ℓ^(m) − 1/|C| ) e(x^(m); φ_0),    δb_ℓ = (γ/n) Σ_{m=1}^{n} ( y_ℓ^(m) − 1/|C| ).    (2)

Here γ is the learning rate, n is the total number of support-set samples, |C| is the number of classes and y^(m) is the one-hot label corresponding to x^(m). A closely-related idea of extending the logits layer with a vector proportional to an average of the novel-class sample embeddings is used in the "imprinted weights" approach (Qi et al., 2018), which allows novel classes to be added to pre-trained models.
Now consider a self-attention module generating the last logits layer and acting on a sequence of processed input samples I^L := {I_m^L = (ξ(c_m), e_m)}_{m=1,...,n} and weight placeholders W^L := {(µ(k), 0)}_{k=1,...,|C|}, where |C| is the number of classes and also the number of weight slices of W if each slice corresponds to an output layer channel. The output of a simple self-attention module for the weight slice with index i is then given by:

A(W_i^L) = (1/Z) [ Σ_{m=1}^{n} exp( Q(W_i^L) · K_I(I_m^L) ) V_I(I_m^L) + Σ_{k=1}^{|C|} exp( Q(W_i^L) · K_W(W_k^L) ) V_W(W_k^L) ],    (3)

where Z := Σ_{m=1}^{n} exp( Q(W_i^L) · K_I(I_m^L) ) + Σ_{k=1}^{|C|} exp( Q(W_i^L) · K_W(W_k^L) ). It is easy to see that with a proper choice of query and key matrices attending only to prepended ξ and µ tokens, the second term in equation 3 can be made negligible, while Q(W_i^L) · K_I(I_m^L) can make the softmax function only attend to those components of V_I that correspond to samples with label i. Choosing V_I(I_m^L) to be proportional to e_m, we can then recover the first term in δW in equation 2. The second term in δW can be produced, for example, with the help of a second head that generates identical attention weights for all samples, thus summing up their embeddings.
B. Analytical Expression for Generated Weights
Supplied with a distribution p(t) of training tasks t ∈ T_train, we learn the weight generator a_φ by optimizing the following objective:

min_φ E_{t∼p(t)} [ L( a_φ(τ(t)), t ) ].

If the family of functions a_φ(τ) is sufficiently rich, we can instead try solving an optimization problem of the form θ*(τ(t)) := arg min_θ L(θ, t) with θ* ∈ C². Generally, when t → τ is not one-to-one, it is impossible to minimize L(θ*(τ(t)), t) for each t because L(·, t_1) and L(·, t_2) will typically differ even if τ(t_1) = τ(t_2). However, if this mapping is one-to-one, and we can re-parameterize the space of tasks T choosing t to have the same dimension as τ, the condition that L(θ*(τ(t + δt)), t + δt) is an extremum for any infinitesimal δt can be rewritten as:

(∂L/∂θ)( θ*(τ(t)), t ) = 0 for all t,

which in turn means that:

(∂²L/∂θ²) (∂θ*/∂τ) (∂τ/∂t) + ∂²L/∂θ ∂t = 0.

If the Hessian of L and ∂τ/∂t are non-singular, we can solve this equation for ∂θ*/∂τ:

∂θ*/∂τ = −(∂²L/∂θ²)^(−1) (∂²L/∂θ ∂t) (∂τ/∂t)^(−1).    (4)

Now assume that we know a solution of arg min_θ* L(θ*, t) for some particular task t = t_0. The derivative (4) can then be used to "track" this local minimum at t_0 to any other task t_1 in a sufficiently small vicinity of t_0, where L remains convex and the Hessian of L is not singular. Choosing a path t̄ : [0, 1] → T with t̄(0) = t_0 and t̄(1) = t_1, we need to integrate ∂θ*/∂τ along τ(t̄(γ)) with γ changing from 0 to 1, which is equivalent to integrating the following ordinary differential equation:

dθ*/dγ = −(∂²L/∂θ²)^(−1) (∂²L/∂θ ∂t) (dt̄/dγ),

where θ*(γ) = θ*(τ(t̄(γ))) and all derivatives are computed at t̄(γ) and θ*(γ).
C. Model Parameters
Here we provide additional information about the model parameters used in our experiments.
Image augmentations and feature extractor parameters. For OMNIGLOT dataset, we used the same image augmentations that were originally proposed in MAML. For MINIIMAGENET and TIEREDIMAGENET datasets, however, we used ImageNet-style image augmentations including horizontal image flipping, random color augmentations and random image cropping. This helped us to avoid model overfitting on the MINIIMAGENET dataset and possibly on TIEREDIMAGENET.
The dimensionality d of the label encoding ξ and weight slice encoding µ was typically set to 32. Increasing d up to the maximum number of weight slices plus the number of per-episode labels would allow the model to fully disentangle examples for different labels and different weight slices, but can also make the model train slower.
Transformer parameters. Since the weight tensors of each layer are generally different, our per-layer transformers were also different. The key, query and value dimensions of the transformer were chosen to be equal to a pre-defined fraction ν of the input embedding size, which in turn was a function of the label, image and activation embedding sizes and the sizes of the weight slices. The inner dimension of the final fully-connected layer in the transformer was also chosen using the same approach. In our MINIIMAGENET and TIEREDIMAGENET experiments, ν was chosen to be 0.5, and in OMNIGLOT experiments we used ν = 1. Each transformer typically contained 2 or 3 encoder layers and used 2 heads for OMNIGLOT and 8 heads for MINIIMAGENET and TIEREDIMAGENET.
Learning schedule. In all our experiments, we used a gradient descent optimizer with a learning rate in the 0.01 to 0.02 range. Our early experiments with more advanced optimizers were unstable. We used a learning rate decay schedule, in which we reduced the learning rate by a factor of 0.95 every 10^5 learning steps.
D. Additional Supervised Experiments
While the advantage of decoupling parameters of the weight generator and the generated CNN model is expected to vanish with the growing CNN model size, we compared our approach to two other methods, LGM-Net (Li et al., 2019b) and LEO (Rusu et al., 2019), to verify that our approach can match their performance on sufficiently large models.
For our comparison with the LGM-Net method, we used the same image augmentation technique that was used in Li et al. (2019b), where it was applied both at the training and the evaluation stages. We also used the same CNN architecture with 4 learned 64-channel convolutional layers followed by two generated convolutional layers and the final logits layer. In our weight generator, we used 2-layer transformers with activation feature extractors that relied on 48-channel convolutional layers and did not use any image embeddings. We trained our model in an end-to-end fashion on the MINIIMAGENET 1-shot-5-way task and obtained a test accuracy of 69.3% ± 0.3%, almost identical to the 69.1% accuracy reported in Li et al. (2019b).
We also carried out a comparison with LEO by using our method to generate a fully-connected layer on top of the TIEREDIMAGENET embeddings pre-computed with the WideResNet-28 model employed by Rusu et al. (2019). For our experiments, we used a simpler 1-layer transformer model with 2 heads that did not have the final fully-connected layer and nonlinearity. We also used L2 regularization of the generated fully-connected weights, setting the regularization weight to 10^{-3}. As a result of training this model, we obtained 66.2% ± 0.2% and 81.6% ± 0.2% test accuracies on the 1-shot-5-way and 5-shot-5-way TIEREDIMAGENET tasks, respectively. These results are almost identical to the 66.3% and 81.4% accuracies reported in Rusu et al. (2019).
E. Dependence on Parameters and Ablation Studies
Most of our parameter explorations were conducted on the OMNIGLOT dataset. We chose a 16-channel model trained on the 1-shot-20-way OMNIGLOT task as an example of a model for which just the logits layer generation was sufficient. We also chose a 4-channel model trained on the 5-shot-20-way OMNIGLOT task for the role of a model for which generation of all convolutional layers proved to be beneficial. Figures 6 and 7 show a comparison of training and test accuracies on OMNIGLOT for different parameter values for these two models. Here we only used two independent runs for each parameter value, which did not allow us to sufficiently reduce the statistical error. Despite this, in the following we try to highlight a few notable parameter dependencies. Note that in some experiments with particularly large embedding or model sizes, training progressed beyond the target number of steps, and there could also be overfitting for very large models.
Number of transformer layers. Increasing the number of transformer layers is seen to be particularly important in the 4-channel model. The 16-channel model also demonstrates the benefit of using 1 vs 2 transformer layers, but the performance appears to degrade when we use 3 transformer layers.
Activation embedding dimension. Particularly, small activation embeddings can be seen to hurt the performance in both models, while using larger activation embeddings appears to be advantageous in most cases except for the 32-dimensional activation embeddings in the 4-channel model.
Class embedding dimension. Particularly low embedding dimension of 16 can be seen to hurt the performance of both models.
Number of transformer heads. Increasing the number of transformer heads leads to performance degradation in the 16-channel model, but does not have a pronounced effect in the 4-channel model.
Image embedding dimensions. Removing the image embedding, or using an 8-dimensional embedding can be seen to hurt the performance in both cases of the 4-and 16-channel models.
Transformer architecture. While the majority of our experiments were conducted with a sequence of transformer encoder layers, we also experimented with an alternative weight generation approach, where both encoder and decoder transformer layers were employed (see Fig. 5). Our experiments with both architectures suggest that the role of the decoder is pronounced, but very different in the two models: in the 16-channel model, the presence of the decoder increases the model performance, while in the 4-channel model, it leads to accuracy degradation.

Inner transformer embedding sizes. Varying the ν parameter for different components of the transformer model (key/query pair, value, and inner fully-connected layer size), we quantify their importance on the model performance. Using very low ν for the value dimension hurts performance of both models. The effect of key/query and inner dimensions can be distinctly seen only in the 4-channel model, where using ν = 1 or ν = 1.5 appears to produce the best results.
Weight allocation approach. Our experiments with the "spatial" weight allocation in 4-and 16-channel models showed slightly inferior performance (both accuracies dropping by about 0.2% to 0.4% in both experiments) compared to that obtained with the "output" weight allocation method.
F. Attention maps of learned transformer models
We visualized the attention maps of several transformer-based models that we used for CNN layer generation. Figure 8 shows attention maps for a 2-layer 4-channel CNN network generated using a 1-head 1-layer transformer on MINIIMAGENET (labeled samples are sorted in the order of their episode labels). The attention map for the final logits layer ("CNN Layer 3") is seen to exhibit a "stairways" pattern, indicating that a weight slice W_{c,·} for episode label c is generated by attending to all samples except for those with label c. This is reminiscent of the supervised learning mechanism outlined in Sec. 4.2. While the proposed mechanism would attend to all samples with label c and average their embeddings, another alternative is to average embeddings of samples with other labels and then invert the result. We hypothesize that the trained transformer performs a similar calculation with additional learned transformer parameters, which may be seen to result in mild fluctuations of the attention to different input samples.
The attention maps for a semi-supervised learning problem with a 2-layer transformer is shown in Figure 9. One thing to notice is that a mechanism similar to the one described above appears to be used in the first transformer layer, where weight slices W c,· attend to all labeled samples with labels c i = c. At the same time, unlabeled samples can be seen to attend to labeled samples in layer 1 (see "Unlabeled" rows and "Label . . . " columns) and the weight slices in layer 2 then attend to the updated unlabeled sample tokens (see "Weights" rows and "Unlabeled" columns in the second layer). This additional pathway connecting labeled samples to unlabeled samples and finally to the logits layer weights is again reminiscent of the simplistic semi-supervised learning mechanism outlined in Sec. 4.2.
The exact details of these calculations and the generation of intermediate convolutional layers is generally much more difficult to interpret just from the attention maps and a more careful analysis of the trained model is necessary to draw the final conclusions.
G. UMAP Embedding of the Generated Weights

Figure 10 shows additional plots of the UMAP embedding with some of the classes highlighted. Here we can more clearly see that, at least for those classes, the embedding is quite correlated with the episodes where these classes are included. This suggests that the HYPERTRANSFORMER does generate meaningful individualized weights for each episode. Figure 11 shows some samples of the highlighted classes. Notice that Class 8 is clustered more tightly for the second layer, which means that the weights for the episodes containing this class are much more similar specifically for that layer. Indeed, Class 8 corresponds to zebras, with their stripe pattern being a distinct feature. In order to distinguish this feature, the CNN would need to aggregate information from a wide field of view. This would probably not happen in the first layer, which is why we see almost no correlation for the first layer but a very tight cluster for the second.

Figure 6. Change of the training (blue) and test (orange) accuracies on the 1-shot-20-way OMNIGLOT task for a 16-channel model relative to the base configuration with a 3-layer transformer, 16-dimensional activation embedding, ν = 1.0, d = 32, 2 heads and a 32-dimensional image embedding. Approximate confidence intervals are shown.

Figure 9. Learned attention maps for a 4-layer 8-channel CNN network generated with a 1-head, 2-layer transformer for 5-shot TIEREDIMAGENET with additional unsupervised samples (2 per class). Only the last layer of the CNN is generated.
Figure 10. UMAP embedding of weights generated by the HYPERTRANSFORMER (highlighted classes: 10, 85, 39, 60, 84, 8). Each point corresponds to the embedding of the weights of a given layer for a given episode. The original dimensionality is 162 = 3 × 3 × 3 × 6 for the first-layer weights, 324 = 3 × 3 × 6 × 6 for each subsequent layer's weights, and 1,134 for all layers combined. We further selected 6 different classes with the smallest standard deviation in the embeddings and highlighted the episodes that contain these classes.
Figure 11. Samples of the highlighted classes (8, 84, 60, 39, 85, 10).

Figure 12. Each column shows one model with two layers and a fully-connected head that is always generated by the transformer. Left: both CNN layers are generated; center: the first CNN layer is trained and the second is generated; right: both CNN layers are trained. Layer weight allocation: "output".
H. Visualization of the Generated CNN Weights

Figures 12 and 13 show examples of the CNN kernels that are generated by a single-head, 1-layer transformer for a simple 2-layer CNN model with 9 × 9 stride-4 kernels. The two figures correspond to different approaches to re-assembling the weights from the generated slices: using "output" allocation or "spatial" allocation (see Section 4.1 in the main text for more information). Notice that "spatial" weight allocation produces more homogeneous kernels for the first layer when compared to the "output" allocation. In both figures we show the final generated kernels for three variants: a model with both layers generated, one generated and one trained, and both trained.
Trained layers are always fixed at inference for all the episodes, but the generated layers vary, albeit not significantly. In Figures 14 and 15 we show the generated kernels for two different episodes and, on the right, the difference between them. It appears that the generated convolutional kernels change within 10-15% from episode to episode.

Table 4. Average model test and training accuracies on OMNIGLOT (separated by a slash) for models of different sizes. The "Logits" row shows accuracies for the model with only the fully-connected logits layer generated from the support set; it can be interpreted as a method based on a learned embedding. The "All" row reports accuracies of the models with some or all convolutional layers being generated. We were not able to see statistically significant evidence of an advantage of generating more than one convolutional layer.

Figure 13. Visualizing convolutional kernels for a 2-layer network with a 9 × 9 CNN kernel size and stride of 4 trained on MINIIMAGENET. Each column shows one model with two layers and a fully-connected head that is always generated by the transformer. Left: both CNN layers are generated; center: the first CNN layer is trained and the second is generated; right: both CNN layers are trained. Layer weight allocation: "spatial".
I. Additional Tables and Figures
Figure 15. Visualizing generated convolutional kernels in a 2-layer model for two different episodes. Left two plots: kernels for two random episodes of 5 classes; right: the difference in generated kernels for the two episodes. Layer weight allocation: "spatial".
Asia’s Role in the New United States Export Economy
The global economic crisis sent world trade volume and world production into retreat and threatened a second Great Depression. There has emerged a consensus that global imbalances – fundamentally reflected in the over-reliance upon the United States (US) consumer market – that built up over the first decade of the 21st century are no longer sustainable. Deficit regions led by the US will have to increase net exports in real terms in order to restore living standards and employment. Surplus regions, Asia, in particular, will have to rely more upon domestic demand and will have a substantial role to play as growth centers for net imports from the rest of the world. This paper examines the composition and prospects for growth of US net exports to the world and to developing Asia. We find that much of the apparent shift in export product shares was a result of the worldwide collapse in demand for high-technology products particularly new aircraft and information technology products. Nonetheless, India, ASEAN-10, and the newly industrialized economies are destinations for US high-technology products to an even greater extent than for the world as a whole. In contrast, the People’s Republic of China tends to import a lower portion of high-technology products but a larger share of agriculture-related and raw materials and energy products than the world as a whole.
I. Introduction
During the high-growth years prior to the global financial crisis, developing Asia's prosperity was furthered by the United States (US) absorbing much of the large and expanding volume of global exports. Merchandise trade deficits of the US during 2006-2008 averaged nearly $1 trillion annually, providing a strong and steady external market for the world's exporters, particularly those in developing Asia. Before the crisis, the US trade balance had started to adjust, with net exports expanding at a modest pace in 2007-2008. With the onset of the crisis, however, US trade, and with it world trade, contracted. The US trade deficit fell by one third in 2009 as its imports collapsed even faster than its exports. Coming out of the crisis, US households and firms will have to reduce their debt by saving a higher share of disposable income and profits, which will further narrow the trade deficit in the short term. Whether the US can sustain the increase in exports relative to imports in the long term is an open question. What industries are emerging from the rebalancing of US trade and how will this affect developing Asia?
The objective of this paper is to examine the composition of US net exports to developing Asia by level of factor intensity, and to draw implications for its growth prospects. At the outset, however, several generalizations and perhaps one or two myths have to be laid aside. First, the large current account deficits that the US had run in most years of this century are neither desirable nor sustainable. There was some substance to the view that the US current account deficit (reaching almost $1 trillion per year in nominal dollars for the period 2005-2008) could be "a good thing" for export-oriented economies. However, from a development standpoint, it made little sense for one of the richest economies on the planet to be hoarding global savings in order to finance its own investment and consumption demands. This lies behind the paper's emphasis on net exports, defined as sales of US domestic exports abroad less purchases of imports for domestic consumption. The growth of US net exports is seen as a strong correction of recent consumption- and housing-bubble-driven growth.
Global imbalances had become excessive and were a clear flashing light that trouble lay ahead. The US must expand net exports to fuel growth while reducing external debt and allowing more of the globe's savings pool to flow to where the risk-adjusted returns are highest: in the fast-growing regions such as those in developing Asia, Latin America, and the few bastions of hope in Africa and the Middle East.
The next myth/generalization to be cleared away is the idea that the US cannot compete internationally and that it is no longer a dynamic, export-oriented economy. The data, which will be discussed in detail in the next section, suggest that the international competitiveness of the US is broad-based and covers a very wide swathe of industries. The US remains the preeminent producer and exporter of many high-technology products; indeed, these account for the largest real volume of US net exports in recent years. This competitiveness derives from many scientific and research advantages, including the US education system, particularly the tertiary or university system (Cole 2009). While the US education system has some flaws, overall the quality of the colleges and universities has raised the country's performance in scientific achievement. Realization of the technological, engineering, and marketing capabilities required for competitiveness has mostly been through the modern multinational corporation, another area where US exports have declined somewhat in relative terms but where the US still remains preeminent in the world at large.
The recognition that international trade has a significant role in global economic growth prospects has been celebrated in endogenous growth modeling (Acemoglu 2009) and has also been featured prominently in research using computable general equilibrium models. Both approaches suffer from weaknesses in underlying theoretical foundations and also lack strong empirical grounding. However, more research and careful specification of variables is likely to lead future work to bear more fruit. Trade in differentiated products and related studies of multinational enterprises seem to be particularly useful in demonstrating effective links between technological progress, industrialization, and new goods, and the diffusion of technology in industrialization. This is clearly the case in East Asian industrialization processes (Oshima 1986 and 1993).
That brings us to the myth that Asia's growth is purely export-oriented and does not owe as much to the import side of the trade equation. We propose that now is the time to debunk this myth as Asia must come to terms with emerging challenges in energy use, production, and consumption; with health challenges from an aging work force and population; and with an environmental challenge of utmost importance to the ability of our globe to continue to support the emergence of new prosperity, and to safeguard the advances of the past few hundred years. Fortunately, the US and other western economies are well situated to help overcome these challenges that Asia cannot face or solve alone.
The paper is organized as follows. Section II describes the data, particularly in terms of the factor intensity of US real net export flows to various countries, to show that the international competitiveness of the US is broad-based. Section III reports the statistical correlation between US real net exports to the world and to developing Asia as a whole and for the various key countries and subregions (the People's Republic of China [PRC], India, newly industrialized economies [NIEs], and Association of Southeast Asian Nations [ASEAN]). Section IV discusses the trends of US real net exports by level of factor intensity. Section V concludes.
II. Data on and Competitiveness of US Products
For the US to rebalance its growth while reducing external debt, the general consensus has been that it must produce more than it consumes and increase exports (Rosen 2009). Given the growing dependence of the US economy on foreign capital as the root of the recent economic havoc, the only way out, while maintaining or even improving the US standard of living, is to expand exports. The US has a comparative advantage in many products and, given its level of economic development, there is no question that it can improve both the quantity and quality of goods and services it produces and exports. While some papers emphasize the use of export shares in analyzing industrial competitiveness, this paper uses net exports.
One reason for the use of net exports as a measure of competitiveness is that it allows for trade size and intra-industry trade, which limit other indicators. Exports by industry as a share of total exports for instance, tends to be biased by the value of the product itself. For example, it would be a mistake to assume that US exports of vehicles are more competitive than exports of agricultural products, purely because the former has a higher value than the latter. The ratio of exports to imports on the other hand, though effective in determining industrial competitiveness, is less meaningful at higher levels of industry aggregation because of intra-industry trade, i.e., in vehicles, food and beverages, computers, and minerals. The US, for instance, exports trucks and imports cars, but since both are included under the same 2-digit industry classification, i.e., vehicles, the overall result may be difficult to interpret.
Another important measure, the Revealed Comparative Advantage (RCA) index of Balassa (1986), which looks at the product's share in the country's exports in relation to its share in world trade, also has its own set of criticisms. It helps identify which products meet the test of international competition, but a high RCA score does not imply that similar industries in less developed countries cannot compete in international markets. In some product categories, RCA scores in some Asian economies have improved over the years. In addition, countries with similar RCA profiles are less likely to have high bilateral trade intensities if they trade with each other due to product competition, unless intra-industry trade is involved, or if intercountry differences in tastes and interindustry disparities in the extent of protection would prevail (Bender and Li 2002). Some of these domestic measures, however, such as local subsidies or foreign trade barriers, have nothing to do with comparative advantage, thus the index may also be biased (Bender and Li 2002, Ng and Yeats 2003). Though not a perfect measure, this paper uses real net exports because this indicator immediately shows the impact of the global economic crisis and the correction, or which industries are showing signs of rebalancing.
To analyze this, we used trade data from the US International Trade Commission (USITC), compiled into detailed product categories (specifically, the 4-digit level of aggregation in the harmonized system of tariff classifications). The detailed product groups were then grouped by factor content into four broad categories, namely high-technology manufactures (HT), low-technology manufactures (LT), agriculture-related (AR), and raw materials and energy (RME). Next, the data were examined for the group of economies that account for the bulk of US exports to developing Asia: the People's Republic of China (PRC), India, ASEAN-10, and the NIEs. ASEAN-10 includes Brunei Darussalam, Cambodia, Indonesia, the Lao People's Democratic Republic, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Viet Nam. The NIEs comprise Hong Kong, China; the Republic of Korea; Singapore; and Taipei,China. To remove price effects, net exports data are deflated using the average of the export and import price indexes for each period. The cut-off value was also set at $100 million to highlight the trend in high-valued products while addressing sample size issues.
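For concreteness, the deflation, factor-intensity grouping, and cut-off steps described above can be sketched in a few lines of pandas. The file names and column layouts below are illustrative assumptions, not the authors' actual data pipeline.

```python
import pandas as pd

# Assumed layouts: trade data by 4-digit HTS code and year, and export/import
# price indexes (2001 = 100) by year. All names are hypothetical.
trade = pd.read_csv("usitc_trade.csv")        # columns: hts4, year, exports, imports
prices = pd.read_csv("price_indexes.csv")     # columns: year, px, pm
groups = pd.read_csv("factor_intensity.csv")  # columns: hts4, group (HT/LT/AR/RME)

df = trade.merge(prices, on="year").merge(groups, on="hts4")

# Deflate nominal net exports with the average of the export and import
# price indexes, yielding real net exports in 2001 prices.
deflator = (df["px"] + df["pm"]) / 2 / 100
df["real_net_exports"] = (df["exports"] - df["imports"]) / deflator

# Apply the $100 million cut-off and sum by factor-intensity category.
high_value = df[df["real_net_exports"] > 100e6]
print(high_value.groupby("group")["real_net_exports"].sum())
```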
As seen in Table 1, on average over 2007-2008, there were no fewer than 231 4-digit HTS products with US real net exports to the world of over $100 million, and 395 sectors that averaged at least $10 million in real net exports. Real net exports to developing Asia of over $100 million, meanwhile, stood at 103 categories, almost half the total number of product categories in which the US has real net exports of over $100 million, while those over $10 million stood at 177 (Table 2). The data therefore suggest that the international competitiveness of the US is broad-based and covers a very wide range of product categories.
III. Correlation Analysis of US Real Net Exports
This section examines whether there is reasonable consistency between the items for which the US tends to have positive real net exports and large export values. A correlation analysis is performed between US real net exports to the world and to developing Asia as a whole, and for the various key countries and subregions (the PRC, India, NIEs, and ASEAN-10), during the 2-year base period 2007-2008. The sample is limited to trade values greater than $100 million to highlight the trend in high-valued products.
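A minimal sketch of this correlation step is shown below, assuming the real net export figures have already been assembled into a product-by-destination table; the file name and column names are hypothetical.

```python
import pandas as pd

# Hypothetical table: rows are 4-digit HTS products, columns are destinations,
# values are average real net exports over 2007-2008.
panel = pd.read_csv("real_net_exports_2007_08.csv", index_col="hts4")

# Restrict to high-valued products: real net exports to the world > $100 million.
sample = panel[panel["world"] > 100e6]

# Pearson correlation between net exports to the world and to each partner.
for partner in ["developing_asia", "prc", "india", "nies", "asean10"]:
    r = sample["world"].corr(sample[partner])
    print(f"{partner}: r = {r:.2f}")
```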
Since developing Asia figures prominently among the US's top trading partners, we expect a positive correlation between US real net exports to the world and those to developing Asia for high-valued products. The first row in Table 3 indeed shows positive and high correlations for real net exports of over US$100 million between the US and key Asian countries and subregions. This indicates that increases in US exports to the world seem to be driven by US exports to developing Asia. This relationship, however, slowly weakened in the years leading up to the crisis, reflecting a narrowing of the US trade balance with the world and the toll taken on global demand by the global economic crisis that began in 2007 (James 2010).
Extending the analysis further to the four subcategories or factor intensities (Table 3, rows 2-5) reveals a negative correlation from 2006 to 2008 in one category: low-technology manufactures (LT). This is because, despite the US recording positive net exports of LT manufactures to the world, it is a net importer of this type of product from developing Asia. As seen from the results in Table 3, this negative correlation seems to be caused by the negative trade balance with the PRC and the NIEs. This suggests that the NIEs and the PRC are more competitive in this product category vis-à-vis the US, while the US remains more competitive with regard to other export partners. On the other hand, the correlation coefficients in HT manufactures are positive and high in all combinations. This is not so surprising, since, given its level of development, the US is well known for producing high-value capital equipment and HT products. It has high export values in a number of export industries like vehicles and scientific transport equipment (HTS 87 and 88), which comprised 15.6% of total US exports in 2007-2008, and machinery and mechanical appliances (HTS 84 and 85), about 26.1% of total exports during the same period.
Similarly, the correlation coefficients are positive and high in the AR and RME industries, except with India in AR products.
IV. Trends in US Real Net Exports by Factor Intensity
We now turn to the trends in US real net exports by factor intensity and by countries and subregions in developing Asia. Overall, US real net exports declined in 2009. In 2007-2008, US real net exports to the world of over $100 million stood at $246.3 billion, 29% higher than the value in 2006. About 57.8% of this is due to trade in HT manufactures, or products with high research and development (R&D) intensity, such as aerospace, computers, pharmaceuticals, scientific instruments, and electrical machinery. But in 2009, total net exports plunged by 35% as the global economic crisis continued to slash global trade. HT manufactures suffered considerably as trade in high-valued HT products dropped 50%, from $142.4 billion in 2007-2008 to just $71.9 billion in 2009. Despite the drop, high-valued HT manufactures remain the top exportable products of the US. AR products, on the other hand, showed signs of resilience, as the value of their net exports in real terms declined by only 10% during the same period.
US real net exports to developing Asia also declined across different products, but some industries remained resilient. In particular, net exports to ASEAN-10 of high-valued HT manufactures in 2007-2008 stood at about $12.6 billion in real terms, roughly 74% of total real net exports to the ASEAN-10 region (Figure 1). In 2009, this dropped by 56.7%, reducing its share in the total to just about 58%. Though real net exports of over $100 million overall dropped by about 46.9%, the market for HT manufactures in the region remains promising because it still accounts for over half of total real net exports to the region. At least 12 HT products, mainly electrical machinery and equipment including plastics and rubbers, remained in the top 20 products traded in 2009 (Appendix Table 1). Notable, however, is the rise in the share of AR products. Though its value dropped by 12.3% in 2009, the share of AR products increased from about 18% of total real net exports in 2007-2008 to about 33% in 2009. Cereals, oil seeds, cotton, animal feeds, and dairy products remained the top exported AR products to the ASEAN-10 (Appendix Table 1).
Figure 1. US real net exports to ASEAN-10 by factor intensity (shares of total). 2007-2008: HT 74%, AR 18%, RME 8%, LT 0%; 2009: HT 58%, AR 33%, RME 9%, LT 0%. AR = agriculture-related, HT = high-technology manufactures, LT = low-technology manufactures, RME = raw materials and energy. Note: Real values are derived using the average of export and import price indexes, from a sample where US net exports to the world are over $100 million. Source: Authors' calculation using data from USITC (2010).

US real net exports of high-valued HT products to India also dropped, from $6.1 billion in 2007-2008 to just $3.1 billion in 2009, with the biggest impact in HT manufactures (whose share dropped from a high of 80% of the total in 2007-2008 to just 50% in 2009, as seen in Figure 2). Of particular interest, however, is the 29-percentage-point rise in the share of RME real net exports in 2009: their share of the total improved to 45% from just 16% in 2007-2008, due mainly to increases in trade of precious stones and metals, iron and steel, and wood pulp (Appendix Table 2). Continuous trade in fertilizers and in nuclear reactors, boilers, machinery, and mechanical appliances has also kept HT manufactures a buoyant industry. Real net exports of AR products have also improved slightly.
An almost identical trend for HT manufactures was seen in US real net exports to the NIEs. Real net exports of HT manufactures dropped by 43.1% in value, with their share falling from 71.2% in 2007-2008 to 58.2% in 2009 (Figure 3). But some HT products, like machinery and mechanical appliances and electrical equipment, measuring and medical instruments and apparatus, and plastics, remained the top exports to the NIEs. AR products like corn, soybeans, nuts, wheat, and meat also remained in the top 20 (Appendix Table 3). Overall, the value of net exports in real terms to the NIEs dropped 35% in 2009.
Figure 3. US real net exports to the NIEs by factor intensity, 2007-2008 and 2009. AR = agriculture-related, HT = high-technology manufactures, LT = low-technology manufactures, RME = raw materials and energy. Note: Real values are derived using the average of export and import price indexes, from a sample where US net exports to the world are over $100 million. Source: Authors' calculation using data from USITC (2010) Interactive Tariff and Trade DataWeb, Version 3.1.0, available: dataweb.usitc.gov/, downloaded 15 February 2010.

Though HT products hold the lion's share of the overall real net exports pie to the PRC, their share is less dominant than in the other regions. AR products like soybeans, cotton, meat, and edible offal of poultry remained top US exports in real net terms to the PRC, indicating a high demand for food in the PRC (Appendix Table 4). Over the years, the pattern of food consumption in the PRC has changed in response to rising incomes and a growing population. In particular, consumption of meat has increased. Consumption of soybeans, whether directly as food (tofu, meat substitutes, soy sauce, and other products) or extracted as oil, has also increased. In 2009, net exports in real terms of AR products to the PRC increased by 15% from the $7.5 billion average in 2007-2008. Overall, real net exports to the PRC dropped only 8.5% in 2009, compared with the double-digit declines in India and other regions.
Figure 4. US real net exports to the PRC by factor intensity, 2007-2008 and 2009. AR = agriculture-related, HT = high-technology manufactures, LT = low-technology manufactures, RME = raw materials and energy. Note: Real values are derived using the average of export and import price indexes, from a sample where US net exports to the world are over $100 million. Source: Authors' calculation using data from USITC (2010).

For developing Asia as a whole, 231 detailed products met the criteria, with an overall average value of over $246 billion (in 2001 prices), nearly 58% of which was in HT manufacturing (Table 1). In the precrisis period, the US had net exports in real terms to developing Asia of at least $10 million per year for 177 of the 231 products, with an even greater concentration in the HT category (60%). In contrast to the precrisis performance, in 2009 there was a large shift in US net exports from HT to the other three categories, especially agriculture. The trend is similar for real net exports to developing Asia of at least $100 million (Figure 5).
V. Conclusion
The analyses of this paper suggest that much of the apparent shift in product shares was a result of the worldwide collapse in demand for high-technology products, particularly new aircraft (the most important product in US net exports in "normal" years) and information technology products. With new commercial airliners coming off the assembly lines soon, this shift is likely to be reversed. Interestingly, India, ASEAN-10, and the NIEs are destinations for US HT products to an even greater extent than the world as a whole. In contrast, the PRC tends to receive a lower share of HT products (28%) but larger shares of AR (41%) and RME products (30%) than the world as a whole.
As developing Asia renews its emphasis on improving the quality of life, health, and the environment, its demand for US-produced high-technology goods is likely to accelerate and thus contribute to a more balanced global trade relationship. This will be mutually beneficial as the US seeks to raise saving and exports and to curb consumption and imports.
About the Paper
William E. James and Shiela Camingue examine the composition and prospects for growth of net exports of the United States (US) to the world and to developing Asia. They find that much of the apparent shift in export product shares was a result of the worldwide collapse in demand for high-technology products, particularly new aircraft and information technology products. Nonetheless, India, ASEAN-10, and the newly industrialized economies are destinations for US high-technology products to an even greater extent than for the world as a whole. In contrast, the People's Republic of China tends to import a lower portion of high-technology products but a larger share of agriculture-related and raw materials and energy products than the world as a whole.
About the Asian Development Bank
ADB's vision is an Asia and Pacific region free of poverty. Its mission is to help its developing member countries reduce poverty and improve the quality of life of their people. Despite the region's many successes, it remains home to two-thirds of the world's poor: 1.8 billion people who live on less than $2 a day, with 903 million struggling on less than $1.25 a day. ADB is committed to reducing poverty through inclusive economic growth, environmentally sustainable growth, and regional integration. Based in Manila, ADB is owned by 67 members, including 48 from the region. Its main instruments for helping its developing member countries are policy dialogue, loans, equity investments, guarantees, grants, and technical assistance.
Successful treatment with rituximab for central nervous system vasculitis caused by Epstein-Barr virus-associated lymphoproliferative disorder with immunoglobulin M gammopathy
Monoclonal gammopathy of undetermined significance (MGUS) is associated with several autoimmune conditions, including central nervous system (CNS) vasculitis. Epstein-Barr virus (EBV) is a pathogen capable of triggering a systemic immune response and is involved in the occurrence of a wide range of B-cell lymphoproliferative disorders. In systemic autoimmune diseases, EBV infection is suspected to play a central role in pathogenesis. Here, we present a case thought to represent a systemic autoimmune disease with CNS vasculitis that developed after EBV infection, demonstrating that rituximab is effective for its treatment.
proportion of immune cells with autoimmunity [4]. Similarly, autoimmune disease can occur after EBV infection, but reports on the CNS are still scarce. Here, we present the case of a 42-year-old woman with CNS vasculitis associated with monoclonal gammopathy of the immunoglobulin M (IgM) kappa type after infection with EBV. Steroid treatment produced only a partial effect, but rituximab administration led to immediate and complete remission. Rituximab may be considered as a treatment option for patients suspected of having CNS vasculitis following viral infection, particularly EBV infection.

Case Report

The patient had a 10-year history of episodic paresthesia in her fingers and toes when exposed to cold, which was diagnosed as limited systemic sclerosis 3 years before admission. The patient also had a history of recurrent umbilical hernia and pelvic organ prolapse of unknown etiology. Initial cerebrospinal fluid (CSF) analysis revealed pleocytosis of 459 white blood cells/mm³ (leukocytes 88%; lymphocytes 12%), an elevated protein level of 161 mg/dL, and low glucose (36 mg/dL), with a positive EBV DNA polymerase chain reaction (PCR) result. The initial magnetic resonance imaging (MRI) showed multifocal patchy T2 high signal intensity in the bilateral cerebral hemispheres with hemorrhagic changes (Figure 1A and B). She was treated with intravenous vancomycin (aiming for a predose level of 15-20 mg/L), ceftriaxone (2 g intravenously [IV] every 12 hours), acyclovir (10 mg/kg), and dexamethasone (10 mg IV every 6 hours). Despite receiving antimicrobial treatment based on the meningoencephalitis diagnosis, the patient's headache gradually worsened and her vision began to blur. She was referred to our hospital for further evaluation after 7 days of antimicrobial treatment.
Upon physical examination, neither the liver nor the spleen was palpable, and there was no edema of the lower extremities. Neurological examination revealed a drowsy mentality, general weakness (MRC [Medical Research Council] grade III), and visual acuity decreased to the point of differentiating only hand motion. Vital signs at initial evaluation were: blood pressure, 119/87 mmHg; heart rate, 64 beats/min; respiratory rate, 18 breaths/min; and temperature, 37.3°C. In the blood tests, the erythrocyte sedimentation rate was 56 mm/hr, C-reactive protein was 3.09 mg/dL, serum cryoglobulin was positive, and mild hyponatremia (133 mEq/L) was observed. Except for the above, no abnormal findings such as anemia were observed in the blood tests. In the complete blood count, leukocytes were 2,660/μL, hemoglobin was 8.6 g/dL, platelets were 213,000/μL, mean cell volume was 80.3 fL, and mean corpuscular hemoglobin was 23.6 pg. Biochemical tests found a total protein of 10.2 g/dL, albumin of 3.26 g/dL, aspartate aminotransferase of 24 IU/L, alanine aminotransferase of 19 IU/L, alkaline phosphatase of 52 IU/L, lactic dehydrogenase of 186 IU/L, total bilirubin of 0.6 mg/dL, blood urea nitrogen of 12 mg/dL, and creatinine of 0.89 mg/dL. Serum electrolytes were: Na, 133 mEq/L; K, 4.4 mEq/L; Cl, 112 mEq/L; total calcium, 8.4 mEq/L; and ionized calcium, 4.63 mEq/L. Uric acid was 4.3 mg/dL, serum cholesterol was 94 mg/dL, and serum triglycerides were 24 mg/dL; no proteinuria was observed on urinalysis.
No specific antibodies against neuronal cell-surface or synaptic proteins were found in the serum or CSF. In addition, viral PCR in the CSF and serum was negative for all viruses tested, including EBV. Serum protein electrophoresis was normal (total protein, 7.0 g/dL; M-spike, 0%), but immunotyping showed a dim monoclonal band against the anti-IgM and anti-kappa antisera, suggesting monoclonal gammopathy of the IgM kappa type. There was no evidence of clonal plasma cell infiltration in bone marrow biopsies. MRI showed aggravated multifocal patchy T2 high signal intensity in the bilateral cerebral hemispheres compared with the previous study (Figure 1C). A positron emission tomography scan revealed relatively mild hypometabolic lesions in the left high parietal and right occipital cortices, suggesting CNS inflammation (Figure 2). Conventional angiography showed stenoses of multiple medium-sized brain arteries in a beads-on-a-string pattern, a typical finding of CNS vasculitis (Figure 3). The patient received high-dose corticosteroids based on the clinical diagnosis of CNS vasculitis. After steroid administration, visual acuity and motor power recovered quickly, but the patient continued to complain of gait imbalance. Intravenous rituximab (375 mg/m²) was administered at regular intervals (weekly for the first four sessions, then monthly thereafter). By the end of the fourth cycle, the patient's clinical symptoms had recovered completely and imaging findings had improved (Figure 1D). The CSF profile also improved, with 3 white blood cells/mm³, normal protein (40 mg/dL), and glucose of 62 mg/dL (serum glucose, 127 mg/dL). Rituximab was maintained on a monthly schedule, and the patient remained symptom-free for over a year. Furthermore, the T2 high signal seen on MRI almost disappeared after the 11th cycle of rituximab (Figure 1E).
Discussion
We present a case of CNS vasculitis that is presumed to have developed after EBV infection. At first, brain involvement of systemic sclerosis was considered because of her past medical history. However, characteristic brain involvement of systemic sclerosis, such as small vessel calcification and intracerebral calcification [5], was not observed. Cryoglobulin-related vasculitis was also considered but was excluded because angiography showed vasculitis involving medium-sized rather than small blood vessels and because the disease responded well to steroids [6].
Although the precise etiology of CNS vasculitis is unknown, infectious agents have been proposed as triggers [7]. EBV is a pathogen capable of triggering a systemic immune response, including in the CNS [8]. EBV is a human herpes virus, and 90%-95% of the adult human population carries EBV as a chronic latent infection [4]. Most EBV infections are asymptomatic, but EBV sometimes causes a systemic infection or reactivation that may directly involve the CNS [9]. The nervous system is clinically involved in EBV infection in 0.5%-7.5% of individuals, and the most common CNS complications of EBV infection include encephalitis, cerebellitis, meningitis, cranial nerve palsies, and myelitis [7]. However, there are few reports of CNS vasculitis associated with EBV infection. Unfortunately, treatment has not yet been established.
In our case, CSF pleocytosis persisted even after antimicrobial treatment, and considering the negative conversion of EBV DNA, the most likely explanation is that CNS vasculitis occurred after EBV infection. Kim et al. [10] recently reported a patient who was presumed to have developed acute disseminated encephalomyelitis after EBV infection and was treated with rituximab after failing steroid and IV immunoglobulin therapy. As in our case, in that report EBV DNA converted to negative and the MRI findings improved after treatment with rituximab. The pathological development of EBV-related neurological diseases can be immune-mediated, due to direct infection, or both [11]. EBV infection may directly or indirectly contribute to the pathogenesis of EBV-associated vasculitis through an immune-mediated reaction. EBV primarily targets B cells via interaction with the viral envelope glycoprotein and activates primary human B lymphocytes [12]. The preference of EBV for B cells may explain why the anti-CD20 monoclonal antibody rituximab, which targets B cells, resulted in significant clinical improvement.
Several studies demonstrated a higher incidence of EBV positivity in lymphoproliferative disorder patients than in the general population [13]. A small percentage of these carriers, particularly those with immunodeficiency, develop EBV-positive lymphoproliferative disorders, even though some disorders also develop in the general population [13]. In adults, EBV-positive lymphoproliferative disorders may be caused by dysregulation of the immune response to EBV infection, reduced immunity with aging, and iatrogenic immune suppression [14]. We believe that several autoimmune responses followed sequentially due to changes in immune responses after EBV infection.
Monoclonal gammopathy, whether malignant or of undetermined significance (MGUS), results from clonal proliferation of differentiated plasma cells producing a homogeneous whole immunoglobulin or light chain [15]. Reactivation of EBV has been implicated in the pathogenesis of monoclonal gammopathy [16]. Excessive production of abnormal clonal gamma globulins, or paraproteins, alters the circulation by increasing its viscosity [17]. Several studies have examined the predictive value of the M protein as a marker of lymphoproliferative disease [18], but further studies are still needed.
The patient presented here also had a history of pelvic organ prolapse and umbilical hernia, suggesting that she may have had a connective tissue disorder. Systemic autoimmune diseases (SADs) are a group of connective tissue diseases with diverse, yet overlapping, symptoms and autoantibody development [4]. Because of the relationship between SADs and MGUS, genetic testing was performed, but no abnormalities were found, and no one in the family showed similar symptoms.
In conclusion, we report a lymphoproliferative disorder that appeared after EBV infection in the form of systemic sclerosis and CNS vasculitis. Steroids can be used as first-line treatment, but if the response to steroids is limited, rituximab can be considered as another treatment option for the patient.
Conflicts of Interest
Kon Chu has been on the editorial board of encephalitis since October 2020. He was not involved in the review process of this case report. No other potential conflicts of interest relevant to this article are reported.
Physico-chemical parameters and acceptability of spleen-treated beef patties
Iron deficiency is one of the world's most common disorders and occurs when the amount of iron available is insufficient to meet an individual's needs. Spleen is known as a food product rich in iron and is a cheap offal. Therefore, consumption of spleen, both directly and indirectly, is advised by the medical profession, especially for the treatment of iron deficiency anemia. However, consumption of cooked spleen is unacceptable to many people due to its bloody structure. In this study, the effect of adding spleen at 0, 5, 10 or 15% to beef patties was studied, and physico-chemical (pH, color and iron content) and sensory changes (color, odor, chewiness, flavor and overall acceptability) in the patties were investigated. With incremental increases in spleen content, pH and iron content increased and lightness L* and redness a* values decreased, but yellowness b* values were not significantly different between the patties with added spleen (P>0.05). In terms of sensory analysis, panelists generally appreciated the patties with 10% spleen more than the other spleen levels.
Introduction
Although iron is an abundant element on Earth, iron deficiency (anemia) is one of the most common human disorders worldwide. Iron deficiency occurs when the amount of iron available is insufficient to meet an individual's needs. Estimates indicate that over 2 billion people suffer from iron deficiency, and more than half of them are anemic. Anemia is especially prevalent among pregnant women, infants, and children under the age of 2 years [1,2]. Consequently, various treatments exist for iron deficiency, such as iron supplementation, consumption of foods rich in iron (liver, spleen, fish, egg, etc.), and products enriched with iron. For example, 100 g of raw beef spleen contains approximately 19 mg of iron, and this amount increases up to 47.15 mg per 100 g after cooking [3]. Although spleen is fairly rich in iron compared with other products (egg, fish, etc.), it has some negative sensory properties such as texture, odor, and flavor. Therefore, it is not an offal frequently preferred by consumers. The aim of this study was to determine the effect of adding spleen to beef patty formulations on some physico-chemical (pH, color, iron content) and sensory properties of beef patties.
Preparation of beef patties
Fresh beef brisket and rib meat were obtained from a local meat processor in Denizli. Approximately 2 kg of fresh meat was minced using a meat mincer (PM-70, Mainca, Spain) through a plate with 3 mm holes. The minced beef was divided into four portions to prepare the following formulations: control (without spleen) and 5, 10 and 15% added spleen. For patty preparation, minced meat was mixed with salt (1%) and spleen, and then kneaded by hand for 15 minutes. Patties (25 ± 1 g) were molded using a metal shaper (6 cm diameter and 1 cm thickness) and polystyrene foam plates and stored at 2°C until analysis.
Analysis
The color values (L* (lightness), a* (redness), and b* (yellowness)) of the patties were assessed with a colorimeter (Hunterlab Miniscan XE Plus, USA). To measure pH, 10 g of patty was homogenized with 90 mL of distilled water and the homogenate pH was measured with a digital pH meter (Crison Basic 20, Spain). Before pH measurements, the pH meter was standardized using pH 4, 7 and 10 buffer solutions (Merck, Germany).
A Perkin-Elmer Analyst 700 atomic absorption spectrometer (AAS) (Norwalk, CT, USA) was used for iron (Fe) analyses. The measurements were conducted in an air/acetylene flame, with the operating parameters for iron set as suggested by the manufacturer. All measurements were performed in triplicate. Patty samples (1.0 g) were weighed on an analytical balance, and then 10 mL of HNO3 was added. This mixture was predigested by standing in open vessels for a minimum of 15 min before sealing the vessels. Digestion was conducted using a microwave system (power set at 1030 and 1800 watts, ramp time 20-25 min, hold 15 min). Preliminary experiments showed that a 15 min hold digestion time at 200°C was suitable for producing digests without insoluble materials.
The patties were evaluated by a semi-trained panel of 20 members selected from students of the Pamukkale University Department of Food Engineering. The patties were cooked in a conventional oven (Termikel 13007, Turkey) at 130°C for 20 min until the internal temperature reached 80°C; all cooked patties were then coded with 3-digit random codes and offered to the panelists in random order. The sensory properties color, odor, chewiness, flavor and overall acceptability were evaluated using a seven-point hedonic scale, ranging from "dislike extremely" (score: 1) to "like extremely" (score: 7).
The statistical design of the study was a 4 (treatments) × 3 (replications) randomized block design, and all parameters were measured in duplicate (n = 24). A one-way analysis of variance (ANOVA) and Duncan's multiple range test were performed to evaluate the effects of the treatments using SPSS for Windows (SPSS version 15.0). The critical difference was determined at the 5% significance level.
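For readers without SPSS, the same one-way ANOVA can be reproduced with scipy/statsmodels. The iron values below are invented for illustration only, and Tukey's HSD is used as a stand-in post-hoc test because Duncan's multiple range test is not available in these libraries.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative iron contents (ug/g) for the four treatments (values invented;
# duplicate measurements over three replications give n = 6 per treatment).
control = np.array([11.5, 11.9, 12.0, 11.7, 11.8, 11.9])
s5 = np.array([17.0, 17.4, 16.8, 17.2, 17.1, 16.9])
s10 = np.array([22.5, 22.9, 23.1, 22.7, 22.8, 23.0])
s15 = np.array([28.2, 28.6, 28.4, 28.7, 28.3, 28.6])

f_stat, p = f_oneway(control, s5, s10, s15)
print(f"One-way ANOVA: F = {f_stat:.1f}, p = {p:.4g}")

# Post-hoc separation of treatment means at the 5% significance level.
values = np.concatenate([control, s5, s10, s15])
labels = ["0%"] * 6 + ["5%"] * 6 + ["10%"] * 6 + ["15%"] * 6
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```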
Results and Discussion
Color plays an important role in both the quality and consumer acceptance of meat and meat products. The physico-chemical properties (color, pH and iron content) of the beef patties are presented in Table 1. Patties containing spleen had lower L* (lightness) values than control patties (P<0.05). A statistical difference in a* (redness) values among the patties was also observed, as shown in Table 1. As expected, L* (lightness) and a* (redness) values decreased with increasing amounts of spleen, since spleen contains substantial red pigment. The patties containing 15% spleen had the lowest L* and a* values among the patties (P<0.05). The addition of spleen did not significantly alter the b* (yellowness) values (P>0.05), although the b* values of the patties fluctuated.
The pH of the patties increased from 6.50 to 6.90 as the proportion of spleen increased from 0 to 15% in the beef formulation. Patties with spleen had slightly higher pH than the control patties (P<0.05). Iron contents of the patties ranged between 11.8 and 28.5 µg/g. The iron content of patties with 15% spleen was approximately 2.5-fold higher than that of control patties. Moreover, as the proportion of spleen increased in the beef patty formulations, iron content increased, and the differences were statistically significant (P<0.05). Results of the sensory analysis (color, odor, chewiness, flavor and overall acceptability) of the beef patties are given in Table 2. Patties with 15% spleen differed in color from control patties (P<0.05), while color differences between control patties and patties with 5% spleen were not significant (P>0.05). Although patties with 5% spleen had the highest sensory odor scores (6.00), there was no odor difference between patties with 5% and 10% spleen (P>0.05). Chewiness scores did not differ between the patties (Table 2) (P>0.05). The beef patties with 15% spleen had a significantly different flavor from beef patties prepared with less spleen or without spleen. The overall acceptability scores of the patties with 5% spleen and the control patties were similar (P>0.05). Control patties were the most acceptable overall, while patties with 15% spleen had the lowest overall acceptability (5.68±0.24 and 5.06±0.30, respectively). Addition of 5% spleen did not produce a negative impact on sensory properties. However, higher percentages of spleen in the patty formulations resulted in lower sensory scores, except for odor. Krishnan and Sharma (1990) reported that offal (rumen and heart meat in equal proportions) in buffalo meat sausages did not produce any negative effect on sensory properties (appearance, color, flavor, juiciness and overall acceptability).
Conclusion
Spleen is a food (along with liver) recommended by the medical profession for the treatment of iron deficiency anemia. The development of beef patties fortified with spleen could help older adults, pregnant women and infants achieve their targeted iron requirements, thus reducing the risk of anemia. Our study of the physico-chemical properties of the patties showed that increasing the percentage of spleen incorporated in the beef patties does not affect b* (yellowness), while pH slightly increases, the L* (lightness) and a* (redness) values decrease, and iron content increases. The findings from this research showed that beef patties with 5% spleen and control patties (no spleen) were similarly favorably assessed in terms of sensory scores. Spleen thus has the potential to be used to successfully enrich beef patties with iron, providing a new and healthier product.
Quantitative mass spectrometry-based techniques for clinical use: Biomarker identification and quantification
The potential for development of personalised medicine through the characterisation of novel biomarkers is an exciting prospect for improved patient care. Recent advances in mass spectrometric (MS) techniques, liquid phase analyte separation and bioinformatic tools for high throughput now mean that this goal may soon become a reality. However, there are challenges to be overcome for the identification and validation of robust biomarkers. Bio-fluids such as plasma and serum are a rich source of proteins, many of which may reflect disease status, and due to the ease of sampling and handling, novel blood borne biomarkers are very much sought after. MS-based methods for high throughput protein identification and quantification are now available such that the issues arising from the huge dynamic range of proteins present in plasma may be overcome, allowing deep mining of the blood proteome to reveal novel biomarker signatures for clinical use. In addition, the development of sensitive MS-based methods for biomarker validation may bypass the bottleneck created by the need for generation and usage of reliable antibodies prior to large scale screening. In this review, we discuss the MS-based methods that are available for clinical proteomic analysis and highlight the progress made and future challenges faced in this cutting edge area of research.
What is clinical proteomics?
The discovery of novel, disease-related biomarkers by proteomic analyses of readily accessible bio-fluids such as plasma, using liquid chromatography (LC)-coupled mass spectrometry (MS)-based methods, is an exciting prospect for improved patient care. The major goal of clinical proteomics is to use these highly specific disease/pathology-related signatures to enhance current clinical practice by enabling accurate early diagnosis, selecting the appropriate therapeutic strategy, and monitoring disease progression and/or possible side effects on a patient-by-patient basis [1]. It is only with the use of recent advances in analytical biochemistry, such as MS technologies and high resolution liquid phase separations, that personalised medicine may become a reality [2].
In order for MS-based proteomics to be successful, clinically effective novel biomarkers must have high sensitivity (indicate a positive test for patients who are positive for the disease), high specificity (negative for patients who do not have the disease) and be sufficiently robust to operate in many different centres [3]. This demands rigorous biomarker identification and qualification strategies and the need for well designed, large scale clinical trials to validate the use of novel proteomic signatures [1]. It is also essential that the translation from pre-clinical findings to regulatory-approved biomarkers is undertaken with maximum possible efficiency and realism, with appreciation for the many challenges that this entails.
Why use serum/plasma?
Much emphasis has been placed on the identification of novel blood borne biomarkers due to the ethical issues pertaining to biopsies and the relative ease and low cost of blood sampling. Serum and/or plasma also offer the option of longitudinal sampling and monitoring of individuals, which may lead to the detection of disease in patients who are asymptomatic or have early stage disease. Serum represents the soluble fraction of blood that remains following the clotting process, and thus is considered a more simplified matrix than plasma, which contains all soluble blood borne factors, including clotting factors. While serum is thought to be less complex than plasma, such that the probability of identifying novel proteins may be increased, the clotting process is not uniform (unlike the preparation of plasma) and may also lead to the loss of novel factors which remain bound to the insoluble protein clot [4].
Diseases that have received the most attention for blood borne biomarker discovery include cancer and cardiovascular disease [5,6]. Other bio-fluids, including urine (reviewed in [7]), cerebrospinal fluid (CSF) [8], nipple aspirate fluid (NAF) [9][10][11][12] and tumour ascites fluid, have also received attention as potential sources of novel clinically relevant biomarkers. Indeed, in the latter three examples it is thought that these fluids may contain higher concentrations of disease-specific proteins due to their proximity to the primary lesion. Aside from the obvious benefits of blood sampling over procedures such as biopsy and CSF withdrawal, it is thought that blood, unlike urine (except for specific diseases of the urological system), directly contacts the disease site and thus is more likely to contain primary biomarker information.
Mass spectrometry
Biomarker discovery using mass spectrometry to identify and quantify the protein components of bio-fluids such as plasma is based upon the measurement of the mass of proteins and peptides as determined by their mass:charge ratio (m/z). This can be determined by ionisation of a sample to generate charged species which, depending on their m/z value, will reach the detector at a specific time (time of flight, ToF mass analyser), or by 'trapping' the ions in an electric field and then sequentially filtering them out based on size (smallest first) and measuring the time at which they arrive at the detector. In order to generate peptides that are small enough to be efficiently ionised, the sample must first be digested with trypsin prior to MS analysis. Because trypsin cleaves proteins at specific amino acid residues, the measured m/z values can be searched against a database of cleavage products (of known m/z values) in order to generate a 'peptide mass fingerprint'. Protein sequence information can be more accurately obtained by MS/MS analysis. As described above, ionised peptides are selected based upon their m/z value; once all the other ions have been filtered out, they are induced to fragment by collision with an inert gas, such as argon or nitrogen. This causes fragmentation of the peptide along the peptide backbone in a highly predictable manner, and thus the times at which these fragments (with particular m/z values) reach the detector allow identification of the peptide/protein sequence.
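The peptide mass fingerprinting idea can be illustrated with a short in silico digestion. This is a minimal sketch assuming standard tryptic cleavage rules (C-terminal to K/R, not before P) and textbook monoisotopic residue masses; the toy sequence is arbitrary.

```python
import re

# Monoisotopic residue masses in daltons; one water is added per peptide.
MONO = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER, PROTON = 18.010565, 1.007276

def tryptic_peptides(seq):
    """Cleave C-terminal to K or R, except when the next residue is proline."""
    return [p for p in re.split(r'(?<=[KR])(?!P)', seq) if p]

def mz(peptide, charge=2):
    """m/z of a peptide ion carrying `charge` protons."""
    mass = sum(MONO[aa] for aa in peptide) + WATER
    return (mass + charge * PROTON) / charge

# Toy sequence; a real search engine would match these m/z values against a
# database of theoretical digests to produce the peptide mass fingerprint.
for pep in tryptic_peptides("MKWVTFISLLLLFSSAYSR"):
    print(pep, round(mz(pep), 4))
```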
Challenges for biomarker identification using MS
There are several practical challenges for the use of MS methods in the discovery and usage of robust clinical biomarkers, the details of which will be discussed below. Foremost is the need for accurate quantification from MS or MS/MS spectra coupled with protein/peptide identification. Biomarker information must be strictly quality controlled and validated and the appropriate statistical methods must be applied. Ultimately this may mean that individual steps be combined into a workflow that ideally allows for the automation of most of the tasks to minimise external sources of error.
Variation
Recent advances in MS technology have led to the development of equipment with superior sensitivity and specificity that has made the detailed study of complex biological fluids such as plasma possible. Although the blood has been described as the most comprehensive human proteome, a circulating representation of all body tissues and thus reflective of disease status [4], the question still remains as to whether changes in plasma or serum form a linear relationship with events that occur at the site of disease or injury [1]. However, blood sampling is a routine diagnostic tool in the clinic and therefore requires extensive investigation to achieve identification of novel biomarkers [13].
The proteome is a constantly changing entity, with complexity generated at many levels. It is essential that signatures are verified as being disease related, rather than as a result of the background noise inherent in any complex system, further adding to the challenge of biomarker identification. Indeed, due to the heterogeneous nature of human beings and their diseases, a panel of biomarkers rather than a single marker may be required to achieve the high sensitivity and specificity required for clinical applications [14]. With particular regard to oncology, most studies to date have involved patients with advanced disease, and given that genomic studies indicate that the molecular composition of early and late stage tumours can be different, the hope that these signatures will translate to early stage pre-invasive lesions where there are no reliable diagnostic tools may prove to be too simplistic [1].
Dynamic range can be addressed by fractionation
When using MS-based methods to obtain clinically relevant information from biological samples, the quantity and quality of identification and quantification are direct functions of sample complexity. In the clinical proteomics setting where serum/plasma is the source material, extensive pre-fractionation steps are essential due to the huge dynamic range of protein concentrations found in the blood. In human plasma, the 22 most abundant proteins represent ∼99% of the total protein mass, with an extraordinary dynamic range (>10 orders of magnitude) from serum albumin at ∼45 mg/mL to cytokines at around 1-10 pmol/mL or lower [4]. In addition, the necessity for tryptic peptides to be generated for direct identification of proteins by MS leads to a concomitant increase in the complexity of a given sample; thus pre-fractionation becomes essential if the (relative) quantity of low abundance proteins is to be determined with accuracy and precision.
These methods generally involve the use of liquid chromatography, including reversed phase (RP) systems and affinity elution to deplete the major abundant proteins, for which several columns are commercially available. These have been designed to deplete the high abundance proteins, including the top 20 (Sigma ProteoPrep20 TM, Sigma-Aldrich, St. Louis, MO), the top 12 (ProteomeLab IgY12, Beckman Coulter, Fullerton, CA) and the top 14, 7 and 6 human proteins (High-Capacity Human-14 (-7, -6) MARS columns, Agilent Technologies Inc., Santa Clara, CA). As an additional pre-fractionation step it is also possible to use strong cation exchange (SCX) chromatography prior to the RP step. Both SCX and RP chromatography are used for fractionation of the sample post-trypsin digestion, prior to MS analysis. This peptide level fractionation enables low abundance peptides to be detected by the mass spectrometer, thus increasing the likelihood of identifying low abundance biomarkers from complex matrices such as plasma.
Quantitative techniques for biomarker ID using MS
Quantification is at the centre of clinical proteomics; without reliable methods to accurately quantify differentially expressed proteins it would not be possible to identify disease biomarkers, and as such, clinical proteomics would fail. Many advances have been made in the field of LC-MS/MS towards this end, and these will be discussed below.
Broadly speaking, quantification techniques have been developed along two lines: the incorporation of labels into peptides and proteins prior to MS analysis, and label-free methods. The use of labels is based on the principle of stable isotope dilution, which states that a stable isotope labelled peptide will behave in a chemically identical manner to its unlabelled counterpart, and thus the two peptides will have identical chromatographic and/or MS properties. Provided that the label imparts a sufficient mass difference between these two peptide forms, their relative abundance may be calculated by comparing their respective signal intensities in the same MS run [15]. Mass tags can be incorporated in a variety of ways: metabolically, chemically or enzymically. In addition, if the identity of the protein(s) of interest is known, quantification can be achieved by spiking the test sample with labelled synthetic peptides for direct comparison with levels of the corresponding endogenous peptide(s).
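The ratio calculation at the heart of stable isotope dilution is simple; the sketch below assumes the extracted-ion intensities for a co-eluting heavy/light peptide pair have already been obtained, and the numbers are purely illustrative.

```python
def heavy_to_light_ratio(light_xic, heavy_xic):
    """Relative abundance of a labelled ('heavy') peptide versus its
    unlabelled ('light') counterpart, from summed extracted-ion-chromatogram
    intensities across the chromatographic peak."""
    return sum(heavy_xic) / sum(light_xic)

# Illustrative XIC intensities sampled across one peptide's elution peak.
light = [1.2e5, 3.4e5, 2.1e5]
heavy = [2.3e5, 6.9e5, 4.4e5]
print(f"heavy/light = {heavy_to_light_ratio(light, heavy):.2f}")
```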
Label free methods generate quantification information by directly correlating the MS signal with the relative or absolute protein quantity. This can be achieved by several different methods. One of these uses the integrated ion intensities in MS mode, where the number and intensity of precursor ions at selected m/z ratios are counted and peak areas from the extracted ion chromatogram (XIC) are calculated. Systematic errors (sample loading, HPLC retention times and MS instrument performance) are minimised by normalisation of peak intensities over the entire run [16], and ion suppression effects are countered by the use of internal standard peptides included in each run at equal concentrations [17]. Alternatively, the spectral counting approach [18][19][20] uses data acquired in MS/MS mode to count the number of fragment spectra that identify peptides of a given protein. These counts are then used to compare abundances between samples, corrected for protein length or the expected number of tryptic peptides [21].
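A common length-corrected form of spectral counting is the normalised spectral abundance factor (NSAF); the sketch below is a minimal implementation, with invented counts and placeholder protein names used purely for illustration.

```python
def nsaf(spectral_counts, lengths):
    """Normalised spectral abundance factor: divide each protein's spectral
    count by its length, then normalise by the sum over all proteins so
    that the values are comparable between runs."""
    sa = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(sa.values())
    return {p: v / total for p, v in sa.items()}

# Toy example: three proteins identified in one LC-MS/MS run.
counts = {"protein_A": 120, "protein_B": 45, "protein_C": 9}
lengths = {"protein_A": 609, "protein_B": 330, "protein_C": 418}
for prot, value in nsaf(counts, lengths).items():
    print(prot, round(value, 3))
```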
However, these methods assume that the linearity of response is the same for every protein, when in fact the spectrum count response is different for every peptide, for example, because the chromatographic behaviour of each peptide will vary. This then necessitates the acquisition of many spectra in order to accurately quantitate levels of any given protein, and as a result low abundance proteins can be difficult to identify and accurately quantify [16]. Meanwhile, saturation of the detector will occur at higher spectral counts, again with different levels for different proteins, in turn leading to potential problems with dynamic range [15]. As such, the performance of both methods is hampered by a need for large sampling numbers.
However, label free approaches to biomarker discovery by MS are continually being developed and refined. One approach is to combine spectral counting, to give accurate fold changes, with peptide ion intensity measurements using standard peptides to correct for variations in signal (ion abundance) [16,22]. An additional method, known as spectral feature analysis, has recently been developed whereby quantitative and qualitative information can be gathered by aligning and comparing LC-MS datasets without the initial need for MS/MS analysis [23]. The increased costs and processing times associated with labelling have meant that label free techniques have generally been considered advantageous in their application to large scale clinical proteomics, although the introduction of 8-plex iTRAQ reagents may enhance the throughput of labelled approaches. Furthermore, label free techniques are generally considered inferior in their quantification accuracy when compared to methods relying upon stable isotopes [15]. In particular, an early study conducted by Petricoin et al. [24] used surface enhanced laser desorption ionisation (SELDI), a label free approach, to identify a proteomic signature associated with ovarian cancer; however, this study was later disregarded as the results were not reproducible and were most likely due to variables introduced during sample processing [25,26].
In all cases the properties of the mass spectrometer will affect quantification in MS survey mode. For example, detection of low abundance ions may be obscured by background noise, or quantification may be prevented by saturation of the detector. For quantification in MS/MS mode saturation is rarely a problem; however, in all cases truly low abundance peptides may lead to poor quantification due to poor ion statistics (which define the sensitivity of detection: at high data acquisition rates fewer ions entering the mass spectrometer are allocated to the generation of each spectrum, degrading the signal-to-noise ratio so that low abundance peptides may not be identified) [15]. These factors, coupled with the qualities of the label (if a label is used), mean that optimisation of peptide/protein identification and quantification must be achieved by decreasing the sample complexity prior to MS. Decreasing the sample complexity by fractionation (although overall analysis time will be increased) means that a greater number of peptides will potentially be analysed, as more MS time is committed to each one.
The most commonly used MS-based methods for clinical biomarker discovery and their advantages and limitations are summarised in Table 1 and will be discussed in detail below.

Table 1. Characteristics of quantitative mass spectrometry methods (adapted from [15,17]).
Protein identification and quantification using two-dimensional gels and MS
The first method developed for identification of differentially expressed proteins from complex proteomic samples was a combination of two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) fractionation and MS analysis [27,28]. Samples are first separated by 2D-PAGE, which employs a two-step separation technique whereby denatured proteins are separated based upon their isoelectric point (the pH at which the net charge on the protein is zero) followed by separation based upon molecular size. The gels are stained and spots that appear to be differentially expressed are excised and in-gel digested with trypsin prior to MS analysis in order to determine the identity of the protein.
The development of ultrasensitive fluorescent tags with broad dynamic range and linearity of quantification, high performance digital imaging and analysis software, faster identification of spots by MS, large scale application of these techniques, and major progress in genomics and bioinformatics have accelerated the development of 2D gel based proteomics. Technical issues such as gel to gel variability and low sensitivity of detection have been minimised. As 2D-PAGE can only separate proteins in the mass range of 10-300 kDa it can be considered complementary to other techniques, such as SELDI-ToF (discussed later), which can only be used to identify proteins below 20 kDa in size [29]. However, the presence of highly abundant serum/plasma proteins such as albumin and immunoglobulins is a major challenge for the success of 2D-PAGE-MS in the identification of differentially regulated proteins from clinically relevant bio-fluids [30]. These abundant proteins result in large smears that mask lower abundance proteins [31], and thus depletion of these highly abundant proteins prior to 2D-PAGE is essential. The issue of multiple proteins present in a single spot nevertheless remains problematic.
A variation on gel-based 2D separation has been developed, known as the ProteomeLab TM PF 2D system (Beckman Coulter, Fullerton, CA, USA). This is a liquid-phase 2D HPLC fractionation system that separates complex protein mixtures in liquid phase by chromatofocusing in the first dimension followed by high resolution non-porous silica reversed-phase chromatography (RPLC) in the second dimension thereby separating proteins based first upon pI, followed by hydrophobicity [32,33]. In addition, combining this method with iTRAQ tagging offers the potential for reducing sample complexity and identifying proteins that co-elute [34].
Although 2D-PAGE/MS is a relatively low throughput method, it does have the advantage that mass spectrometer analysis time is relatively short, as it is only used to compare differentially expressed proteins. In addition, this technique involves the study of intact proteins, rather than peptides and can therefore distinguish between protein isoforms as well as different post-translationally modified forms of the same protein. Several studies have demonstrated the clinical utility of this method, for example in the identification of differentially expressed proteins associated with neurodegenerative diseases including Alzheimer's and Parkinson's disease (reviewed in [29]). Several studies have also reported identification of urinary biomarkers using 2D gel-based approaches, which also demonstrated a positive correlation between protein abundance and disease stage (reviewed in [7]).
Quantification using stable isotope labelling
For the purposes of this review we will only consider chemical stable isotope labelling; live cell labelling techniques will not be discussed as these methods are not amenable to clinical proteomics. Chemical labels include isotope coded affinity tags (ICAT) [35] and isobaric tags such as iTRAQ [36], which rely on the use of a derivatisation reagent for chemical modification of proteins in a site specific manner. Because these labels are chemically identical, the same peptide from two (or more) samples will behave identically in terms of chromatographic retention and ionisation efficiency, allowing samples to be analysed and compared simultaneously [37] (Fig. 1).
In the case of iTRAQ experiments, throughput can be improved compared to other labelling methods because 6 or 7 experimental samples can be analysed simultaneously rather than 1, although this comes at the cost of a poor duty cycle due to the need to carry out MS/MS on all peptides. On the other hand, because iTRAQ is one of only two tagging technologies (the other being Tandem Mass Tags (TMTs) [44]) where quantification is carried out in MS/MS mode, it offers increased accuracy and more reliable quantification.
Indeed, stable isotope labelling should not affect the physicochemical properties of the peptide.
Chemical labelling techniques currently employed in clinical proteomics research will be discussed below.
ICAT
This method, developed by Gygi et al. [35], specifically targets cysteine residues and allows differentially labelled samples to be individually resolved during MS analysis. The original ICAT reagent contained either zero or eight deuterium atoms; second generation reagents, in particular cleavable ICAT (cICAT), have since been developed and contain nine 13C atoms as the heavy isotope, which imparts a mass difference of 9 Da between labels [38] (Fig. 1). ICAT reagents also contain a biotin group for affinity purification of derivatised peptides prior to MS. This can cause problems during MS due to its bulky nature; however, cICAT reagents circumvent this issue as they contain an acid-cleavable linker between the reactive sulfhydryl tag and the biotin moiety, which allows its removal following affinity purification.
In addition to ICAT, other thiol-specific reagents have been developed (see [15] for review), including a metal-coded affinity tagging method, which also targets cysteine residues [39]. Perhaps due to the issues associated with targeting cysteine residues (one in seven proteins in the vertebrate proteome does not contain cysteine [40]), the use of ICAT in a clinical setting has been limited; however, it has been used to investigate age-related [41] and Alzheimer's disease-associated changes in cerebrospinal fluid proteins [42].
iTRAQ
A further group of labelling reagents are those that have been synthesised to target the peptide amino group and the epsilon-amino group of lysine residues. These tags are considered favourable over methods such as ICAT as they target all tryptic peptides in a sample digest, and thus the depth of coverage is greatly enhanced. In most cases these types of tag utilise N-hydroxysuccinimide (NHS) chemistry or other active esters and anhydrides, for example in the isotope-coded protein label (ICPL) [43], isotope tags for relative and absolute quantification (iTRAQ) [36], tandem mass tags (TMTs) [44] and acetic/succinic anhydride-based methods [45][46][47][48]. Other less used methods include the use of isocyanates or isothiocyanates [49,50], and methylation of lysine residues using formaldehyde [51][52][53].
With the exception of iTRAQ and the new 6-plex TMTs, relative quantification is achieved by integration of the MS signal over isotopomers of 'heavy' and 'light' labelled peptides. iTRAQ differs from these approaches in that it is based upon the use of isobaric tags which are fragmented in tandem MS/MS mode to produce a 'reporter' ion signature in a quiet region of the MS/MS spectrum to allow relative quantification [44] (Fig. 1). Because of the isobaric nature of iTRAQ-labelled peptides, the signal from all peptides can be summed in both MS and MS/MS modes, thus enhancing the sensitivity of detection. This potential to identify and quantify low-abundance proteins in complex samples, coupled with the ability to multiplex up to eight samples in parallel [54] (unlike ICAT, which is limited to two labelled samples per run), suggests that iTRAQ holds the most promise for quantitative biomarker discovery.
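A minimal sketch of the reporter-ion readout described above: intensities in the low-mass 'reporter' region of one MS/MS spectrum are integrated per channel and expressed relative to a reference channel. The channel assignments and peak values are hypothetical (four of the possible eight channels are shown).

```python
# Sketch of iTRAQ-style relative quantification from a single MS/MS spectrum:
# reporter ion intensities are read out of the low-mass region per channel
# and normalised to a reference channel. The spectrum values are hypothetical.
REPORTERS = {114: "control", 115: "disease_1", 116: "disease_2", 117: "disease_3"}

msms_peaks = {114.1: 2.0e4, 115.1: 4.3e4, 116.1: 1.9e4, 117.1: 6.1e4}

def reporter_ratios(peaks, reference=114, tol=0.2):
    """Integrate each reporter channel and normalise to the reference."""
    intensities = {}
    for channel in REPORTERS:
        intensities[channel] = sum(i for mz, i in peaks.items()
                                   if abs(mz - channel) <= tol)
    ref = intensities[reference]
    return {REPORTERS[c]: v / ref for c, v in intensities.items()}

print(reporter_ratios(msms_peaks))
# e.g. disease_1 ~2.2x control for this peptide's parent protein
```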
Because iTRAQ results in fragmentation of all precursors, the use of inclusion lists is necessary to ensure that the same peptides are fragmented each time. For example, two runs may have independently identified 200 proteins each, with only a 50% overlap between them. Repeated experimental iterations should increase this overlap.
Although the iTRAQ reporter ions have m/z values in the "quiet" region of the mass spectrum, any additional peptide ions present in the selection window will adversely affect quantification [15]. Other methods, such as trypsin- or Glu-C-catalysed enzymatic incorporation of ¹⁸O during digestion, avoid side-reaction artefacts; however, different peptides incorporate the label at different rates, and full labelling is rarely achieved [55,56]. Furthermore, enzymatic labelling requires at least a 4 Da mass shift in order to distinguish isotopomer clusters of labelled and unlabelled peptide forms, and as these clusters broaden with increasing peptide mass, enzymatic labelling has limited use for larger peptides [57].
Several papers have been published which highlight the promise of iTRAQ coupled with LC-MS/MS as a tool for identifying potential biomarker signatures indicative of disease from a variety of sources such as serum, CSF and tissue. For example, studies with serum using iTRAQ coupled with LC-MS/MS identified 160 proteins, of which 31 were differentially expressed following traumatic brain injury; three of these (serum amyloid A, C-reactive protein and retinol binding protein 4) were verified independently and shown to have good sensitivity for the early detection of increased intracranial pressure indicative of traumatic brain injury [58].

Fig. 1. Schematic representation of three methods for relative quantification by mass spectrometry (adapted from [30,37]). (a) Protein-level labelling, either by culturing cells in the presence or absence of a 'heavy' isotope amino acid (e.g. stable isotope labelling with amino acids in cell culture (SILAC) [33], not amenable to clinical proteomics) or by chemical derivatisation with methods such as ICAT (as shown), allows two conditions to be tested simultaneously. In the case of ICAT, the 'heavy' and 'light' labels impart a mass difference of 9 Da without affecting the chromatographic properties of the labelled peptides, thus allowing relative quantification in MS. Subsequent MS/MS analysis must be conducted on targeted ion pairs to enable identification of differentially expressed proteins. (b) Peptide-level labelling with isobaric tags such as iTRAQ (shown here), which allows multiplexing of up to eight samples in one run (two are shown for clarity). The different masses of each 'reporter' group are counteracted by a 'balance' group, which confers isobaric properties on each tag in MS mode. Subsequently, multiplexed samples containing the same mix of peptides labelled with different iTRAQ tags will behave identically until they are fragmented during MS/MS. This provides several advantageous properties, as all equivalent peptides will behave identically in LC separation steps, and in MS and MS/MS mode the signal from all peptides may be summed (as they have the same mass), thus enhancing the sensitivity of detection. (c) Label-free methods such as SELDI (shown here) enrich for specific peptides by binding and eluting them from a 'chip' with a particular chromatographic surface prior to MS analysis. Proteins are not identified by this method; instead, peak patterns are derived in order to generate a proteomic profile which is used to compare multiple samples processed via the same method.
iTRAQ-MS/MS has been used to identify 219 proteins in human CSF, with 12 proteins differentially expressed between male and female subjects. This represents a comparable, and in most cases slightly better, penetration of the CSF proteome than previously reported using 2D gel-based methods, and indicates that this is a robust method for clinical analysis of the CSF proteome in diseases such as Alzheimer's or Parkinson's [8].
Studies using iTRAQ tagged tissue samples from patients with head and neck squamous cell carcinoma (HNSCC) followed by multidimensional LC-MS/MS identified a panel of differentially regulated proteins when comparing HNSCC samples with pooled normal controls. Three of these proteins (YWHAZ, stratifin and S100A7) were shown to have high sensitivity and specificity for differentiating normal versus cancerous tissue in an independent HNSCC set and show potential for development as clinically relevant biomarkers for diagnosis of this disease [59]. Studies using endometrial tissue also show promise using iTRAQ to determine differential expression profiles in patients with type I and type II endometrial cancer [60][61][62]. Indeed, there is a growing interest in the use of tagging technology in combination with sensitive MS/MS techniques for use in cancer diagnosis, prognosis or monitoring of treatment and relapse.
Benefits and caveats of label-free approaches
The clinical use of MS-based methods for the proteomic profiling of bio-fluids for diagnostic and/or prognostic information presents many challenges. It is imperative that sample processing should not affect the outcome of any analyses and that the chosen platform should be robust and reliable, thus reducing detrimental effects introduced by experimental variables. Label-free quantification methods are favourable in practical terms as they are relatively high-throughput, requiring no time-consuming and expensive labelling step. Consequently, there is no theoretical limit to the number of samples that can be analysed in any given experiment, as it is not restricted by the number of labels available. However, unlike isotope labelling methods, label-free approaches do not allow for sample multiplexing, and thus may not be faster. In addition, the absence of isotopomeric peptide pairs reduces the spectral complexity at any given chromatographic time, potentially increasing the number of peptides identified, although this advantage does not apply relative to isobaric tagging reagents such as iTRAQ. Although there is evidence that label-free methods show increased dynamic range of quantification over stable isotope labelling, label-free methods are particularly susceptible to error, and there is inconclusive data regarding the accuracy and linearity of label-free techniques [16].
SELDI-ToF MS
Several label-free proteomic profiling techniques have been developed which are based upon the application of an unprocessed bio-fluid to a "chip" with a specific chemistry, i.e. a particular chromatographic surface. Unbound proteins are washed off and bound proteins are analysed in a simple time-of-flight mass spectrometer. In this method, proteins are not identified, but signature peak patterns are derived and compared between test groups to generate a proteomic profile. The primary example of this type of method is SELDI-ToF, whereby samples such as serum, plasma and urine can be applied to chromatographic chips designed to enrich for different populations of protein/peptide analyte. Consequently, the main advantages of this technique are its ease of use and apparent throughput, a possible reason why this method is so heavily used in clinical proteomics, particularly in comparison to other MS-based proteomics approaches [63].
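The profile-comparison step can be sketched as follows: aligned spectra are reduced to intensities at shared m/z peaks, and peaks whose mean intensity differs substantially between groups are flagged as candidate signature peaks. The profiles and fold-change cutoff are hypothetical, and real SELDI pipelines add normalisation, peak alignment, and statistical testing.

```python
# Sketch of profile-based comparison in the spirit of SELDI: spectra are
# reduced to intensities at shared m/z peaks, and each peak's mean intensity
# is compared between case and control groups to flag discriminating peaks.
# Profiles and the fold-change threshold are hypothetical.
import statistics

peaks = [3450, 4780, 8920]  # m/z values shared across aligned spectra
cases    = [{3450: 9.1, 4780: 2.0, 8920: 5.5}, {3450: 8.7, 4780: 2.4, 8920: 5.1}]
controls = [{3450: 3.0, 4780: 2.1, 8920: 5.3}, {3450: 3.4, 4780: 1.9, 8920: 5.6}]

def discriminating_peaks(cases, controls, peaks, min_fold=2.0):
    """Flag peaks whose mean intensity differs >= min_fold between groups."""
    flagged = []
    for mz in peaks:
        a = statistics.mean(s[mz] for s in cases)
        b = statistics.mean(s[mz] for s in controls)
        if max(a, b) / min(a, b) >= min_fold:
            flagged.append((mz, a / b))
    return flagged

print(discriminating_peaks(cases, controls, peaks))  # only m/z 3450 stands out
```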
Although there is evidence that label-free approaches have an enhanced dynamic range for detection compared with stable isotope labelling, these techniques have been shown to be the least accurate for quantification purposes and are extremely sensitive to experimental variation. Indeed, controversy surrounds the long-term viability of SELDI as a platform for wide-spread, large-scale clinical use, as concern remains regarding the semiquantitative nature of the method and its reproducibility [15]. A classic example is an early study by Petricoin et al. [24], in which a biomarker signature for ovarian cancer determined by SELDI was subsequently found to be not reproducible, with the differences proposed to be due to variables introduced during sample processing [64,65]. In addition, the reproducibility and inter-lab variability of SELDI in detecting a three-peak signature identified for prostate cancer was tested by six independent laboratories, and the inter-lab coefficients of variation of the normalised peak intensities were found to be between 15% and 36% [66].
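The reproducibility figures quoted above are inter-laboratory coefficients of variation; a short sketch of the calculation, with hypothetical normalised peak intensities from six labs:

```python
# The inter-laboratory reproducibility figures quoted above are coefficients
# of variation (CV = standard deviation / mean, as a percentage) of normalised
# peak intensities. Values below are hypothetical measurements from six labs.
import statistics

def cv_percent(values):
    return 100 * statistics.stdev(values) / statistics.mean(values)

peak_intensity_by_lab = [0.82, 0.95, 1.10, 0.70, 1.25, 0.88]  # one SELDI peak
print(f"inter-lab CV: {cv_percent(peak_intensity_by_lab):.1f}%")  # ~20.9%
```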
Nonetheless, SELDI has been used in several disease areas, for example, to identify diagnostic markers of tuberculosis [67], severe acute respiratory syndrome (SARS) [68,69] and intra-amniotic infection [70,71]. SELDI has also been used to identify biomarkers of neurologic disorders, such as Alzheimer's [72,73]. However, this methodology has been utilised most heavily in oncology, where signatures have been described as diagnostic indicators of a range of cancers. For example, in several studies of patients with hepatocellular carcinoma (HCC), SELDI has been used to identify patients with HCC [74] and to distinguish between patients with HCC and hepatitis C virus [75][76][77]. The most recent study, by Zinkin et al. [77], found that SELDI-ToF was more accurate than traditional biomarkers at detecting small tumours and, although the authors recognised that the sample set was relatively small, this highlights the potential of this technique in clinical applications such as diagnosis of HCC.
In a recent study by Taguchi et al. [78], SELDI was used on samples from patients with small cell lung cancer (SCLC) and an 8-peak feature map was generated that was able to predict good or poor prognosis groups in response to treatment with epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKIs). In addition, the results of this study were reproducible between two independent laboratories. In another study involving SCLC, biomarker profiles were identified that could distinguish SCLC from healthy controls, SCLC from pneumonia and SCLC from non-small cell lung cancer (NSCLC) [79]. However, a note of caution must be introduced, as the sample sets were relatively small and the results require further validation with additional patient samples. In addition to lung cancer, SELDI has been reported in the identification of biomarker signatures able to distinguish between prostate cancer, benign prostate hyperplasia and healthy men [80]. SELDI has also been reported in the detection of colorectal cancer signatures [81] and early stage ovarian cancer [82], and to distinguish patients with Transitional Cell Carcinoma (TCC) of the bladder [83,84]. Furthermore, in several cancer types SELDI has been applied to the derivation of prognostic signatures, for example in the prediction of relapse in breast cancer patients [85].
Several studies have described the use of SELDI in the identification of peaks indicative of breast cancer from nipple aspirate fluid (NAF) [9][10][11][12]. NAF is an attractive source of proteomic information in breast cancer, primarily due to its proximity to the primary tumour and the relative ease of sample collection compared to biopsy. However, caution must be employed when analysing these data, as these studies show minimal, if any, overlap in the peaks/proteins identified as associated with breast cancer. This highlights a major challenge for the multi-centre use of SELDI in routine clinical use, as variables such as sample handling and processing, loading onto the target, washing, matrix type and the method of data acquisition and processing can all significantly affect the final data output. In addition, SELDI has thus far been unable to identify conventional tumour markers, such as α-fetoprotein [63].
Although screening is rapid, the fact that potential biomarkers are not identified is likely to produce a bottleneck at the biomarker validation step. Theoretically, the biomarker(s) does not have to be identified in order to provide diagnostic/prognostic information. However, as questions remain regarding the reliability of this method, the logical step in moving forward and validating SELDI as a useful tool for clinical proteomics would be to identify the protein(s) responsible for the characteristic m/z peaks in order to develop more robust methods for high-throughput use, such as ELISA assays.
Other proteomic profiling methods have been developed based upon the principles of SELDI, such as ClinProt (Bruker Daltonics, Billerica, MA), which is bead rather than chip-based and both bound and eluted proteins can be identified [86].
AQUA (absolute quantification of proteins)
The substantial resolving power of modern mass spectrometers can only be fully realised in the clinical arena by the use of accurate methods for absolute quantification. Unless a standardised reference sample is used, coded isotope labelling can only provide relative quantification, which can lead to difficulties when interpreting inter-study comparisons. The use of internal standards has long been a tool for absolute quantification of small molecules in isotope dilution experiments. Quantification is achieved by spiking known amounts of an isotopically labelled form of a known analyte into the sample prior to MS, and the relative levels of labelled and endogenous forms can be calculated. It must be noted that in this case the identity of the analyte is known prior to analysis.
Variations on the internal standard method have been developed for use in proteomics, including AQUA (absolute quantification of proteins) [87], PC-IDMS (protein cleavage-isotope dilution mass spectrometry) [88], SISCAPA (stable isotope standards and capture by anti-peptide antibodies) [89] and VICAT (visible isotope-coded affinity tags) [90]. These methods are able to measure absolute protein amounts and the levels of post-translationally modified forms, ultimately essential for the validation of any novel protein biomarker. A critical difference between the isotope dilution approach and AQUA is that while isotope dilution experiments conducted on small molecules involve direct measurement of the analyte, quantification of proteins by AQUA is carried out at the peptide level [91].
The specificity of the spiked standard may also lead to inaccurate quantification if it has the same m/z value as other peptides in the sample. However, combining AQUA with multiple reaction monitoring (MRM) [91], a highly sensitive method routinely used to measure drug metabolites, hormones, protein degradation products and pesticides with high precision and known, reproducible LC retention times, can reduce these effects. MRM involves two stages of mass selection: in the first instance, a parent ion is selected and isolated at a particular m/z ratio. The parent ion is then fragmented and a second selection step is used, whereby a specific product ion is accumulated and monitored, making this a highly specific and sensitive quantitative technique when combined with the appropriate isotope-labelled protein standards [92] (Fig. 2).
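The arithmetic behind AQUA/MRM absolute quantification is simple once both transitions have been integrated: the endogenous amount equals the spiked standard amount scaled by the ratio of the two peak areas. A minimal sketch with hypothetical peak areas and spike amount:

```python
# Sketch of absolute quantification by stable isotope dilution with MRM:
# a known amount of isotopically labelled standard peptide is spiked into the
# digest, both MRM transitions are monitored in parallel, and the endogenous
# amount follows from the ratio of the two peak areas. Values are hypothetical.
def absolute_amount(endogenous_area, standard_area, spiked_fmol):
    """Endogenous amount = spiked amount x (endogenous / standard signal)."""
    return spiked_fmol * (endogenous_area / standard_area)

# e.g. transition areas integrated from co-eluting light/heavy peptide peaks
endogenous = absolute_amount(endogenous_area=3.6e5,
                             standard_area=1.2e5,
                             spiked_fmol=50.0)
print(f"{endogenous:.0f} fmol of target peptide in the sample")  # 150 fmol
```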
Because MRM focuses on a handful of proteins of interest rather than a global proteomics approach, this technique is attractive during the validation and assay development phases of biomarker discovery. In the clinical proteomics setting, MRM has been successfully used to isolate and quantitate tryptic peptides in plasma which are indicative of disease, including C-reactive protein [26], apolipoprotein A-1 [25] and prostate-specific antigen [93]. Traditional diagnostics based upon one or several protein biomarkers involve the use of antibodies, typically requiring the development of an ELISA method. Antibody microarrays have been shown to have sensitivities ranging from 1 to 1000 pg/mL for cytokines in plasma [94]; however, due to the idiosyncratic nature of antibody generation, the development of reliable antibodies for screening can be problematic at best. Because MRM can detect peptides at low ng/mL levels [13,95,96] and is applicable to all peptides, this method may provide the most promise for biomarker validation and screening.
Other label-free methods for biomarker identification are largely untested in the clinical proteomics arena, including spectral feature analysis, where the peptide sequence is not identified and quantification is carried out by comparison of spectral features from separate LC/MS runs [23,[97][98][99]. However, this method generates high error rates [100,101]; therefore, it is generally accepted that further studies are required to verify any changes in abundance and to determine the identity of these spectral features [21].
Future challenges and directions
Recent advances in LC-MS/MS-based techniques for clinical proteomic biomarker discovery and validation have offered much hope for superior patient care, particularly for cancer diagnosis and treatment where the potential gains for individualised therapy are huge. However, the complexity, variation and dynamic range of proteins present within bio-fluids (such as plasma) are major obstacles to the ability of these methods to accurately quantify changing protein levels. In addition to problems presented by the analyte itself, many other factors such as specimen collection, handling and processing (fasting samples, freeze-thaw effects and life style variations, for example), pre-fractionation methodology, instrumentation set-up, database mining, statistical analysis and data storage will all lead to increased costs and decreased throughput and thus affect the ultimate success of LC-MS/MS-based biomarker discovery [102].

Fig. 2. Schematic overview of multiple reaction monitoring (MRM) for biomarker quantitation (adapted from [103]). (a) Specific peptide detection by MRM. In this example, peptides from a tryptic digest enter the first quadrupole (Q1) and a diagnostic peptide (m/z 400) eluting at a specific time during liquid chromatography (LC) is isolated and enters the collision cell (Q2). Collision-induced dissociation (CID) fragments this peptide and a specific product ion (m/z 390), if generated, is selected to enter the third quadrupole (Q3), where it then reaches the detector. This filtering dramatically reduces the background, resulting in a significantly increased signal-to-noise ratio and greater sensitivity. (b) Absolute quantification by MRM. Inclusion of an isotopically labelled standard peptide allows MRM transitions to be monitored for both the test and standard peptides. Because the isotope label does not alter chromatographic behaviour, the test and standard peptides co-elute during LC while the mass difference imparted by the isotopomer allows their MRM transitions to be monitored in parallel. As the concentration of the standard is known, the ratio of the total signals generated by each peptide can be calculated and used for absolute quantitation.
Traditional drug development within the pharmaceutical industry follows a process from discovery through to pre-clinical development and clinical testing, typically involving large scale screening of multiple analytes. In contrast, basic research is dominated by studies of individual molecules. Therefore, in order for clinical proteomics by MS-based methods to be successful it is essential that the gap between these two disciplines is bridged [1]. One of the major challenges is the translation of pre-clinical animal studies into human subjects. MS offers the exciting prospect of bypassing this problem by moving directly into human bio-fluid samples for discovery-based medicine. In addition, there is potential to reduce the number of patients required for clinical testing by carrying out well designed pre-clinical studies in well characterised animal and/or cell line models [3].
It is essential that novel biomarker profiles are carefully validated, and it is possible that routine application may be carried out by immunoassays, which can also present huge challenges. These include issues surrounding antibody reliability and sensitivity and the capacity for multiplexing, all of which impact upon cost effectiveness. In the case of clinical proteomics it is likely that multiple novel candidates will be identified, and thus multiple reaction monitoring/stable isotope dilution (MRM/SID-MS) by triple quadrupole MS may be more feasible, allowing for greater throughput, accuracy and sensitivity than antibody development [13,30]. Perhaps the most important factor for the realisation of personalised medicine provided by novel biomarkers identified by MS is the need for careful and rigorous validation of these markers through rationally designed, large-scale clinical studies [1]. These can only be successfully realised by close working relationships between discovery labs and clinical centres. This highlights the major challenge faced by translational medicine, in particular clinical proteomics, but with careful planning it is hoped that the potential provided by continual developments in LC-MS/MS methods for relative and absolute protein quantification will lead to advances in the way diseases such as cancer are diagnosed and managed.
Search for Borrelia sp. in Ticks Collected from Potential Reservoirs in an Urban Forest Reserve in the State of Mato Grosso do Sul, Brazil: A Short Report
A total of 128 ticks of the genus Amblyomma were recovered from 5 marsupials (Didelphis albiventris) – with 4 recaptures – and 17 rodents (16 Bolomys lasiurus and 1 Rattus norvegicus) captured in an urban forest reserve in Campo Grande, State of Mato Grosso do Sul, Brazil. Of the ticks collected, 95 (78.9%) were in larval form and 22 (21.1%) were nymphs; the only adult (0.8%) was identified as A. cajennense. Viewed under dark-field microscopy in the fourth month after seeding, 9 cultures prepared from spleens and livers of the rodents, blood of the marsupials, and macerates of Amblyomma sp. nymphs revealed spiral-shaped, spirochete-like structures resembling those of Borrelia sp. Some of them showed little motility, while others were non-motile. No such structures could be found either in positive Giemsa-stained culture smears or under electron microscopy. No PCR amplification of DNA from those cultures could be obtained by employing Leptospira sp., B. burgdorferi, and Borrelia sp. primers. These aspects suggest that the spirochete-like structures found in this study do not fit into the genera Borrelia or Leptospira, requiring instead to be isolated for proper identification.
cally hard to distinguish from other spirochetes.
More recently, ticks of the species A. cajennense, A. aureolatum, Ixodes sp., I. didelphidis, and I. loricatus have been identified (Barros-Battesti 1998, Costa 1998, Abel et al. 2000) in rodents and marsupials contaminated with spirochetes morphologically similar to those of the genus Borrelia.
Laboratorial techniques for diagnosis have been developed and diagnostic criteria established, thus allowing improved knowledge of this borreliosis in Brazil, with the identification of patients showing clinical manifestations that are similar, if not identical, to those of Lyme disease and presenting serological cross-reactivity against Borrelia sensu lato antigens (Yoshinari et al. 1992a,b, 1993a,b, 1997, Joppert 1995, Ishikawa 1996, Costa et al. 1996, Pirana et al. 1996).
The name Lyme disease-like illness, designating an emerging zoonosis found in Brazil, was proposed by Yoshinari et al. (1999, 2000), based on clinical similarities with the classical Lyme disease described by Dr Allen C Steere in 1977, although with some laboratorial differences. Brazilian patients' sera react differently to B. burgdorferi antigens, and, despite many attempts, no etiological agent has been cultured so far, suggesting that this is not a single disease, but rather a complex clinical syndrome.
The present study was carried out in order to verify the possible presence of Borrelia sensu lato in ticks found as ectoparasites of marsupials and rodents in an urban forest reserve of Campo Grande, the capital city of the State of Mato Grosso do Sul, in southwestern Brazil.
MATERIALS AND METHODS
Study area - The 48.5-hectare area selected, known as Reserva Biológica, is located on the campus of Universidade Federal de Mato Grosso do Sul at Campo Grande. This reserve borders residential neighborhoods and has a lake that nearly divides its extension into two sectors with distinct vegetation: one has high-growing grass and Cerrado trees, while the other has features of palm swamp and gallery forest.
This location was chosen because two patients out of a total of 15 in Campo Grande with clinical features and laboratorial diagnosis compatible with Lyme disease-like illness had frequented it (Costa 1998).
Capture of animals - Capture was performed for three weeks in a row, for five consecutive days a week, in July 1997. Weather conditions were recorded during this period.
The live traps employed were baited with bacon slices for marsupials and corn grains for rodents. Traps were inspected daily, and the animals captured were anesthetized with sulfuric ether in plastic bags for further processing.
Processing of mammals - Once anesthetized, mammals were identified, weighed, and measured. They were combed over a white tray to recover their ectoparasites. Ectoparasites adhering to the skin were removed with the help of tweezers.
Blood was drawn from the caudal vein of the marsupials. Part of the sample was seeded in BSK culture medium and another portion was used for preparing Giemsa-stained smears. Serum separated from the remaining blood was frozen at -20°C for further studies. The initials MSM plus a sequential number were shaved on the marsupials, which were then released into their original environment, thus becoming available for recapturing.
The rodents captured were initially kept under observation for the presence of ticks during one week, at the end of which they were anesthetized and submitted to cardiac puncture. A few drops of blood were inoculated into BSK medium, and some more were used for preparing Giemsa-stained smears for optical microscopy. Following sacrifice, liver and spleen samples were seeded in BSK culture medium. The initials MSR plus a sequential number were shaved on the animals, which were then weighed, measured, and frozen at -20°C until taxidermy time. Following taxidermy, they were deposited at the mastozoology collection (Dr Michel Miretzki) of Museu de História Natural Capão da Imbuia, Departmento de Zoologia, of the municipality of Curitiba, State of Paraná.
Upon being washed in ethanol, ticks were macerated and seeded in BSK culture medium under sterile conditions and laminar flow in the Laboratório de Investigação em Reumatologia (LIM-17), Faculdade de Medicina, Universidade de São Paulo, where the cultures were followed up for spirochetes' growth.
Analysis of culture media - Cultures were kept in anaerobic conditions in an incubator at 33°C. Aliquots were periodically viewed under a dark-field microscope with immersion lens. Spirochete-like positive cultures were reinoculated in BSK.
Analysis of spirochetes - Spirochetes were searched for in different ways: by direct examination of cultures under a dark-field microscope with immersion lens; in Giemsa-stained blood smears; and by electron microscopy (Instituto Adolfo Lutz and Departamento de Anatomia Patológica, Faculdade de Medicina, Universidade de São Paulo).
Cultures exhibiting spiral-shaped structures were submitted to PCR in order to allow differentiation from structures of the genus Leptospira, and to detect the presence of Borrelia sp. DNA.
DNA extraction and amplification - DNA was extracted according to the protocol described by Sambrook et al. (1989), with minor modifications.
For the analysis of the amplification products, electrophoresis in 2.5% agarose gel and ethidium bromide staining were employed (Sambrook et al. 1989).
In order to differentiate the spirochetes found in the mammals and ticks from Leptospira sp., a positive control was employed, consisting of the species L. interrogans, serogroup Tarassovi (Laboratório de Zoonoses Bacterianas, Departamento de Medicina Veterinária Preventiva e Saúde Animal, Faculdade de Medicina Veterinária e Zootecnia, Universidade de São Paulo, FMUSP).
For the negative control, a microtube was used containing ultrafiltered water, alongside a 100-bp molecular weight standard (Pharmacia Biotech, Uppsala, Sweden).
The DNA of Borrelia sensu lato was detected through PCR with flagellin and 16S rRNA primers, as described by Barbour et al. (1996), a procedure capable of identifying microorganisms of the genus Borrelia.
The positive control consisted of DNA extracted from three Borrelia species (B. burgdorferi, B. afzelii, and B. garinii) kept under culture at Laboratório de Investigação em Reumatologia, FMUSP. A standard of known molecular weight was used for reference.
RESULTS
Weather conditions - The lowest and highest temperatures recorded in the study area during the period of capture were 15.2°C and 30.9°C, with a mean of 21.04°C.
Identification of the mammals captured - Five marsupials of the species Didelphis albiventris were collected (3 males and 2 females), with 4 recaptures. Of the 17 rodents collected, 16 were identified as Bolomys lasiurus and 1 as Rattus norvegicus.
Identification of ectoparasites - A hundred and eighteen ticks were recovered from the marsupials, 117 of which belonged to the genus Amblyomma. Of those, 95 (81.2%) were in the larval stage and 22 (18.8%) were nymphs; the only adult found (0.8%) was identified as A. cajennense.
Analysis of culture media under dark-field microscopy - Spiral-shaped structures, most of them with little motility, were found in nine cultures, although only in the fourth month after inoculation. In addition, the structures were sometimes seen tangled, in the manner of Borrelia sp. cultures (Figure).
Of the positive cultures, 4 were prepared from rodent spleens, 1 from rodent liver, 1 from marsupial blood, and 3 from macerates of Amblyomma sp. nymphs.
No structures suggestive of spirochetes, however, were observed in Giemsa-stained aliquots obtained from culture media, or in at least 2 analyses performed under electron microscopy.
DISCUSSION
The weather conditions at the time of capture must be taken into account, as the development stages of ticks correlate with the seasons (Flechtmann 1973, Barros-Battesti 1998). Barros-Battesti (1998), in an extensive study carried out in the municipality of Itapevi, State of São Paulo, found the highest frequency of rodents, along with the highest numbers of immature ticks, to be related to the dry and cold season, which lasts from April to September.
In the present study, capture was performed in July, when rainfall was entirely absent, with a mean temperature of 21.04°C, i.e. a typical dry and cold winter month in terms of the region's pattern. A total of 5 marsupials (D. albiventris) and 17 rodents (16 of the species B. lasiurus and 1 of R. norvegicus) were collected during that period, providing 227 immature forms (larvae and nymphs) of Amblyomma sp., along with one adult form of the same genus. On the other hand, in the study carried out by Barros-Battesti in Itapevi, adult ticks were predominant in the rainy and warm season (October to March).
All tick nymphs and larvae identified in the present work belonged to the genus Amblyomma, with one adult specimen identified as A. cajennense; the remaining ectoparasites captured did not belong to the family Ixodidae.
A. cajennense, popularly known as "star-tick" (Flechtmann 1975), is widely distributed in the Americas, particularly in warmer regions. It can be found on wild and domestic birds alike, as well as on mammals and, not rarely, on ophidians.
Humans are often bitten by this tick in any of its development stages, but mainly as larvae and nymphs. Found in large numbers on fields and pastureland in dry and cold seasons, it is regarded as the vector of spotted fever group rickettsiosis and equine babesiosis.
The identification of A. americanum as the vector, in the United States, of a distinct, uncultivable species of Borrelia known as B. lonestari (Barbour et al. 1996), recognized as the etiological agent of a Lyme disease-like illness causing cutaneous lesions without systemic manifestations, has led to the inclusion of the genus Amblyomma in the list of known vectors of this disease to humans.
The investigations performed by Barros-Battesti et al. (1998, 2000) and Yoshinari et al. (1997) in the municipality of Cotia, State of São Paulo, led to the recovery, from rodents and marsupials, of tick species other than A. cajennense, such as I. didelphidis, I. loricatus, A. aureolatum and Rhipicephalus sanguineus, all of which, except for the last one, were often found contaminated with spirochete-like structures similar to the immobile structures found in Campo Grande.
Perhaps because the study in Campo Grande has spanned only a short period of the year and has covered a limited stretch of land, no occurrence could be detected of tick species other than those four mentioned above, a finding that, nonetheless, corroborates results of previous investigations showing an absence of records of Ixodes sp. ticks in this region of the country. This constitutes a very relevant epidemiological finding, since the transmission of the disease in Brazil might be linked to the presence of Amblyomma sp. vectors.

Figure. Spirochete-like structures seen under dark-field microscopy in BSK culture of spleen of a rodent (Bolomys lasiurus) captured in the Biological Reserve of Universidade Federal de Mato Grosso do Sul (Campo Grande, State of Mato Grosso do Sul, Brazil).
Additionally, the possible involvement of Amblyomma sp. in the transmission of this Lyme disease-like illness in Brazil is supported by reports of human cases related to casual bites of this arthropod genus in the State of Rio de Janeiro (Yoshinari et al. 1999, 2000).
It is possible to suggest that ticks of the I. ricinus complex are the vectors responsible for the transmission of microorganisms that cause classical Lyme disease in the United States and Europe, while others, such as Amblyomma sp., would be implicated in the appearance of a similar disease. Curiously, cases of Lyme disease have been reported in Australia (Russell et al. 1994), although neither vectors nor microorganisms have been identified yet. Abel et al. (2000) found immobile Borrelia-like structures in cultures from ticks and small mammals collected in Cotia, and some of these isolates were recognized through indirect immunofluorescence using sera of patients with Lyme disease-like illness, suggesting that these structures could be implicated in the etiology of this disease in Brazil.
On the other hand, Schonberg et al. (1992) described immobile spiral microorganisms obtained from skin and cerebrospinal fluid cultures from patients with Lyme borreliosis. Later (1994), however, the same authors discovered by electron microscopy that such structures were in fact flagella of microorganisms contaminating the culture medium.
In the present study, the finding of spiral-shaped structures in cultures from blood of marsupials and organs of rodents, as well as from macerates of the ticks, is in agreement with results obtained by Abel et al. (2000). It is important, however, to point out that similar structures have appeared in blood and cerebrospinal fluid cultures from patients with Lyme disease-like illness (Yoshinari et al. 1999).
The negative results of PCR amplification (employing Leptospira sp. or Borrelia sp. primers) of DNA from cultures containing spirochete-like structures from ticks and mammals, and also from blood samples of clinical patients (Costa 1998, Barros 2000), strongly suggest that the causative agent of the disease in Brazil should be quite different from any already known borrelias.
The first cases of the disease in Brazil were discovered in Cotia (Yoshinari et al. 1992a,b), and clinical features, especially the presence of erythema migrans, have been the leading criteria for diagnosis. Epidemiological data such as tick-bite history, contact with domestic animals, and attendance at risk areas are helpful information. Positive serology for B. burgdorferi has been obligatory, but because the etiological agent seems to be a novel one, serological assays have displayed low sensitivity. Often, ELISA assays reveal low antibody titers and the Western blotting pattern of reactivity differs from that observed in sera of North American patients. Finally, spirochete-like microorganisms, although seen in peripheral blood and cerebrospinal fluid of patients, are uncultivable when added into BSK medium (Yoshinari et al. 1997, 1999, 2000).
Because of these microbiological and serological differences and the impossibility of amplifying DNA with primers for B. burgdorferi flagellin, in addition to the fact that the spirochete-like microorganisms do not belong to the genus Leptospira, the existence of a new clinical syndrome called Lyme disease-like illness can be postulated, whose causative agent is either a very different borrelia or a microorganism of another genus, transmitted by ticks other than those of the I. ricinus complex.
Further studies shall lead to the isolation and characterization of this etiological agent, thus permitting a more accurate diagnosis of this emerging zoonosis in Brazil, which has been met with increased interest in many medical specialities, as this challenging infectious syndrome has been affecting more and more patients over time.
CD55 Facilitates Immune Evasion by Borrelia crocidurae, an Agent of Relapsing Fever
ABSTRACT Relapsing fever, caused by diverse Borrelia spirochetes, is prevalent in many parts of the world and causes significant morbidity and mortality. To investigate the pathoetiology of relapsing fever, we performed a high-throughput screen of Borrelia-binding host factors using a library of human extracellular and secretory proteins and identified CD55 as a novel host binding partner of Borrelia crocidurae and Borrelia persica, two agents of relapsing fever in Africa and Eurasia. CD55 is present on the surface of erythrocytes, carries the Cromer blood group antigens, and protects cells from complement-mediated lysis. Using flow cytometry, we confirmed that both human and murine CD55 bound to B. crocidurae and B. persica. Given the expression of CD55 on erythrocytes, we investigated the role of CD55 in pathological B. crocidurae-induced erythrocyte aggregation (rosettes), which enables spirochete immune evasion. We showed that rosette formation was partially dependent on host cell CD55 expression. Pharmacologically, soluble recombinant CD55 inhibited erythrocyte rosette formation. Finally, CD55-deficient mice infected with B. crocidurae had a lower pathogen load and elevated proinflammatory cytokine and complement factor C5a levels. In summary, our results indicate that CD55 is a host factor that is manipulated by the causative agents of relapsing fever for immune evasion.
Borrelia species cause at least two general types of disease in humans: relapsing fever and Lyme borreliosis (1)(2)(3). Relapsing fever-associated Borrelia species can cause widespread infection in humans (4)(5)(6). Louse-borne Borrelia recurrentis was the primary etiologic agent of epidemic relapsing fever in Asia and Europe during the last century (7,8). The endemic forms of relapsing fever, transmitted by ticks, have been reported in different geographical regions, including North America, Europe, Africa, South America, and Asia (8)(9)(10)(11)(12). Relapsing fever is one of the most prevalent bacterial infections in Africa and a significant cause of morbidity in rural areas throughout much of West Africa (13)(14)(15)(16).
Tick-borne relapsing fever is characterized by recurrent episodes of systemic symptoms, including headache, myalgias, and bleeding (17). Typically, the first febrile episode lasts for several days, and symptoms recur after afebrile periods of a few days (18)(19)(20)(21). The relapsing nature of infection depends on the ability of Borrelia spirochetes to undergo antigenic variation (19). Borrelia miyamotoi is a potential emerging etiologic agent of tick-borne relapsing fever in North America, while Borrelia crocidurae is prevalent in North and West Africa and is emerging in Europe (22,23). B. crocidurae, which was first isolated from the blood of a musk shrew in Senegal and later identified as the cause of endemic relapsing fever in Western Africa, is a major cause of morbidity and neurologic disease (24,25). B. crocidurae has also been shown to associate with erythrocytes and generate cell aggregates that disrupt the microcirculation (24,26,27). These aggregates create microthrombi in arterioles and subsequently cause myocardial damage (24,28). In addition, sequestration within aggregates of erythrocytes may allow B. crocidurae spirochetes to avoid damage from the shear pressure of the blood flow and from contact with immune cells (29).
There is a dearth of information on the host proteins that interact with relapsing fever Borrelia. This information is critical for the development of more effective diagnostics and therapeutics. Molecular and biochemical approaches to identify potential interactions between host immune proteins and spirochete ligands generally require speculation based on putative functionality or bias in the selection of potential immune receptors. We have previously demonstrated that large-scale screening of host-Borrelia interactions with BASEHIT (Bacterial Selection to Elucidate Host-microbe Interactions in high Throughput) effectively overcomes these challenges to identify host factors important in controlling Borrelia pathogenesis in vivo (30). Through targeted screening of relapsing fever-causing spirochetes, we determined that CD55, a complement regulator, interacts with B. crocidurae, allowing us to validate its role in erythrocyte aggregation and pathogenesis.
RESULTS
Identification of human host factors that interact with spirochetes that cause relapsing fever. Using a recently developed combinatorial screening technology termed BASEHIT (30), we identified specific human proteins that interact with relapsing fever-causing Borrelia species and may therefore be involved in the pathogenesis of or immunity against these microbes. In this approach, surface-biotinylated Borrelia spirochetes are panned against a genetically barcoded Saccharomyces cerevisiae yeast display library of >1,000 human extracellular and secreted proteins. Yeast clones expressing Borrelia-binding proteins are isolated by magnetic separation using streptavidin microbeads and identified by next-generation sequencing of their specific barcode sequences (Fig. 1A).
Analysis of the results revealed that human CD55 bound to B. crocidurae and Borrelia persica, two spirochetes that cause relapsing fever, and exceeded a stringent significance threshold (Fig. 1B and C; Table S1 in the supplemental material). CD55 is present on the erythrocyte surface and harbors the Cromer blood group antigens (31,32). CD55 also functions as an endogenous complement inhibitor (33). As blood-borne pathogens, it is imperative for relapsing fever spirochetes to find ways to outwit complement activity (21). Thus, this interaction was of significant interest because it suggested a potential linkage of these Borrelia species with erythrocytes and mechanisms for escape from complement-mediated destruction and other components of the immune system.
CD55 binds to B. crocidurae and B. persica. To determine whether CD55 directly binds to B. crocidurae, we performed flow cytometry-based binding assays with healthy spirochetes grown in vitro. Since human CD55 shares 47.21% sequence identity with its mouse orthologue (Fig. S2), we examined whether murine CD55 also recognizes Borrelia spirochetes. As shown by the results in Fig. 2A, both murine and human CD55 bound to a majority of B. crocidurae spirochetes, while the secondary antibody alone showed weak reactivity. As a positive control, we used recombinant human PGLYRP1 (peptidoglycan recognition protein 1), an antimicrobial protein that has been shown to interact with Borrelia species (Fig. 2) (30). As a negative control, we used a poly-His-tagged tick protein, IsPDIA3 (Ixodes scapularis protein disulfide isomerase A3) (34), which was also expressed and purified from mammalian cells in the same manner as CD55 and PGLYRP1. To further visualize the interaction between CD55 and B. crocidurae, we performed an immunofluorescence assay with purified CD55-His8. This assay further confirmed that CD55 binds to B. crocidurae (Fig. S2A). We also performed flow cytometry-based binding assays with B. persica, B. miyamotoi, or B. duttonii spirochetes and CD55 (Fig. 2B; Fig. S2B and C). As suggested by the BASEHIT screen, CD55 only recognized B. persica among these species. Similar to the results for B. crocidurae, human or mouse CD55 also showed binding to B. persica. Collectively, these results confirmed the interaction of B. crocidurae and B. persica with CD55.

FIG 1 Screening the BASEHIT human exoproteome library to identify host proteins that interact with Borrelia strains that cause relapsing fever. (A) Schematic of yeast display screen: the BASEHIT yeast library, displaying 1,031 human proteins on the yeast surface, was mixed with surface-biotinylated Borrelia isolates. Each human protein is encoded by a unique bar-coded plasmid. Yeast cells binding to Borrelia spirochetes were isolated by magnetic separation using streptavidin microbeads, and next-generation sequencing was used to identify human proteins. (B) Host interactions with B. crocidurae or B. persica grown at 33°C. Each symbol represents one human protein. CD55 is represented by a larger, solid square or circle. The list of proteins that bound to B. crocidurae and/or B. persica is shown in Table S1. The score for each protein is defined as the overall enrichment for that corresponding gene (relative to the unselected library) multiplied by the percentage of barcodes associated with the enriched gene (defined as a log fold change [logFC] > 0). The dotted line reflects a BASEHIT score of 1, and all genes that had higher scores are listed in Table S1. (C) CD55 interactions with different Borrelia species. Samples from 29 Borrelia isolates were screened against the host BASEHIT protein library. The y axis represents the CD55 score as the overall enrichment for CD55 of each isolate (relative to the unselected library) multiplied by the percentage of barcodes associated with the enriched CD55 (defined as logFC > 0).
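A minimal sketch of the scoring rule quoted in the Fig. 1 legend: a protein's BASEHIT score is its overall enrichment (selected versus unselected library) multiplied by the fraction of its barcodes with logFC > 0. The barcode counts, the +1 pseudocount, and the per-barcode logFC definition are assumptions made here for illustration; the published pipeline may differ in detail.

```python
# Sketch of the BASEHIT score as defined in the Fig. 1 legend: overall
# enrichment of a displayed protein multiplied by the fraction of its
# barcodes that are enriched (logFC > 0). Counts are hypothetical.
import math

def basehit_score(selected_counts, unselected_counts):
    """selected/unselected are per-barcode read counts for one protein."""
    overall = sum(selected_counts) / sum(unselected_counts)  # overall enrichment
    logfc = [math.log2((s + 1) / (u + 1))                    # per-barcode logFC
             for s, u in zip(selected_counts, unselected_counts)]
    frac_enriched = sum(fc > 0 for fc in logfc) / len(logfc)
    return overall * frac_enriched

# e.g. four barcodes tagging the CD55 display plasmid
print(basehit_score(selected_counts=[120, 85, 60, 4],
                    unselected_counts=[10, 12, 9, 11]))  # well above 1
```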
CD55 binds to a protein ligand on the surface of B. crocidurae. Since CD55 is conserved in humans and mice and both orthologues have complement inhibitory activity (35), we tested whether murine CD55 could competitively inhibit the binding of Fc-tagged human CD55 to B. crocidurae. As shown by the results in Fig. 3A and B, the binding between Fc-tagged human CD55 and B. crocidurae was reduced with the addition of murine CD55-His8, suggesting that both proteins bind to the same ligand on the spirochete surface (Fig. 3A and B). To further investigate the nature of the B. crocidurae ligand that interacts with CD55, we performed flow cytometry-based binding assays with protease-treated B. crocidurae spirochetes. Spirochetes treated with proteinase K showed reduced ability to bind to mouse CD55 (Fig. 3C and D). Overall, these results indicate that both human and mouse CD55 bind the same ligand on the B. crocidurae surface.
CD55 interferes in complement activity against B. crocidurae. CD55 is known to inhibit complement activity (35)(36)(37)(38). To assess this role of CD55 and its effect on binding with the spirochetes, cultured B. crocidurae spirochetes were incubated with immune sera from mice previously infected with B. crocidurae in the presence or absence of recombinant CD55. The borreliacidal activity of mouse complement was first assessed by observing B. crocidurae viability using a dark-field microscope (Fig. 3E). The spirochetes were incubated with either 40% or 20% mouse serum in the presence of human CD55 (100 μg/mL), and Borrelia viability was assessed after 2 h. The borreliacidal activity of immune serum was also assessed by the BacTiter-Glo microbial cell viability assay, which is performed routinely for live Borrelia estimation. The immune sera had borreliacidal activity that was inhibited by CD55 (Fig. 3F), demonstrating that soluble CD55 can inhibit the complement-mediated killing of B. crocidurae in vitro.

FIG 2 (legend). (A) B. crocidurae cultures were grown to a density of 10⁵ CFU/mL and incubated with recombinant CD55-His8 (50 μg/mL). PGLYRP1 has previously been shown to bind spirochetes that cause Lyme borreliosis and relapsing fever (30) and was used as a positive control. B. crocidurae's binding to recombinant protein was measured by flow cytometry using a secondary Alexa Fluor 488 (AF488)-conjugated anti-His6 monoclonal antibody. Overlay histograms show protein binding to B. crocidurae spirochetes identified using the secondary antibody. Binding of recombinant tick protein IsPDIA3-His8 (50 μg/mL) to B. crocidurae was used as a negative control. Background binding of AF488-conjugated anti-His6 monoclonal antibody alone with B. crocidurae is shown by the gray-shaded region. Results from one of two representative experiments are shown. (B) B. persica cultures were grown to a density of 10⁵ CFU/mL and incubated with recombinant CD55-His8 (50 μg/mL). B. persica's binding to recombinant CD55-His8 was measured using a secondary AF488-conjugated anti-His6 monoclonal antibody.

FIG 3 (legend, panels C to F). (C) B. crocidurae cultures were grown to a density of 10⁵ CFU/mL and incubated in the presence or absence of proteinase K (0.2 mg/mL) at 37°C for 10 min. Subsequently, the proteinase K activity was quenched using a Roche cOmplete proteinase inhibitor cocktail and spirochetes were washed with PBS thrice. Borrelia spirochetes were incubated with recombinant mouse CD55-His8 (50 μg/mL). B. crocidurae's binding to mouse CD55 was measured by flow cytometry using a secondary AF488-conjugated anti-His6 monoclonal antibody. Background binding of AF488-conjugated anti-His6 monoclonal antibody alone with B. crocidurae is shown by the gray-shaded region. The results from one of five representative experiments are shown. (D) Data from five independent experiments are plotted; the y axis represents the mean fluorescence intensities of AF488 from B. crocidurae. Statistical significance was assessed using the nonparametric Mann-Whitney test (proteinase K-pretreated B. crocidurae incubated with mCD55 versus untreated B. crocidurae incubated with mCD55; P = 0.0079). (E) Complement-mediated killing of B. crocidurae in the presence or absence of recombinant human CD55. Human CD55 (100 μg/mL) was incubated with B. crocidurae for 2 h in the presence or absence of immune serum from mice that were infected 30 days previously with B. crocidurae. Viability was assessed by observing spirochete movement under dark-field microscopy. The growth inhibition of B. crocidurae was calculated relative to the growth of untreated B. crocidurae. The bars represent mean values ± SD, and P values were determined by the Student t test.

CD55 is involved in B. crocidurae-induced rosette formation. Relapsing fever spirochetes, including B. crocidurae, are known to form rosettes with erythrocytes, which contributes to the pathogenesis of relapsing fever and enables spirochetes to evade the host immune system (27). The interaction of relapsing fever-causing Borrelia spp. with erythrocytes is dependent on glycosylation on the surface of erythrocytes (39). However, a specific interaction between relapsing fever spirochetes and an antigen on the erythrocytes has not been demonstrated. CD55 is a glycosylated antigen that is abundantly present on the erythrocyte surface (40,41). We compared the erythrocyte rosettes induced by B. crocidurae using erythrocytes collected from C57BL/6 wild-type (WT) and CD55 knockout (KO) mice (38). Compared to the results using erythrocytes isolated from WT mice, B. crocidurae induced fewer rosettes in the presence of erythrocytes from CD55 KO mice (Fig. S3). Furthermore, when the erythrocyte aggregate sizes were compared, erythrocyte aggregates from CD55 KO mice were approximately 25% smaller than those formed from WT erythrocytes (Fig. 4A; Fig. S4).
To further characterize the effect of CD55 on B. crocidurae-induced rosette formation, rosettes were quantified by endpoint lysis of red blood cells (RBCs) (39). These results confirmed that B. crocidurae forms fewer rosettes with erythrocytes in the absence of CD55 (Fig. 4B). Furthermore, we examined whether B. crocidurae interacted with CD55 present on human erythrocytes. To block CD55 on human RBCs, we preincubated human erythrocytes with a CD55-blocking monoclonal antibody. As a control, we used a CD3 antibody that does not bind erythrocyte antigens. The conjugates between B. crocidurae and human erythrocytes were quantified by flow cytometry. B. crocidurae interacted with human erythrocytes and formed rosettes, while such rosettes were reduced in the presence of anti-CD55 neutralizing antibodies (Fig. 4C). Overall, these results show that B. crocidurae interacts with CD55 present on human or mouse erythrocytes.
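A sketch of the arithmetic behind the endpoint-lysis readout used above: haemoglobin absorbance serves as a proxy for erythrocyte numbers, so the rosette-associated fraction is the blank-corrected signal from the aggregate fraction over the total. The absorbance values and blank are hypothetical, and the assay kit's own calibration would be used in practice.

```python
# Sketch of rosette quantification by endpoint lysis: erythrocytes bound in
# spirochete aggregates are separated from free cells, lysed, and the
# rosette-associated fraction is inferred from haemoglobin absorbance of
# each fraction. Absorbance values and the blank are hypothetical.
def rosette_fraction(a_rosette, a_free, a_blank=0.05):
    """Percent of haemoglobin (a proxy for erythrocytes) found in rosettes."""
    rosette = a_rosette - a_blank
    total = (a_rosette - a_blank) + (a_free - a_blank)
    return 100 * rosette / total

wt = rosette_fraction(a_rosette=0.62, a_free=0.41)
ko = rosette_fraction(a_rosette=0.38, a_free=0.65)
print(f"WT: {wt:.0f}% in rosettes; CD55 KO: {ko:.0f}%")
```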
Based on these results, we hypothesized that saturating the CD55 binding ligand on B. crocidurae could affect its ability to form rosettes with erythrocytes. To assess this further, B. crocidurae spirochetes were preincubated with either human or mouse recombinant CD55 protein and added to erythrocytes from WT mice. The samples were incubated at 37°C for 20 min before microscopic examination. In the presence of B. crocidurae, RBCs formed visible rosettes, while in the absence of B. crocidurae, erythrocyte aggregates were not observed. When B. crocidurae spirochetes were spiked with recombinant CD55, smaller and fewer erythrocyte aggregates were visible (Fig. 4D; Fig. S5). Rosette formation did not decrease in the presence of an unrelated Borrelia-binding protein, PGLYRP1. Studies have demonstrated the importance of rosette formation in the pathogenesis of B. crocidurae infection (24,27,29). These results show that CD55 is a key host protein that is involved in B. crocidurae's interactions with erythrocytes and evasion of host complement.

FIG 4 (legend, panels B to D). (B) B. crocidurae interactions with erythrocytes from WT and CD55 KO mice were compared. Erythrocytes from WT and CD55 KO mice were incubated with 10⁶ B. crocidurae spirochetes in 0.2-mL PCR tubes at 37°C as described before (39). Hemoglobin from erythrocytes that interacted with B. crocidurae was quantified using the QuantiChrom hemoglobin assay kit. Data from three independent experiments performed in triplicate are presented. (C) B. crocidurae's interaction with human RBCs in the presence of anti-CD55 neutralizing antibody (R&D Systems) was measured. Human RBCs were preincubated with either anti-human CD55 neutralizing antibody or another antibody (anti-CD3 antibody) as an isotype control for 20 min. The RBCs were incubated with 10⁶ B. crocidurae spirochetes. The interactions between human RBCs and B. crocidurae spirochetes were measured by flow cytometry. Data from three independent experiments are presented. (D) B. crocidurae's interactions with RBCs in the presence of recombinant CD55 or PGLYRP1 were compared. RBCs from C57BL/6 mice were incubated with 10⁶ B. crocidurae spirochetes at 37°C in the presence or absence of recombinant human (h) or mouse (m) CD55 (100 μg/mL). Hemoglobin from RBCs that interacted with B. crocidurae was quantified.

CD55-deficient mice show reduced B. crocidurae burden. To understand the physiological significance of the CD55-B. crocidurae interaction, we compared the outcomes of B. crocidurae infection in WT and CD55 KO mice (Fig. 5A). WT and CD55 KO mice were inoculated with 1 × 10⁵ spirochetes. The B. crocidurae burden in the blood was assessed at different days postinfection (dpi) by quantitative PCR (qPCR) using a B. crocidurae flaB-specific probe. CD55 KO mice had a significantly lower spirochete burden at 2 dpi (Fig. 5B), suggesting a role for CD55 during the early phase of infection. CD55 KO mice also exhibited splenomegaly compared to the WT group at 10 dpi (Fig. 5C). Furthermore, when blood from infected WT and CD55 KO mice was microscopically examined, B. crocidurae interactions with erythrocytes were more evident in WT mice than in CD55 KO mice (Movies S1 and S2).
To further understand the immunopathogenesis of B. crocidurae infection, serum cytokine profiles were assessed in both WT and CD55 KO mice infected with B. crocidurae, using a mouse cytokine/chemokine array panel. Sera from uninfected WT and CD55 KO mice were also probed as baseline controls (Fig. 6A). Increases in the levels of interleukin-6 (IL-6) (day 2), IL-1α (day 4), tumor necrosis factor alpha (TNF-α) (day 4), CCL5 (RANTES) (day 4), and CCL3 (day 4) cytokines were observed in infected CD55 KO mice compared to their levels in WT mice (Fig. 6A to E), while the levels of other representative cytokines were not altered (Fig. S6). CCL3 is involved in both febrile and inflammatory responses (42). A key characteristic feature of relapsing fever infection is increased monocyte and neutrophil numbers. IL-1α and IL-6 are produced in blood by myeloid cells, particularly monocytes (43-45). Interestingly, IL-6 regulates neutrophil trafficking during acute inflammation (46). Our results suggest that the interactions of CD55 with B. crocidurae may also influence the host neutrophil and monocyte response.
CD55 is known to control complement activity, including through inhibition of the C5 convertase, and soluble C5a (a cleaved component of complement C5 that signals through the G protein-coupled receptor C5AR) can also induce IL-6 production. To assess whether CD55-mediated control of IL-6 was related to an increase in C5a production, we compared the C5a levels in the serum of WT and CD55 KO mice at 4 dpi. Our results indicated that CD55 KO mice had higher C5a levels at 4 dpi (Fig. 6F). These findings suggest that in the absence of CD55, mice may limit infection by increasing complement and cytokine responses.

CD55-associated pathways are linked to B. crocidurae pathogenesis. To understand the molecular pathways in CD55 KO mice associated with resistance to B. crocidurae infection, we compared the whole-blood transcriptomes of WT and CD55 KO mice that were uninfected or infected with B. crocidurae (Fig. 7A). We found that 320 genes were differentially expressed between uninfected WT and CD55 KO mice, while 906 genes were differentially expressed between B. crocidurae-infected WT and CD55 KO mice. Only 43 common genes were observed between uninfected and infected mice in the absence of CD55, indicating that 863 genes were altered by the deficiency of CD55 in response to B. crocidurae infection (Fig. 7B to D; Fig. S7). Based on Gene Ontology (GO) functional classification and KEGG pathway analyses of the 863 genes, selected immune response and cytokine signaling pathways were highly enriched in infected CD55 KO mice (Fig. 7E). Of note, the natural killer (NK) cell-mediated cytotoxicity pathway, the B and T cell receptor signaling pathways, and chemokines like CCL5 (which was also validated in the above-described cytokine analysis) (Fig. 6D) were significantly upregulated in CD55 KO mice (Fig. 7D). Transcriptomic analysis also showed that J chain expression was significantly upregulated (fold change [FC] of 7.7) (P < 0.0001) in CD55 KO mice that were infected with B. crocidurae. J chain is important for the secretion and polymer formation of IgM and IgA (47). To determine the effect of CD55 deletion on the B. crocidurae-specific IgM response, we measured antibody responses to spirochete antigens via sandwich enzyme-linked immunosorbent assay (ELISA). There was a modest increase in B. crocidurae-specific IgM in the sera obtained from CD55 KO mice compared to the level in sera from WT C57BL/6 mice, collected at 12 dpi (Fig. 7F), while no differences were observed in B. crocidurae-specific IgG levels at 12 dpi (Fig. S7C). Overall, transcriptomic analysis, cytokine measurements, and antibody ELISA results indicated that CD55 deletion affected innate and IgM responses following infection. Taken together, all these data suggest that CD55-mediated immune pathways are critical for B. crocidurae infection and that the relapsing fever agent may influence immune signaling.
DISCUSSION
B. crocidurae causes relapsing fever infections in Africa, Asia, and Europe (12, 48, 49). In this study, we identified CD55 as a novel host interaction partner with B. crocidurae and B. persica, two etiologic agents of relapsing fever. CD55 was identified using a BASEHIT screening strategy that combines next-generation sequencing with an advanced yeast display library approach. This yeast display library expresses 1,031 human proteins individually on the surface of yeast cells, consisting of secretory and extracellular proteins. Other than CD55, identified hits included REG4 (regenerating family member 4) and selected cytokines and chemokines, such as CCL24 (eotaxin-2), CCL17 and CCL11 (eosinophil chemotactic protein and eotaxin-1), CXCL3 (macrophage inflammatory protein-2-beta [MIP-2β]), and IL-29 (interferon lambda [IFN-λ1]), that potentially bound to both B. crocidurae and B. persica. It is also interesting that most of these protein candidates were not top hits in our previous screen with Borrelia burgdorferi. These results suggest that BASEHIT is a powerful approach that is capable of identifying strain- and species-specific host binding partners. We examined the interaction between CD55 and B. crocidurae in greater detail because B. crocidurae is a major cause of human disease, readily infects mice, and is known to bind erythrocytes and generate cell aggregates that disrupt the microcirculation (24, 27, 29).

(Figure legend continued from previous page) The key immune pathways enriched included T cell receptor and B cell receptor signaling, natural killer cell-mediated cytotoxicity, NF-κB signaling, antigen processing and presentation, and primary immunodeficiency. (F) B. crocidurae-specific IgM levels in uninfected wild-type C57BL/6 (WT) and CD55 KO mice were compared with those in the infected animals at two time points (12 days and 21 days postinfection). Whole-cell lysate of B. crocidurae was used to coat the wells of a microtiter plate, and serum from uninfected WT, infected WT, uninfected CD55 KO, or infected CD55 KO mice was used at a 1:200 dilution. The binding was measured using a secondary HRP-conjugated anti-mouse IgM. Each data point represents the result for an individual animal in the corresponding group. The bars represent mean values ± SD, and P values were determined using the Student t test.
Our results show that CD55 protects B. crocidurae spirochetes from complement-mediated killing and facilitates interactions with erythrocytes. This effect may be a key strategy adopted by this spirochete to enable its early establishment and dissemination in the blood. CD55 is present on the surface of erythrocytes, where its primary role is to protect erythrocytes from complement-mediated lysis (50, 51). B. crocidurae interacts with erythrocyte surface-localized CD55 and induces the formation of rosettes that allow the spirochete to evade innate immune responses (27). We show that mice lacking CD55 exhibit resistance to B. crocidurae infection, as demonstrated by a lower spirochete burden and enhanced cytokine and chemokine innate immune responses. We determined that CD55-B. crocidurae interactions help in the formation of rosettes, which is a crucial feature of B. crocidurae pathogenesis. Furthermore, protease treatment of B. crocidurae decreases CD55 binding, indicating that this interaction is likely associated with a protein ligand. Overall, CD55 binding to B. crocidurae is critical for its interaction with erythrocytes and pathogenesis, as well as immune evasion.
CD55 is a known complement regulatory protein, and humans with defects in CD55 develop complement hyperactivation, angiopathic thrombosis, and protein-losing enteropathy (CHAPLE disease), a lethal illness that is due to overactivation of complement and innate immunity (52, 53). We hypothesized that the increased resistance of CD55 KO mice to B. crocidurae infection could be related to increased C5a levels in the serum and antimicrobial defenses. Following B. crocidurae infection, the C5a levels increased more in CD55 KO mice than in control animals. Cytokines like IL-6, IL-1α, and CCL5, produced by monocytes, other myeloid cells, or NK cells, were upregulated. We hypothesize that in the absence of CD55, B. crocidurae infection induces inflammation that then increases C5a, CCL3, and CCL5. Increases in these soluble mediators can result in the recruitment and activation of innate immune cells, including NK cells and monocytes that make TNF-α and IL-6. Finally, RNA sequencing elucidated the specific genetic signature associated with B. crocidurae infection in C57BL/6 (WT) and CD55 KO mice. The activated pathways included chemokine signaling pathways and natural killer cell-mediated cytotoxicity pathways.
The activity of complement is tightly regulated by regulatory proteins (decay-accelerating factor [DAF or CD55], membrane cofactor protein [MCP or CD46], complement receptor 1 [CR1 or CD35], and CD59) to balance the response against pathogens and prevent injury of the host. To evade the complement response, pathogens like B. burgdorferi express many different lipoproteins on their surface that bind complement components and interfere with complement activation (54-57). B. burgdorferi surface antigens bind to soluble complement regulators factor H (FH), factor H-like protein, and C4bp and inhibit the activation of the C1 complex, composed of C1q, C1r, and C1s (55, 58). Similarly, different viruses also adopt strategies to thwart the complement attack (56, 59-61). Parasites like Plasmodium falciparum, Entamoeba histolytica, Trichomonas vaginalis, Trypanosoma cruzi, and Schistosoma spp. also use various strategies to escape complement-mediated killing, including the recruitment of complement regulatory proteins and expression of orthologs of complement regulatory proteins to inhibit complement activity (62, 63).
Decay-accelerating factor (DAF or CD55) was first identified as a complement regulator and is a cell surface receptor that is also present in body fluids in a soluble form (64). In addition to inhibiting the early steps of complement activation, CD55 can also influence the activation of T cells and the natural cytotoxicity of NK cells. CD55 binds to CD97, a leukocyte adhesion marker that is involved in the recruitment, activation, and migration of granulocytes. CD55 deficiency increases CD97 expression on the surface of leukocytes and does not affect receptor signaling (65). B. crocidurae's interactions with CD55 may also affect its binding to CD97, and the CD55-CD97 axis may also contribute to B. crocidurae pathogenesis in vivo. Previous studies have shown that the binding of Escherichia coli adhesin to CD55 leads to the induction of the stress-induced ligand MICA (major histocompatibility complex [MHC] class I-related molecule) on epithelial cells (66). In our studies, we did not see differences in Rae-1 (distantly related to MHC class I proteins) expression in CD55 KO mice following infection with B. crocidurae. The increases in proinflammatory cytokines (IL-6, IL-1α, TNF-α, CCL5 [RANTES], and CCL3) and in complement anaphylatoxin C5a may also influence the Borrelia burden in CD55 KO mice. These results suggest that the influence of CD55 on innate immunity may also contribute to B. crocidurae's growth in the mice. To delineate the role of CD55 in B. crocidurae pathogenesis, studies can focus on identifying interacting ligand(s) on the B. crocidurae or B. persica surface. Our results also suggest that relapsing fever species can be further divided into CD55 binding and nonbinding species, and studies can explore whether CD55 binding is related to multiphasic antigenic variation (20).
B. crocidurae is the primary cause of endemic relapsing fever in Western Africa (67). The clinical manifestations are related to the ability of B. crocidurae to affect the blood coagulation system (24,29). Our results demonstrate that CD55 directly influences erythrocyte aggregation and the pathogenesis of B. crocidurae. This interaction shields spirochetes from host immune attack and is associated with an altered host inflammatory response. Our screen also showed that a second relapsing fever spirochete, B. persica, binds to CD55. At present, there is little information about the pathogenesis of B. persica, and further studies will delineate the role of CD55 in B. persica infection and that of other relapsing fever strains, including but not limited to emerging pathogens like B. miyamotoi. Overall, relapsing fever caused by B. crocidurae and other spirochetes remains a major source of morbidity globally (15,68), and it is important to better understand the mechanisms of infectivity in order to develop new therapeutic strategies for this disease. These findings suggest that CD55 plays an important role in the pathogenesis of B. crocidurae infection.
MATERIALS AND METHODS
Ethics statement. All experiments performed in this study were conducted in accordance with the Guide for the Care and Use of Laboratory Animals (69), and efforts were made to reduce animal suffering. Animal experiment protocols were approved by the Institutional Animal Care and Use Committee at Yale University (protocol permit number 07941).
Yeast library screening. Details of library construction and selection are described elsewhere (30). Briefly, a library of barcoded plasmids containing the extracellular portions of 1,031 human proteins was expressed in Saccharomyces cerevisiae strain JAR300 and maintained in SDO-Ura (synthetic drop-out medium, prepared with 20 g/L glucose according to the manufacturer's instructions) (D9535; USBiological). Protein synthesis was induced by culturing the library in medium containing 90% galactose and 10% glucose for 24 h at 30°C. Induced yeast cells were harvested and incubated with biotinylated bacteria for 1 h at 4°C. Yeast cells were incubated with streptavidin microparticles (0.29 μm) (catalog number SVM-025-5H; Spherotech) for 1 h at 4°C. Bead-bound yeasts were selected by magnetic separation and subsequently grown in 1 mL SDO-Ura at 30°C. DNA was extracted from yeast cell libraries and amplified and sequenced using Illumina MiSeq and Illumina version 2 MiSeq reagent kits according to the manufacturer's standard protocols. Enrichment calculations were performed using edgeR (30, 71, 72). The overall enrichment for a gene (relative to the unselected library) was multiplied by the percentage of barcodes associated with the enriched gene (defined as a logFC of >0). The cutoff was selected as a BASEHIT score of 1. This was decided based on our previous study, where nearly all Borrelia species showed interaction with PGLYRP1 at a score of >1 (30). This cutoff score allowed a focus on genes that were highly enriched in the BASEHIT screen and therefore bound more strongly to Borrelia species.
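To make this scoring concrete, a minimal sketch of the calculation as we read it is given below; the table layout, column names, and helper function are our own illustrative assumptions and are not part of the published pipeline.

import pandas as pd

def basehit_scores(barcodes: pd.DataFrame) -> pd.Series:
    # Expected columns (hypothetical): 'gene', per-barcode 'logFC' versus the
    # unselected library, and 'gene_logFC', the overall enrichment of the gene.
    def score(group: pd.DataFrame) -> float:
        overall_enrichment = group["gene_logFC"].iloc[0]
        frac_enriched = (group["logFC"] > 0).mean()  # fraction of barcodes with logFC > 0
        return overall_enrichment * frac_enriched
    return barcodes.groupby("gene").apply(score)

# Candidate binders are then the genes whose score clears the chosen cutoff:
# hits = basehit_scores(table); hits = hits[hits > 1]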
Gene cloning and expression. Human and mouse CD55 (human CD55, amino acids 35 to 353, or mouse CD55, amino acids 35 to 362) genes were cloned into pEZT-Dlux, a modified pEZT-BM vector, as described before (30). Protein was purified from the culture supernatant by Ni-nitrilotriacetic acid (NTA) chromatography and desalted into PBS. The human PGLYRP1 (amino acids 22 to 196) gene was also cloned into pEZT-BM and the expressed protein purified as described before (30). Expi293 cells (ThermoFisher) were transfected with CD55 or PGLYRP1 using the ExpiFectamine 293 transfection kit (ThermoFisher). Protein purity was verified by SDS-PAGE. The protein concentration was measured by the absorbance at 280 nm.
Flow cytometry-based CD55 binding assay. Low-passage-number B. crocidurae spirochetes were cultured to a density of ~10⁶ to 10⁷ cells/mL, washed two times with PBS, and incubated with recombinant human CD55, mouse CD55, IsPDIA3, or PGLYRP1 (all with an 8×His tag) at 4°C for 1 h. After the incubation period, spirochetes were washed three times and fixed in 2% paraformaldehyde (PFA). Spirochetes were blocked in 1% bovine serum albumin (BSA), probed with anti-6×His monoclonal antibody conjugated to Alexa Fluor 488 (AF488) (catalog number MA1-21315-488; ThermoFisher), and run through a BD LSR II instrument (BD Biosciences). These data were analyzed by FlowJo. For competition assays, B. crocidurae spirochetes were incubated with recombinant human CD55-Fc (50 μg/mL) alone or in the presence of mouse CD55-His8 (100 μg/mL). The binding of B. crocidurae to human CD55-Fc was measured using an anti-human CD55 mouse monoclonal antibody (MAB2009; R&D Systems) and a goat anti-mouse IgG (H+L) Alexa Fluor 488-conjugated secondary antibody (ThermoFisher Scientific) (1:1,000). For the protease assays, B. crocidurae cultures were grown to a density of 10⁵ CFU/mL and incubated in the presence or absence of proteinase K (0.2 mg/mL) at 37°C for 10 min. Subsequently, the proteinase K activity was quenched using a Roche cOmplete proteinase inhibitor cocktail, and spirochetes were washed with PBS three times. Borrelia spirochetes were then incubated with recombinant mouse CD55-His8 (50 μg/mL). The binding of B. crocidurae spirochetes to mouse CD55 was measured by flow cytometry using a secondary AF488-conjugated anti-6×His monoclonal antibody.
Immunofluorescence assay for analysis of B. crocidurae's interactions with human and mouse CD55. Spirochetes were grown to a density of ~10⁷ cells/mL and harvested by centrifugation at 5,000 × g for 15 min. Cells were washed twice with PBS containing 2% BSA (PBS-BSA). Spirochetes were incubated with either recombinant human or mouse CD55 or human PGLYRP1 conjugated with a His tag, or with a control protein fused with a His tag, at 50 μg/mL for 1 h at 4°C. After washing two times with PBS and adding an anti-His tag AF488-conjugated secondary antibody (1:50), the samples were incubated for an additional 30 min as described before (73). The spirochetes were washed with PBS and visualized by dark-field microscopy.
Complement activity against B. crocidurae. The effect of complement and antibodies on B. crocidurae growth was measured using microscopy and the BacTiter Glo microbial cell viability assay. To measure complement-mediated killing of B. crocidurae in the presence or absence of recombinant CD55, human CD55 (100 μg/mL) was incubated with B. crocidurae for 2 h in the presence or absence of immune serum (40% and 20%) from mice that were infected 30 days previously with B. crocidurae. Viability was assessed by observing spirochete movement under dark-field microscopy as described before (74, 75). The growth inhibition of B. crocidurae was calculated based on the results for untreated B. crocidurae.
The BacTiter Glo microbial cell viability assay provides a method for determining the number of viable Borrelia spirochetes in culture based on quantitation of the ATP present, by measuring luminescence. The luminescent signal is proportional to the ATP concentration, thus indicating the number of viable Borrelia spirochetes in the culture. To test the effect of human or mouse CD55 on mouse complement and the growth of B. crocidurae, we incubated 1 × 10⁵ B. crocidurae spirochetes under microaerophilic conditions at 33°C for 24 h in a final volume of 300 μL in the presence of immune serum. Immune serum was collected from mice at day 30 postinfection with B. crocidurae and stored at −80°C in aliquots. The percentage of growth inhibition was calculated using viable spirochetes that were incubated in the absence of serum.
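Reading the two viability assays together, the growth-inhibition figure is presumably obtained by comparing treated and untreated cultures; in symbols (our paraphrase, not a formula printed in the paper):

\[ \text{inhibition}(\%) = 100 \times \left(1 - \frac{L_{\text{serum}}}{L_{\text{no serum}}}\right), \]

where \(L\) is the BacTiter Glo luminescence signal, which is proportional to the number of viable spirochetes.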
Quantitative erythrocyte rosetting assay. Log-phase B. crocidurae spirochetes were harvested by centrifugation at 8,000 × g and resuspended to 1 × 10⁸ spirochetes/mL in RPMI containing 10% fetal bovine serum (FBS). B. crocidurae and erythrocytes were preincubated at 37°C for 15 min in different tubes. After preincubation, B. crocidurae spirochetes and erythrocytes were mixed in 96-well, flat-bottom microtiter plates and incubated at 37°C for 30 min. The rosettes were visualized under the EVOS microscope (ThermoFisher). For analyzing the rosette size, the erythrocytes were labeled with the fluorescent dye PKH67 before the assay. The excess dye was quenched using FBS, and erythrocytes were washed with RPMI. The rosette size was calculated using EVOS cell imaging system software.
Erythrocyte rosetting assay by microscopy. Log-phase B. crocidurae spirochetes were harvested by centrifugation at 8,000 × g and resuspended to 1 × 10⁸ spirochetes/mL in RPMI containing 10% FBS. B. crocidurae spirochetes and erythrocytes were preincubated at 37°C for 15 min in different tubes. After the preincubation, 20 μL of B. crocidurae spirochetes and 40 μL of erythrocytes were mixed in 0.2-mL PCR strip tubes and incubated at 37°C for 15 min. Subsequently, 40 μL of supernatant was removed from each tube and 50 μL of fresh medium was added to the tube. The tube was further incubated for 15 min at 37°C, and subsequently, 50 μL of erythrocytes floating at the top was removed without disturbing the rosettes. At the end of the incubation, 200 μL of water was added to lyse the RBCs, and 50 μL of the lysed erythrocyte solution was used to measure hemoglobin, using the QuantiChrom hemoglobin assay kit and measuring absorbance at 405 nm.
Flow cytometry-based interaction assay. Human RBCs from a healthy donor were used for these assays. RBCs were washed twice with PBS and subsequently stained with the cell proliferation dye eFluor 670 (ThermoFisher Scientific) at 5 μM for 5 min in PBS at 37°C. Similarly, healthy growing B. crocidurae spirochetes were stained with EvaGreen dye (Biotium) for 5 min in PBS at 37°C. RBCs and spirochetes were then separately washed three times with medium containing serum (RPMI with 10% FBS). RBCs were preincubated in the presence of 5 μg of anti-human CD55 antibody (MAB2009; R&D Systems) or anti-CD3 antibody (Biolegend). For interaction assays, spirochetes and RBCs were incubated together for 30 min at 37°C and 5% CO₂ in a humidified chamber. Flow cytometry was performed on a FACS LSR-II (BD Biosciences), and data were analyzed with FlowJo (FlowJo, LLC).
Pathogen-free C57BL/6 WT mice (Charles River Laboratories) and CD55 KO mice (C57BL/6 DAF−/−) at 6 to 8 weeks of age were infected intraperitoneally with low-passage-number B. crocidurae (1 × 10⁵ spirochetes). Uninfected mice were used as controls. Blood was collected at different days postinfection to compare the Borrelia burdens. Spleen weights were measured at day 10 postinfection immediately following euthanasia to assess splenomegaly. The protocol for the use of mice was reviewed and approved by the Yale Animal Care and Use Committee.
For video analysis of B. crocidurae interaction with RBCs, blood was collected at day 7 postinfection. The whole blood was immediately diluted in PBS, and interactions were visualized by dark-field microscopy within 1 h of blood draw.
Quantification of B. crocidurae burden. B. crocidurae DNA was extracted from whole-blood samples using the DNeasy blood and tissue kit (Qiagen). Quantitative PCR (qPCR) was performed using iTaq universal SYBR green supermix (Bio-Rad). For quantitative detection of the B. crocidurae burdens within mouse blood samples, qPCR was performed using the flagellin subgroup B gene (flaB), a marker for B. crocidurae detection. The primers used in the assay were FlaB F, GAATTAATCGTGCATCTGAT, and FlaB R, CATCCAAATTTCCTTCTGTTG. The mouse β-actin gene (30, 73) was used to normalize the amount of DNA in each sample.
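The text does not spell out how the normalized burden was computed; a ΔCt-style calculation consistent with the description (flaB signal normalized to mouse β-actin) would look like the sketch below, in which the function name and example values are illustrative assumptions rather than the authors' code.

def relative_burden(ct_flab: float, ct_actin: float) -> float:
    # Delta-Ct relative quantification: normalize the flaB threshold cycle to
    # the beta-actin threshold cycle of the same sample, assuming ~2-fold
    # amplification per cycle.
    delta_ct = ct_flab - ct_actin
    return 2.0 ** (-delta_ct)

# e.g. a sample with flaB Ct = 24.1 and beta-actin Ct = 18.3:
# relative_burden(24.1, 18.3)  ->  2**(-5.8), a low relative burden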
RNA-seq analysis. Total RNA was extracted from whole blood obtained from mice 2 days after the intraperitoneal infection with B. crocidurae. TRIzol was added to the whole blood, and RNA was isolated according to the manufacturer's instructions (Qiagen, CA). RNA was submitted for library preparation using TruSeq (Illumina, San Diego, CA, USA) and sequenced on the Illumina HiSeq 2500 by paired-end sequencing at the Yale Center for Genome Analysis (YCGA). All the transcriptome sequencing (RNA-seq) analyses, including alignment, quantitation, normalization, and differential gene expression analyses, were performed using Partek Genomics Flow software (St. Louis, MO, USA). Specifically, RNA-seq data were trimmed and aligned to the mouse genome (mm10) with the associated annotation file using STAR (version 2.7.3a) (76). The aligned reads were quantified by comparison to Ensembl transcripts release 91 using the Partek E/M algorithm (77), and the subsequent steps of gene-level annotation followed by total count normalization were performed. The gene-level data were normalized by dividing the gene counts by the total number of reads, followed by the addition of a small offset (0.0001). Principal-component analysis (PCA) was performed using default parameters for the determination of the component number, with all components contributing equally in Partek Flow. Volcano plots and hierarchical clustering were produced for the genes that were differentially expressed across the conditions (P < 0.05, fold change of ≥2 for each comparison). Pathway enrichment was also conducted in Partek Flow as described before (78). A gene expression heatmap of the selected genes was further plotted by using ggplot2 and Prism version 8 (GraphPad). The selected immune pathways were further plotted on a bubble diagram by using ggplot2 in R Studio.
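The total count normalization step is described precisely enough to restate in code; the sketch below follows that description (the matrix orientation, genes by samples, is our assumption, and the Partek implementation itself may differ in detail).

import numpy as np

def total_count_normalize(counts: np.ndarray, offset: float = 0.0001) -> np.ndarray:
    # Divide each gene count by the total number of reads in its sample,
    # then add the small offset, as described in the text.
    totals = counts.sum(axis=0, keepdims=True)  # total reads per sample
    return counts / totals + offset

# Example with 3 genes and 2 samples:
# total_count_normalize(np.array([[10., 0.], [5., 20.], [85., 80.]]))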
Measuring the C5a levels in mouse serum by ELISA. The mouse serum was collected at day 4 postinfection. For C5a measurements, the sera were diluted 1:1,000. Mouse complement component C5a DuoSet ELISA kits (R&D Systems, Minneapolis, MN, USA) were used according to the manufacturer's recommendations.
Statistical analysis. The analysis of all data was performed with the Student t test or analysis of variance (ANOVA) in Prism 8.0 software (GraphPad Software, Inc., San Diego, CA). A P value of <0.05 was considered statistically significant.
Data availability. The RNA-seq data are available in the Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information under the accession number: GSE198510.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only.
ACKNOWLEDGMENTS
We thank Alan Barbour for helpful advice in the project. We thank Jiri Cerny, Akash Gupta, Maryna Golovchenko, Natalie Rudenko, and Ulrike Munderloh for help with the BASEHIT screen. We acknowledge Wenchao Song (University of Pennsylvania) for providing CD55 (DAF) knockout mice. We acknowledge Jun Liu and Chunyan Wang for their help in structural studies. We also thank Alje van Dam, Amsterdam UMC, for providing B. crocidurae and Sven Bergström and Guy Baranton from Pasteur Institute Paris for B. persica strains.
This work was supported by grants from the NIH (grants number AI126033, AI138949, and AI157014) and the Steven and Alexandra Cohen Foundation to Erol Fikrig. This work was also supported by ZonMW as part of the project Ticking on Pandora's Box, a study into tick-borne pathogens in Europe (project no. 522003007), to Dieuwertje Hoornstra and Joppe W. Hovius.
A.M.R., C.E.R., and N.W.P. are inventors on a patent application describing the BASEHIT technology. All the other authors declare no conflict of interest.
Attitudes Towards the Sexuality of Men with Intellectual Disability: The Effect of Social Dominance Orientation
Usually, people with ID (intellectual disability) have been deprived of the right to live in the community, receive an education, work, marry, and procreate. The aim of this study was to investigate the attitudes of 122 university students towards the sexuality of men with ID. We hypothesized that Social Dominance Orientation (SDO) should hinder an open attitude towards the right to sexuality of men with ID. Results showed that attitudes are generally passably positive and that students high on SDO were more oriented to reject sexual rights for men with ID than students scoring lower on this factor.
Introduction
Even among disability professionals, there are distorted and misleading attitudes about disability, as shown by some research (Licciardello & Di Marco, 2010).
Disability often requires that someone help the disabled individual to do some things, or everything. Disabled people, hence, regardless of their specific impairment, are affected by the social attitudes that enable (or prevent) them from living a whole and satisfactory life, regardless of actual limitations.
Thus, the Social Model of Disability (Oliver, 1981) suggests thinking of disability as the consequence of a disabling society rather than of impairment alone: an environment that fails to support a good development according to each person's specific characteristics and needs. If there is one group which has historically been denied the dignity and value attached to the status of being human, it would have to be people with Intellectual Disabilities (ID) (Herr et al., 2003). Usually, people with ID have been deprived of the right to live in the community, receive an education, work, marry and procreate (Griffiths et al., 2003).
Attitudes and beliefs about ID are therefore crucial. Furthermore, if we consider the specific field of sexuality and affectivity, the theme is complicated and complex. As the research of Aunos and Feldman (2002) shows, the sexuality of disabled people evokes discomfort, mostly in parents, who try to suppress the sexual expression of their children and generally oppose the idea that they can live an autonomous adult sexuality.
Thus, the beliefs of caregivers can influence the development of the sexual identity of disabled people and their real possibilities of living an adult sexuality (Swango-Wilson, 2008), and disabled people are often kept from realizing their own sexuality in the way they would like. Moreover, by denying disabled individuals the possibility to live their own sexual life, society seems to perceive adults with ID as asexual beings (Milligan & Neufeldt, 2001); at the same time, it does not accept the socio-sexual expression of persons with ID. In effect, attitudes to the intimate relationships of adults with ID are one reflection of the inclusiveness of a community (Cuskelly & Gilmore, 2007). This requires professionals to be flexible, because they should learn to put the needs of patients before their own ideological system.
In this study, we address university students, as citizens, because, regardless of professional interests, they may raise or lower barriers to the autonomy, freedom and quality of life of disabled people.
We have considered only the attitudes towards the sexuality of men with ID, in order to deal with the issue by a specific point of view.
Social Dominance Orientation And Disability
People characterized by Social Dominance Orientation (SDO; Sidanius & Pratto, 1999) think that reality is characterized by a continuous competition between groups, and encourage/promote a social stratification into dominant and inferior groups (Pratto et al., 1994). This kind of personality has been studied especially concerning ethnic prejudice (Ekehammar et al., 2004), while very little attention has been given to prejudice towards other minority groups, for instance disabled people (Vezzali et al., 2010). In this regard, some studies (e.g., Zachariae & Frindte, 2002; Brandes & Crowson, 2009) considered and deepened the link between SDO and attitudes towards disabled people.
Some of these studies, for instance, found that high levels of SDO positively correlate with prejudice towards disabled people. Moreover, Brandes and Crowson (2009) found that SDO and the discomfort felt in interaction with a disabled person are the major causes of negative attitudes towards disability.
Other studies showed that this orientation is decisive in the management of intergroup interactions and attitudes (Mari et al., 2007). From this research, we can see that SDO reduces positive evaluations of and emotions towards the outgroup, increasing the perception of differences and negative feelings. SDO is therefore an important variable to measure when we want to consider the complexity of the social condition of disabled people (and of minority groups in general).
Aims and Hypothesis
This study explored the attitudes of university students towards the sexuality of men with ID, considering also the effect of SDO on these attitudes.
Our hypothesis was that higher levels of SDO should hinder an open attitude towards the needs of men with ID.
Participants and Materials
Participants were 122 university students (men n=63, women n=59). The mean age of the sample was 23.41 years (SD=2.97; range=18-30 years).
The questionnaire included:
- Three Semantic Differentials (Osgood et al., 1957), each composed of 5 pairs of bipolar adjectives evaluated on a 7-point scale, measuring the representations of the Self, of the "normal" person, and of the disabled person.
- The ASQ-ID, Attitudes to Sexuality Questionnaire: Individuals with an Intellectual Disability (Cuskelly & Gilmore, 2007). It is composed of 28 items; participants responded on a 6-point Likert scale to the version of the scale regarding men with intellectual disability. Cuskelly and Gilmore (2007) identified four meaningful subscales: Sexual Rights (13 items), Parenting (7 items), Non-Reproductive Sexual Behavior, and Self-Control (3 items, e.g., the belief that people with ID are less able to control themselves sexually than people without ID).
- The SDOS, Social Dominance Orientation Scale (Sidanius, Pratto et al., 1994), in the Italian version by Aiello and colleagues. Participants responded to the items on a 7-point scale.
Results and Discussion
The representation of the disabled person (M=4.48, sd=1.08), investigated using the Semantic Differentials, was just above the intermediate point (=4). On the contrary, the representations of the Self (M=5.50, sd=.95) and of the normal person (M=5.11, sd=.95) were more positively oriented. Our students assessed themselves more positively than the normal person and, especially, than disabled people [F(2,242)=36.36; p<.001].
Concerning sexuality, investigated by the ASQ-ID, moderately positive attitudes emerged, with some statistically significant differences between the means of the domains of the scale [F(3,363)=5.54, p=.001].
In fact, students were more favorable towards men with ID. To verify the effects of SDO on the investigated aspects, a regression analysis was applied for each measure. Concerning the representations, SDO negatively predicted the evaluation of disabled people (β=-0.44, p<.001).
A similar effect emerged for the attitude towards the sexual rights of men with ID (β=-0.40, p<.001). The belief that society should be hierarchically organized and that not all groups should have the same rights affects even the consideration of the parenthood of disabled men (β=-0.44, p<.001). Furthermore, stronger SDO was associated with a lower esteem of their self-control (p<.001). SDO had no influence on the self-representation or on the representation of the person without ID. Considering the aspects valued in the general attitude towards the sexuality of men with ID scale, we can see that social dominance orientation negatively correlates with the general attitude towards the sexuality of men with ID (β=-0.29, p=.002): people who strongly believe that some social groups are inferior to others are less inclined to think that disabled men have sexual abilities, rights and needs. All results are shown in Table 2.
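For readers who want to reproduce this kind of analysis, each regression reported above is a simple bivariate model; a sketch in Python follows (the data values and column names are invented for illustration and are not the study's data).

import pandas as pd
import statsmodels.api as sm

# Hypothetical per-participant scores: 'sdo' = mean SDOS score,
# 'asq_total' = general ASQ-ID attitude score.
df = pd.DataFrame({
    "sdo":       [2.1, 3.4, 4.0, 1.8, 5.2, 2.9],
    "asq_total": [4.8, 4.1, 3.6, 5.0, 2.9, 4.3],
})

# Standardize both variables so the slope is a standardized beta,
# comparable to the coefficients reported in the text.
z = (df - df.mean()) / df.std()
model = sm.OLS(z["asq_total"], sm.add_constant(z["sdo"])).fit()
print(model.params["sdo"], model.pvalues["sdo"])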
Conclusions
Sexuality is a very complex and problematic issue. In our study, there were some data that seem to be really important.
First of all, in line with our hypothesis, we can see that the more students are oriented to dominance, the less inclined they are to the acceptance and comprehension of the sexual needs of men with ID. From the results obtained with the semantic differentials, the less positive representation of disabled people (the outgroup) and the more positive connotation of normal people (the ingroup) increase as social inequalities are considered rightful.
In our study, SDO plays a negative role in the attitudes of students towards the sexuality of men with disability. This negative role reverberates in what they can really do, as citizens, to meet affective and sexual needs and to build opportunities for disabled people who, as disabled, depend on the chances given by society, as we said before. It is important, then, to make youth capable of perceiving as unjust an unequal society in which not everybody has the same rights and equal opportunities. Finally, this study considered attitudes only with respect to disabled men; it would be interesting to see if (and in that case how) these attitudes change towards the sexuality of disabled women.
On 2-Selmer groups of twists after quadratic extension
Let $E/\mathbb{Q}$ be an elliptic curve with full rational 2-torsion. As d varies over squarefree integers, we study the behaviour of the quadratic twists $E_d$ over a fixed quadratic extension $K/\mathbb{Q}$. We prove that for 100% of twists the dimension of the 2-Selmer group over K is given by an explicit local formula, and use this to show that this dimension follows an Erd\H{o}s--Kac type distribution. This is in stark contrast to the distribution of the dimension of the corresponding 2-Selmer groups over $\mathbb{Q}$, and this discrepancy allows us to determine the distribution of the 2-torsion in the Shafarevich--Tate groups of the $E_d$ over K also. As a consequence of our methods we prove that, for 100% of twists d, the action of $\operatorname{Gal}(K/\mathbb{Q})$ on the 2-Selmer group of $E_d$ over K is trivial, and the Mordell--Weil group $E_d(K)$ splits integrally as a direct sum of its invariants and anti-invariants. On the other hand, we give examples of thin families of quadratic twists in which a positive proportion of the 2-Selmer groups over K have non-trivial $\operatorname{Gal}(K/\mathbb{Q})$-action, illustrating that the previous results are genuinely statistical phenomena.
Introduction
Let E/Q be an elliptic curve with E[2] ⊆ E(Q), and consider the family of quadratic twists of E over Q: {E_d : d ∈ Z squarefree}. Let K/Q be a quadratic extension. In this paper, as d varies we study the 2-Selmer groups Sel_2(E_d/K) of E_d over K.
1.3. Twists of the Weil restriction of scalars. Write A = Res_{K/Q} E for the Weil restriction of scalars of E from K to Q. This is a principally polarised abelian surface over Q. For each squarefree integer d we have (see Lemma 4.20)

Sel_2(E_d/K) ≅ Sel_2(A_d/Q).

In particular, we can view Theorem 1.1 as giving the distribution of 2-Selmer groups in the quadratic twist family over Q of the abelian surface A. We state this formally as Theorem 6.15.
It is also possible to use this perspective to draw parallels between our work and existing work in the literature. Specifically, we show in §4.3 that for each d, the twist A_d admits an isogeny φ_d : A_d → E_d × E_{dθ} whose kernel is a subgroup of A_d[2]. The order of the Selmer group Sel_{φ_d}(A_d/Q) associated to φ_d is then, up to a quantity bounded independent of d, a lower bound for the size of the Selmer group Sel_2(A_d/Q). In turn, writing φ̂_d for the dual isogeny, a lower bound for the size of Sel_{φ_d}(A_d/Q) is given by the Tamagawa ratio

T(φ_d) = #Sel_{φ_d}(A_d/Q) / #Sel_{φ̂_d}((E_d × E_{dθ})/Q).

For any isogeny between abelian varieties, the Tamagawa ratio is known to admit a local formula, and in our case this is essentially given by the right hand side of (1.4) (see §4.3 for details). Consequently, one explanation for the unbounded growth of dim Sel_2(E_d/K) seen in Theorem 1.1 is that the Tamagawa ratios T(φ_d) tend to grow with d. Similarly, growth of the relevant Tamagawa ratios is the phenomenon underlying the behaviour of 2-isogeny Selmer groups of quadratic twist families of certain elliptic curves seen in work of Klagsbrun-Lemke Oliver [KLO16] and Xiong-Zaharescu [XZ08]. Thus the behaviour we uncover can be viewed as an extension of those works to a special class of abelian surfaces.
1.4. Prime twists of the congruent number curve. As a complement to our main results, we provide examples of thin subfamilies of quadratic twists in which significantly different behaviour occurs to that exhibited by the full family. Specifically, take E to be the congruent number curve

E : y² = x³ − x.

Further, take K = Q(√θ) to be an imaginary quadratic extension of class number 1 in which 2 is inert. Thus θ ∈ {−3, −11, −19, −43, −67, −163}. For a prime p, define nonnegative integers e_1(E_p/K) and e_2(E_p/K) such that we have an F_2[G]-module isomorphism

Sel_2(E_p/K) ≅ (F_2[G])^{e_1(E_p/K)} ⊕ (F_2)^{e_2(E_p/K)}.

Theorem 1.9 (Theorem 9.13). The natural density of primes p for which e_1(E_p/K) = e_1 and e_2(E_p/K) = e_2 exists and is given by an explicit formula. In particular, the proportion of prime twists for which the G-action on Sel_2(E_p/K) is non-trivial is equal to 5/16.
1.5. Overview of the proofs. The proofs of the results outlined above require a combination of algebraic and analytic methods. Where possible we have tried to decouple these, so that the algebraic results stand alone. The algebraic work is largely carried out in §4 and §7, and is based on work of Kramer [Kra81]. For a squarefree integer d, a key role in our results is played by the group Sel_{C_d}(Q, E_d[2]) of Definition 4.1. This is a subgroup of the 2-Selmer group Sel_2(E_d/Q) of E_d over Q. Our key algebraic result is Corollary 4.8, which shows that the Selmer group Sel_2(E_d/K) admits the explicit description of Theorem 1.3 as soon as this auxiliary Selmer group vanishes.

The main statistical theorems of the paper then depend on proving Theorem 6.1, which shows that Sel_{C_d}(Q, E_d[2]) is trivial for 100% of d. To do this we draw on analytic techniques developed by Heath-Brown, and used to determine the distribution of the 2-Selmer groups of quadratic twists of the congruent number curve [HB93, HB94]. That work takes as a point of departure the explicit description of 2-Selmer groups of elliptic curves with full 2-torsion provided by 2-descent. In Proposition 7.8 we similarly give an explicit description of Sel_{C_d}(Q, E_d[2]) as a subgroup of (Q^×/Q^{×2})².
In fact, for the analytic part of the argument we have opted to replace Sel C d (Q, E d [2]) with a certain subgroup S d of Q × /Q ×2 (see Definition 8.5) whose vanishing implies the vanishing of Sel C d (Q, E d [2]), but which admits a simpler explicit description. In §8.5 we give a formula for the order of S d as a sum of Jacobi symbols in a form which can be treated by the analytic tools of Heath-Brown mentioned above. An alternative method at this point might be to draw on the alternative approaches of Kane [Kan13] or Smith [Smi17].
It is worth remarking that the passage from Sel C d (Q, E d [2]) to S d is somewhat wasteful. By following the work of Heath-Brown [HB93] more closely, one can similarly describe the order of Sel C d (Q, E d [2]) as a sum of Jacobi symbols. This would likely lead to significant improvements to the error bounds in Theorem 6.1. We have opted not to do this in favour of working with the simpler and more explicit sums arising from S d . In this respect, the resulting analysis is much closer to that carried out by Fouvry-Klüners in [FK07] to determine the distribution of 4-ranks of class groups of quadratic fields.
1.6. Layout of the paper. In §2 we introduce some notation that will be in use throughout.
In §3 we review some basic properties of Selmer structures and their associated Selmer groups which we use in later sections.
In §4 we study algebraically the behaviour of 2-Selmer groups of elliptic curves in quadratic extensions, building on work of Kramer [Kra81]. Along the way we give two reinterpretations of Kramer's work, one in the language of Selmer structures, and another in terms of the Weil restriction of scalars.
In §5 we study the analytic properties of the function g(d) of Notation 5.5 (essentially the right hand side of (1.4)) which gives a lower bound for dim Sel 2 (E d /K). In particular, we show in Proposition 5.8 that g(d) follows an Erdős-Kac type distribution.
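To get a feel for what an Erdős-Kac type distribution means in practice, one can check numerically that (ω(d) − log log d)/√(log log d) is approximately standard normal for random integers d; the snippet below illustrates the classical Erdős-Kac theorem for ω (not the function g(d) itself, whose distribution is the subject of Proposition 5.8):

import math, random
from sympy import primefactors

samples = []
for _ in range(2000):
    d = random.randrange(10**6, 10**7)
    omega = len(primefactors(d))          # number of distinct prime factors of d
    ll = math.log(math.log(d))
    samples.append((omega - ll) / math.sqrt(ll))

mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
print(mean, var)  # should be roughly 0 and 1 respectively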
In §6 we state our main technical result, Theorem 6.1, on the vanishing of the auxiliary Selmer group Sel C d (Q, E d [2]) for 100% of twists d. From this we deduce Theorems 1.1, 1.3 and 1.7, along with related results.
The proof of Theorem 6.1 is carried out across §7 and §8. In §7 we give some algebraic preliminaries. In §8 we build on this by describing the order of Sel C d (Q, E d [2]) as a sum of Jacobi symbols, before following closely the strategy of [FK07,§5] to study the behaviour of these sums as d varies.
In §9 we prove Theorem 1.9 concerning the behaviour over certain quadratic extensions of the 2-Selmer groups of prime twists of the congruent number curve.

1.7. Acknowledgements. We would like to thank Alex Bartel for suggesting we look at the behaviour of 2-Selmer groups in quadratic extensions from a statistical point of view, and for countless helpful comments. We would also like to thank Peter Koymans, Carlo Pagano and Efthymios Sofos for helpful discussions, and the anonymous referee for carefully reading the paper and providing several helpful corrections and suggestions.
Throughout this work, AM was supported by the Max-Planck-Institut für Mathematik in Bonn, and RP was supported by a PhD scholarship from the Carnegie Trust for the Universities of Scotland.
Notation and conventions
In this section we detail some notation and conventions which will be used throughout the paper.
2.1. Arithmetic functions. Given a positive integer n we write ω(n) for the number of distinct prime factors of n. We denote by µ the Möbius function, and for coprime integers m and n with n odd and positive, we write (m/n) for the corresponding Jacobi symbol.

2.2. Galois cohomology. For a field F of characteristic 0 we write F̄ for a (fixed once and for all) algebraic closure of F, and denote its absolute Galois group by G_F = Gal(F̄/F). For a positive integer n we write µ_n for the G_F-module of n-th roots of unity in F̄, and write µ = ∪_{n≥1} µ_n. By a G_F-module M we mean a discrete module M on which G_F acts continuously. For i ≥ 0 we write H^i(F, M) as a shorthand for the continuous cohomology groups H^i(G_F, M). We define the dual of M to be

M* = Hom(M, µ).

This is a G_F-module with action given by setting, for σ ∈ G_F and φ ∈ M*, (σφ)(m) = σ(φ(σ^{−1}m)).
For i ≥ 0, if L/F is a finite extension we denote the corresponding restriction and corestriction maps by

res_{L/F} : H^i(F, M) → H^i(L, M)  and  cor_{L/F} : H^i(L, M) → H^i(F, M),

respectively. When L/F is Galois and the action of G_F on M factors through Gal(L/F), we write H^i(L/F, M) as a shorthand for the cohomology group H^i(Gal(L/F), M).
2.3. Number fields and completions.
For a number field F and a place v of F, we write F_v for the completion of F at v. We implicitly fix embeddings F̄ ↪ F̄_v for each place v and in this way view G_{F_v} as a subgroup of G_F for each v. In this way, for a G_F-module M, we obtain restriction maps on cohomology

res_v : H^i(F, M) → H^i(F_v, M).

When v is non-archimedean we denote by F_v^{nr} the maximal unramified extension of F_v, and write

H^1_{nr}(F_v, M) = ker(H^1(F_v, M) → H^1(F_v^{nr}, M))

for the subgroup of unramified classes in H^1(F_v, M).
2.4. The Kummer image for abelian varieties. Still taking F to be a number field, for an abelian variety A over F, and for a place v of F, we denote by S(A/F_v) the image of the local Kummer map

(2.1)  δ_v : A(F_v)/2A(F_v) ↪ H^1(F_v, A[2]),

the coboundary map arising from the short exact sequence of G_{F_v}-modules

(2.2)  0 → A[2] → A(F̄_v) → A(F̄_v) → 0,

the second map being multiplication by 2.

2.5. Quadratic twists. For a field F of characteristic 0, and for an element d of F^×/F^{×2}, we write χ_d for the associated quadratic character. Thus χ_d is the function from G_F to {±1} defined by, for σ ∈ G_F, the formula

χ_d(σ) = σ(√d)/√d.

Given an abelian variety A over F we write A_d for the quadratic twist of A by d. That is, A_d is an abelian variety over F, equipped with an F̄-isomorphism

(2.3)  ψ_d : A → A_d

satisfying σψ_d = χ_d(σ) · ψ_d for all σ ∈ G_F.
Selmer structures
In this section we review the properties of Selmer structures and their associated Selmer groups which will be used later. For details see e.g. [MR04,Was97] and the references therein.
Throughout this section we take F to be a number field. We take M to be a finite G_F-module annihilated by 2, so that M is a finite dimensional F_2-vector space. All dimensions will be taken over F_2.
3.1. Local duality. For each place v of F, we have the local Tate pairing

H^1(F_v, M) × H^1(F_v, M*) → H^2(F_v, µ) ≅ Q/Z,

given by the composition of cup-product and the local invariant map. This pairing is non-degenerate, and induces an isomorphism between H^1(F_v, M*) and the F_2-linear dual of H^1(F_v, M) (and we have the corresponding isomorphism globally also). For any non-archimedean place v ∤ 2 of F at which M is unramified, the subgroups H^1_{nr}(F_v, M) and H^1_{nr}(F_v, M*) are orthogonal complements under this pairing.

3.2. Selmer structures. A Selmer structure L for M is a collection L = {L_v}_v of subspaces L_v ⊆ H^1(F_v, M), one for each place v of F, such that L_v = H^1_{nr}(F_v, M) for all but finitely many places. The associated Selmer group Sel_L(F, M) is defined by the exactness of

0 → Sel_L(F, M) → H^1(F, M) → ∏_v H^1(F_v, M)/L_v.

It is a finite dimensional F_2-vector space.
For each place v we write L*_v for the orthogonal complement of L_v under the local Tate pairing, so that L*_v is a subspace of H^1(F_v, M*). We define the dual Selmer structure L* for M* by taking L* = {L*_v}_v. We refer to Sel_{L*}(F, M*) as the dual Selmer group.

3.3. The Greenberg-Wiles formula. The following theorem due to Greenberg and Wiles describes the difference in dimension between a Selmer group and its dual.
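In the notation above, the standard form of this result, as stated in e.g. [MR04], reads:

\[
\dim \operatorname{Sel}_{\mathcal{L}}(F, M) - \dim \operatorname{Sel}_{\mathcal{L}^*}(F, M^*)
= \dim H^0(F, M) - \dim H^0(F, M^*)
+ \sum_v \bigl( \dim \mathcal{L}_v - \dim H^0(F_v, M) \bigr),
\]

where the sum runs over all places v of F and all dimensions are taken over \(\mathbb{F}_2\).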
Example 3.5 (The 2-Selmer group of an elliptic curve). Let E/F be an elliptic curve. For each place v of F we have the Kummer image S(E/F_v) ⊆ H^1(F_v, E[2]), and the collection {S(E/F_v)}_v is a Selmer structure for E[2] whose associated Selmer group is the classical 2-Selmer group Sel_2(E/F). This is a consequence of the fact that, for a non-archimedean place v ∤ 2 of F at which E has good reduction, we have

S(E/F_v) = H^1_{nr}(F_v, E[2]).

One can also give an elementary proof of this by computing the local terms individually (see e.g. [Sch96, Proposition 3.9]).
2-Selmer groups over quadratic extensions
For the rest of the paper we fix a quadratic extension K/Q. Write K = Q(√θ) for a squarefree integer θ, and write G = Gal(K/Q). Moreover we fix an elliptic curve E/Q. Note that at this point we make no assumption on the 2-torsion of E; in later sections (§6 onwards) it will be necessary to reduce to the case of full 2-torsion, but we shall be clear when this restriction is made. Denote by Sel_2(E/K) the 2-Selmer group of E/K. The conjugation action of G endows Sel_2(E/K) with the structure of an F_2[G]-module. The structure of Sel_2(E/K) has been studied by Kramer in [Kra81]. In this section, since it will be useful for what follows, we give a reinterpretation of part of this work in the language of Selmer structures (see also work of Mazur-Rubin [MR07, MR10] for a similar perspective). The results in this section can be adapted in a straightforward way to general quadratic extensions of number fields (and this is the setting in which Kramer proves his results). However, we stick to quadratic extensions of Q since this is the setting in which all our applications are carried out.
As in §2.5, associated to the squarefree integer θ is the quadratic twist E^θ, which comes equipped with the isomorphism ψ = ψ_θ from E to E^θ. Whilst this isomorphism is only defined over K, it restricts to an isomorphism of G_Q-modules from E[2] to E^θ[2]. We use this to identify H^1(Q_v, E[2]) and H^1(Q_v, E^θ[2]) for each place v of Q, and identify the corresponding global cohomology groups similarly. In particular, for each place v of Q we may view both the Kummer images S(E/Q_v) and S(E^θ/Q_v) as subgroups of H^1(Q_v, E[2]). Similarly, we view both Sel_2(E/Q) and Sel_2(E^θ/Q) as subgroups of H^1(Q, E[2]).

4.1. Selmer structures associated to E/K. We begin by defining two Selmer structures for E[2] over Q, each of which will capture a part of Sel_2(E/K).
Definition 4.1. Define the Selmer structures F and C for the G_Q-module E[2] by setting, for each place v of Q,

F(E/Q_v) = res^{−1}_{K_w/Q_v}(S(E/K_w))  and  C(E/Q_v) = cor_{K_w/Q_v}(S(E/K_w)),

where w is any choice of place of K extending v.

Lemma 4.2. The restriction map res_{K/Q} : H^1(Q, E[2]) → H^1(K, E[2]) sends Sel_F(Q, E[2]) into Sel_2(E/K).

Proof. This follows from the compatibility of local and global restriction maps.
Recall the definition of the local norm map from Notation 1.2.
Lemma 4.3. The following properties hold for the Selmer structure C .
(i) For each place v of Q, the groups cor_{K_w/Q_v}(S(E/K_w)) and res^{−1}_{K_w/Q_v}(S(E/K_w)) are orthogonal complements under the local Tate pairing, where w is any choice of place of K extending v.

(ii) For each place v of Q we moreover have

C(E/Q_v) = δ_v(N_{K_w/Q_v}(E(K_w))) = S(E/Q_v) ∩ S(E^θ/Q_v),

where δ_v is the local Kummer map (2.1) and the intersection takes place in H^1(Q_v, E[2]).

(iii) We have Sel_C(Q, E[2]) = Sel_2(E/Q) ∩ Sel_2(E^θ/Q), and moreover cor_{K/Q}(Sel_2(E/K)) ⊆ Sel_C(Q, E[2]).

Proof. (i): That cor_{K_w/Q_v}(S(E/K_w)) and res^{−1}_{K_w/Q_v}(S(E/K_w)) are orthogonal complements under the local Tate pairing is noted by Kramer in the paragraph following Equation 10 in [Kra81]. Specifically, it follows from [AW67, Proposition 9] and [NSW08, Corollary 7.1.4] that res_{K_w/Q_v} and cor_{K_w/Q_v} are adjoints with respect to the local Tate pairings. It follows that we have inclusions cor_{K_w/Q_v}(S(E/K_w)) ⊆ F_v^* and res_{K_w/Q_v}(cor_{K_w/Q_v}(S(E/K_w))^*) ⊆ S(E/K_w)^*.
Since S (E/K w ) is its own orthogonal complement, the result follows.
(ii): The first equality follows from the fact that the coboundary maps arising from the respective Kummer sequences (2.2) over K w and Q v commute with corestriction. The second equality is [Kra81, Proposition 7].
(iii): The claim that Sel C (Q, E[2]) = Sel 2 (E/Q) ∩ Sel 2 (E θ /Q) is a formal consequence of (ii). The inclusion follows from (i) and compatibility of the local and global corestriction maps.
Remark 4.4. Let v be a place of Q. Since the Selmer structure F is dual to C, it follows formally from Lemma 4.3 and the fact that S(E/Q_v) is its own orthogonal complement, that we have

F(E/Q_v) = S(E/Q_v) + S(E^θ/Q_v),

where the sum is taken inside H^1(Q_v, E[2]). We may use Theorem 3.4 to determine the difference between the dimensions of the Selmer groups Sel_F(Q, E[2]) and Sel_C(Q, E[2]).
Lemma 4.5. We have Proof. Since for each place v of Q, the groups C (E/Q v ) and F (E/Q v ) are orthogonal complements under the local Tate pairing, we have Along with Lemma 4.3(ii) this gives Theorem 3.4 then gives and the result follows from (3.6).
4.2. The 2-Selmer group of E/K. We now apply the results above to study the 2-Selmer group of E/K.
Proof. We first claim that the sequence is exact. To see this, consider the exact sequence of G_Q-modules where ε is the augmentation map (sending Σ_{g∈G} λ_g g to Σ_{g∈G} λ_g) and G_Q acts on G via the quotient map G_Q ↠ G. Taking the tensor product over F_2 with E[2], and then taking Galois cohomology over Q, gives an exact sequence of G_Q-modules. Having shown the claim, the result now follows by combining the inflation-restriction exact sequence with Lemma 4.2 and Lemma 4.3(iii). (i) There is a short exact sequence where the first map is inflation.
(iii) The G-action on Sel 2 (E/K) is trivial.
(ii): follows from (i) and Lemma 4.5 upon noting that, since Gal(K/Q) is cyclic, we have .
(See e.g. [AW67, Section 8] for the description of the cohomology of cyclic groups we are using in the above.) (iii): follows from (i) and the fact that the image of the restriction map from For a similar result to Corollary 4.8 (ii) which holds when K/Q is replaced by a cyclic degree p extension for an odd prime p, see [Bra14, Theorem 1.2].
Remark 4.9. Combining Lemma 4.5 with Lemma 4.6 allows one to recover the formula for the rank of E/K given in [Kra81, Theorem 1]. In the second part of that theorem, Kramer studies the group Sel C (Q, E[2])/cor K/Q (Sel 2 (E/K)), which he refers to as the everywhere local/global norms group, and shows that it carries a non-degenerate alternating pairing given by the sum of the Cassels-Tate pairings on Sel 2 (E/Q) and Sel 2 (E θ /Q) (recall from Lemma 4.3(iii) that Sel C (Q, E[2]) = Sel 2 (E/Q) ∩ Sel 2 (E θ /Q)). In particular, this group has even dimension.
When Sel C (Q, E[2]) is not necessarily trivial we still get a lower bound for the dimension of the 2-Selmer group of E over K.
Lemma 4.10. We have Proof. From Lemma 4.6 we find The result now follows from Lemma 4.5, noting that which is a consequence of the explicit description of cohomology of cyclic groups. 4.3. The Weil restriction of scalars. Here we give a slight reinterpretation of the above material in terms of the restriction of scalars of E from K to Q. The material of this section is closely related to, and inspired by, that appearing in [MR07,§3]. Following Milne [Mil72,§2], the restriction of scalars may be described as a special case of a general construction of twists of powers of E, which we now recall. In what follows, for abelian varieties A and B defined over Q, we endow HomQ(A, B) (the group ofQ-homomorphisms from A to B) with the G Q action ϕ → σ ϕ, where for σ ∈ G Q the homomorphism σ ϕ sends P ∈ A(Q) to σϕ(σ −1 P ). In this way we view GL n (Z) as a subgroup of AutQ(E n ). Now suppose that Λ is a free rank-n Z-module equipped with a continuous G Q -action. Choosing a basis for Λ gives rise to a homomorphism ρ Λ : G Q −→ GL n (Z), which we view as a 1-cocycle valued in AutQ(E n ). The class of ρ Λ in H 1 (Q, AutQ(E n )) does not depend on the choice of basis. Associated to this cocycle class is a twist of E n , which we denote Λ ⊗ E. This is an abelian variety over Q of dimension n, equipped with aQ-isomorphism ϕ Λ : E n → Λ ⊗ E satisfying ϕ −1 Λ • σ ϕ Λ = ρ Λ (σ) for all σ ∈ G Q . The relevant restriction of scalars can then be defined as follows.
Definition 4.12. Denote by Z[G] the integral group ring of G = Gal(K/Q). We define the restriction of scalars of E relative to K/Q to be the abelian surface E ⊗ Z[G] . We denote it Res K/Q E. By the above, it comes equipped with an isomorphism ϕ : E ×E → Res K/Q E, defined over K, and such that for all σ ∈ G Q , and all P, Q ∈ E(Q), we have In particular, ϕ −1 composed with projection onto the first coordinate gives an isomorphism Remark 4.13. The restriction of scalars Res K/Q (E) is more typically defined as the unique scheme over Q representing the functor on Q-schemes As in [MR07, Section 2], this is equivalent to the construction given above.
Notation 4.14. To ease notation, in what follows we write A = Res K/Q (E). Thus A is an abelian surface defined over Q.
One has (4.15) Sel 2 (E/K) ∼ = Sel 2 (A/Q) . Moreover, it turns out that the groups Sel C (Q, E[2]) and Sel F (Q, E[2]) are the Selmer groups associated to a certain isogeny between A and E × E θ as we now explain.
Let ϕ : E × E → A be as in Definition 4.12, and let ψ = ψ θ : E ∼ −→ E θ be as in (2.3). Now define the isogeny (a priori over K) One readily computes that in fact φ is defined over Q.
We denote by Sel_φ(A/Q) the Selmer group associated to φ. For each place v of Q, we denote by δ_{φ,v} the coboundary map associated to the short exact sequence 0 → A[φ] → A → E × E^θ → 0 induced by φ. Then the collection {im(δ_{φ,v})}_v defines a Selmer structure for A[φ], with associated Selmer group Sel_φ(A/Q). Proof. With ϕ : E × E → A as in Definition 4.12, one readily checks that ϕ^{−1} restricts to a G_Q-isomorphism between A[φ] and the diagonal embedding of E[2] into E × E. In this way we identify H^1(Q, A[φ]) and H^1(Q, E[2]). We make corresponding identifications locally at each place of Q also. We will show that this identification maps For i = 1, 2 write ∆_i : E → E × E for the homomorphisms defined by ∆_1(P) = (P, P) and ∆_2(P) = (P, −P).
This gives maps
which are readily checked to be defined over Q. For each place v of Q these maps fit into a commutative diagram where the right-most vertical maps are induced by the natural inclusions into the respective factors. On cohomology this induces a commutative diagram The result now follows from Remark 4.4.
Remark 4.18. One can show that the product polarisation on E × E descends to a polarisation on A defined over Q rather than just K as is a priori the case (this follows from the material in [How01,§2]). Thus A is a principally polarised abelian surface. We can then view the dual isogeny to φ as an isogeny Denote by Sel φ (E×E θ /Q) the associated Selmer group. It follows formally from Lemma 4.17 and the fact that the Selmer structure C is dual to F , that we have With more work, one can show that the composition (in either direction) of φ and φ is multiplication by 2, and that the maps induce the sequence (4.7).
Remark 4.19. In the terminology of [Kla17,§2], the quantity is called the Tamagawa ratio associated to the isogeny φ. That the Tamagawa ratio for elliptic curves is given by a local formula goes back to Cassels [Cas65, Theorem 1.1]. The corresponding result for abelian varieties, which in particular can be applied to A and φ, is given by Milne in [Mil06,§I.7]. This gives an alternative approach to the local formula of Lemma 4.5. We remark though that Milne's result is very closely related to Theorem 3.4, so this is not really a different proof.
In the next section we will consider the 2-Selmer groups Sel 2 (E d /K) associated to quadratic twists of E by squarefree integers d. As the next lemma shows, this is equivalent to considering the 2-Selmer groups associated to the quadratic twist family over Q of A.
Lemma 4.20. Let d be a squarefree integer. Let E_d denote the quadratic twist of E by d, and let A_d denote the quadratic twist of A by d. Then we have a Q-isomorphism Res_{K/Q}(E_d) ≅ A_d of abelian surfaces. In particular, we have Sel_2(E_d/K) ≅ Sel_2(A_d/Q). Proof. Both Res_{K/Q}(E_d) and A_d are twists of E × E, so we need only show that the resulting classes in H^1(G_Q, Aut_Q̄(E × E)) agree. Write χ_d and χ_θ for the quadratic characters associated to Q(√d)/Q and K/Q respectively. The resulting cocycle satisfies, for (P, Q) ∈ E(Q̄) × E(Q̄), On the other hand, fix ψ_1 : E × E → A as in Definition 4.12, and fix also ψ_2 :
Quadratic twists and a distribution result
Recall that K = Q(√θ)/Q is a quadratic extension, G = Gal(K/Q) and E/Q is an elliptic curve. We now consider the effect of replacing E/Q by its quadratic twist E_d/Q, for a squarefree integer d. We denote by F_d and C_d the Selmer structures of the previous section with local conditions F(E_d/Q_v) and C(E_d/Q_v) respectively. For a squarefree integer d we write χ_d : G_Q → {±1} for the associated quadratic character, defined by χ_d(σ) = σ(√d)/√d.

5.1. The cokernel of the local norm map. It turns out that the cokernel of the local norm map varies in a predictable way as we vary d. First, we fix some notation.
Notation 5.1. Fix a choice Σ of a finite set of places of Q containing the real place, 2, all primes which ramify in K/Q, and all primes at which E has bad reduction.
We begin with the following observation.
Lemma 5.2. Let p / ∈ Σ be a prime divisor of d. Then E d (Q nr p ) has no points of exact order 4. In particular, the same is true of E d (Q p ).
Proof. By assumption E has good reduction at p, so E[4] is unramified at p (that is, the inertia group I_p at p acts trivially on E[4]). Thus any element σ of I_p acts on E_d[4] as multiplication by χ_d(σ). Since χ_d is ramified at p by assumption, the restriction of χ_d to I_p is non-trivial, and one has E_d(Q_p^nr)[4] = (E_d[4])^{I_p} = E_d[2], giving the result.
Lemma 5.3. Let d be a squarefree integer, let p / ∈ Σ be a prime, and let p be a prime of K lying over p. Then Next, suppose that p ∤ d. Since also p / ∈ Σ, E d has good reduction at p, and K p /Q p is unramified. It follows from [Maz72,Corollary 4.4] that N Kw/Qp is surjective, giving the result. Now suppose that p | d and p is inert in K/Q. In particular, the local extension K p /Q p is unramified of degree 2. Lemma 5.2 and a dimension count then show that the horizontal maps (induced by the inclusion of We thus have It remains to break into cases according to dim since the 2-torsion is either already full over Q p or given by the splitting of an irreducible cubic. In the case that dim E(Q p )[2] = 1, noting that since E has good reduction at p, Q p (E[2])/Q p is unramified, we have dim E(K p )[2] = 2, completing the proof.
Remark 5.4. At primes p ∈ Σ the cokernel of the local norm map is more complicated and depends on the reduction type of E d /Q p . See [Kra81] or [KT82] for more details. However, since the isomorphism class of E d over Q p depends only on the class of d in Q × p /Q ×2 p , the same is true of the cokernel of the local norm map.
To ease notation in what follows, we make the following definition.
Notation 5.5. For a squarefree integer d, write where for a place v of Q, we denote by w a choice of extension of v to K. Further, write Note that by Lemma 4.10, the function g(d)−2 gives a lower bound for dim Sel 2 (E d /K).
Proposition 5.6. As d varies in squarefree integers, we have where the implied constant depends only on the initial curve E and the quadratic field K.
Proof. Since the places in Σ contribute O(1) to g(d), we may ignore them. The result now follows from Lemma 5.3.
5.2. The distribution of g(d).
Notation 5.7. Let δ E,K be the natural density of primes p such that ω E,K (p) = 1.
The possible values of δ E,K may be computed by applying the Chebotarev density theorem to the extension K(E[2])/Q and are given by the following table: In the following result of Erdős-Kac type, we determine the asymptotic distribution of the function g(d) when the 2-torsion field of E does not interact with K. Since dim Sel 2 (E d /K) ≥ g(d) − 2 by Lemma 4.10, this shows that dim Sel 2 (E d /K) is (in a precise sense) typically at least as large as a constant times log log(d).
Proposition 5.8. Suppose that K ⊄ Q(E[2]). Then the quantity (g(d) − 2δ_{E,K} log log |d|)/√(4δ_{E,K} log log |d|) follows a standard normal distribution. That is, for all z ∈ R we have

lim_{X→∞} #{d squarefree : |d| < X, (g(d) − 2δ_{E,K} log log |d|)/√(4δ_{E,K} log log |d|) ≤ z} / #{d squarefree : |d| < X} = (1/√(2π)) ∫_{−∞}^{z} e^{−t²/2} dt.

Proof. Since by Proposition 5.6 this differs from g(d) by a bounded amount, it is enough to prove the same assertion with g replaced by γ. Moreover, since this function satisfies γ(d) = γ(−d), it is enough to prove that γ has this distribution on the positive squarefree integers. We will do this by combining the method of moments with [GS07, Prop. 4]. Specifically, in the notation of that proposition, take for a function ε(X) = o(1) to be chosen later. Further, let γ_P be the strongly additive function which agrees with γ for p ∈ P, and takes the value 0 on primes p ∉ P. Note that, still using the notation of [GS07, Prop. 4] we can take Using the explicit form of the Chebotarev density theorem given in [LO77], standard arguments give µ_P(γ) = 2δ_{E,K} log log(X) + O(log ε(X)) and σ_P(γ)² = 4δ_{E,K} log log(X) + O(log ε(X)).
Taking X sufficiently large in the conclusion of [GS07, Prop. 4] shows that for any k ≥ 0 we have In particular, the kth moments of (γ_P − µ_P(γ))/σ_P(γ) converge to those of a normal random variable with mean 0 and variance 1. Note that for n ≤ X we have Thus the kth moments of (γ − 2δ_{E,K} log log(X))/√(4δ_{E,K} log log(X)) converge as X → ∞ to those of the standard normal distribution. It then follows from [Bil95, Theorem 30.2, Example 30.1] that γ becomes normally distributed with mean 2δ_{E,K} log log(X) and variance 4δ_{E,K} log log(X) in the limit X → ∞, i.e.
Remark 5.9. In the last step of the proof we have used the standard result that a function f becomes normal as X → ∞ with mean µ(X) := C_0 log log(X) and variance σ²(X) := C_1 log log(X) for some constants C_0, C_1 > 0 if and only if (f − µ(X))/σ(X) becomes normal as X → ∞ with mean 0 and variance 1. This can be proved directly.
Remark 5.10. In the case that K ⊆ Q(E[2]), the function γ(d) in the proof of Proposition 5.8 is 0. In particular, by Proposition 5.6, we have that the kth moments of g(d) are bounded.
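The Erdős–Kac behaviour described in Proposition 5.8 can be illustrated numerically. The following Python sketch is an added illustration only, not part of the argument: it uses the proxy set P = {p ≡ 1 (mod 4)} of density 1/2 in place of the density-δ_{E,K} set of primes entering g(d), and samples arbitrary integers rather than squarefree ones; since convergence is in log log X, the agreement at accessible ranges is only rough.

```python
# Numerical illustration (added, not part of the argument): Erdos-Kac behaviour of a
# strongly additive function gamma(d) = 2 * #{p dividing d : p in P}, where P is a set of
# primes of density delta.  As a proxy for the density-delta set of primes relevant to g(d)
# we use P = {p : p = 1 (mod 4)} (delta = 1/2); both this choice and sampling arbitrary
# integers rather than squarefree ones are simplifications of this sketch.
import math
import random
from sympy import primefactors

def gamma(d, in_P):
    # strongly additive: contributes 2 for each distinct prime factor of d lying in P
    return 2 * sum(1 for p in primefactors(d) if in_P(p))

def compare_with_normal(X, num_samples, z_values, delta=0.5):
    in_P = lambda p: p % 4 == 1
    mu = 2 * delta * math.log(math.log(X))
    sigma = math.sqrt(4 * delta * math.log(math.log(X)))
    stats = [(gamma(random.randint(2, X), in_P) - mu) / sigma for _ in range(num_samples)]
    for z in z_values:
        empirical = sum(1 for s in stats if s <= z) / len(stats)
        Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
        print(f"z = {z:+.1f}: empirical {empirical:.3f}, Phi(z) {Phi:.3f}")

if __name__ == "__main__":
    random.seed(0)
    compare_with_normal(X=10**7, num_samples=2000, z_values=[-1.0, 0.0, 1.0])
```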
We have the following basic corollary showing that, for 100% of d, dim Sel 2 (E d /K) is larger than any fixed integer whenever the 2-torsion field of E does not interact with K. This is in stark contrast with the situation for the Selmer groups Sel 2 (E d /Q), whose distribution is determined by Kane in [Kan13, Thm. 3].
Proof. By Lemma 4.10 we have dim Sel_2(E_d/K) ≥ g(d) − 2. The result now follows from Proposition 5.8.
Remark 5.12. By Lemma 4.20, Corollary 5.11 also applies with Sel 2 (E d /K) replaced by the Selmer groups Sel 2 (Res K/Q E) d /Q associated to the quadratic twists of the Weil restriction of E from K to Q.
Main results
Recall that K = Q( √ θ)/Q is a quadratic extension with G = Gal(K/Q). From this section onwards, we make the restriction that our choice of elliptic curve E/Q has E[2] ⊆ E(Q).
For a squarefree integer d, a consequence of Lemmas 4.5 and 4.6 is that, roughly speaking, the auxiliary Selmer group Sel_{C_d}(Q, E_d[2]) controls the discrepancy between dim Sel_2(E_d/K) and the function g(d) of Notation 5.5. Thus to improve on Proposition 5.8 and gain full control of the Selmer groups Sel_2(E_d/K) as d varies, it suffices to control these auxiliary groups. We achieve this under the assumption that all 2-torsion of E is defined over Q. Specifically, across Sections 7 and 8 we will prove that, under this assumption, the Selmer group Sel_{C_d}(Q, E_d[2]) is trivial for 100% of d. That is:

Theorem 6.1. For 100% of squarefree integers d (ordered by absolute value), we have Sel_{C_d}(Q, E_d[2]) = 0.

Remark 6.2. We will in fact show that the number of squarefree d with |d| < X for which It is likely that with more work this bound could be improved significantly, however we have not attempted to do so.
Remark 6.3. By Lemma 4.3 we have Sel_{C_d}(Q, E_d[2]) = Sel_2(E_d/Q) ∩ Sel_2(E_dθ/Q), where the intersection is taken inside H^1(Q, E[2]). Thus Theorem 6.1 shows that for 100% of squarefree d, the groups Sel_2(E_d/Q) and Sel_2(E_dθ/Q) share only the identity element.
Before embarking on the proof, we use the results of previous sections to draw several consequences of this theorem. 6.1. Statistical results for 2-Selmer groups. An immediate consequence of Theorem 6.1 is that the conclusion of Corollary 4.8 holds for 100% of squarefree d when we have full 2-torsion.
Corollary 6.4. For 100% of squarefree d (ordered by absolute value), the Gal(K/Q)-action on Sel 2 (E d /K) is trivial, and we have As a consequence, we can upgrade Proposition 5.8 to the following Erdős-Kac type result determining the distribution of the full 2-Selmer group.
Corollary 6.6. The quantity (dim Sel_2(E_d/K) − log log |d|)/√(2 log log |d|) follows a standard normal distribution. Proof. By Corollary 6.4, amongst all squarefree integers d with |d| < X, outside a set of cardinality o(X) we have The result now follows from Proposition 5.8 noting that since E[2] ⊆ E(Q), we have that δ_{E,K} = 1/2.
Proof. Since dim X(E_d/K)[2] ≤ dim Sel_2(E_d/K) for all d, by Corollary 6.6 we need only show that the limit in the statement (or more precisely the limit superior of the left hand side of the statement) is bounded above by Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^{−t²/2} dt. This follows from Corollary 6.6 thanks to [Kan13, Thm. 3], which gives adequate control of the Mordell-Weil component of Sel_2(E_d/K). First, for any squarefree integer d, the standard short exact sequence giving the equality Now fix a real number z and a positive real number M. Partitioning into cases according to Dividing through by the number of squarefree integers d with |d| ≤ X, taking the limsup X → ∞, and applying Kane's theorem [Kan13, Thm. 3] to both E and E^θ (since E has no cyclic 4-isogeny defined over Q the same is true for E^θ, allowing us to apply Kane's result without further assumptions), we find as a consequence of Corollary 6.6 that lim sup where the α_r are defined in Kane's Theorem 2. Since the α_r determine a probability distribution on the set of r ∈ Z_{≥0}, taking the limit M → ∞ gives the result.
Remark 6.8. It seems reasonable to expect that Corollary 6.7 remains true without the assumption that E has no cyclic 4-isogeny defined over Q. However, since no analogue of Kane's result is known in this setting we have not been able to prove this.
6.3. Statistical Results for Mordell-Weil groups. We now give some consequences for the Mordell-Weil groups of the E d /K. We begin with the following algebraic results. Write G = Gal(K/Q).
Notation 6.9. We write Λ(E_d/K) := E_d(K)/E_d(K)_{tors}. We refer to this as the Mordell-Weil lattice. The action of G on E_d(K) makes Λ(E_d/K) into a G-module.
For a G-module M, we denote by M(−1) the G-module which is isomorphic to M as an abelian group but with G-action twisted by multiplication by −1. That is, the new action of the generator σ of G is given by m ↦ −σ(m). Proof. By [CR90, Theorem 34.31], there exist unique a, b, c ∈ Z_{≥0} such that Λ(E_d/K) ≅ Z^a ⊕ Z(−1)^b ⊕ Z[G]^c, where Z denotes a rank 1 free Z-module with trivial G-action. Note that we have an inclusion of G-modules The right hand side has trivial G-action, as follows from the vanishing of Sel_{C_d}(Q, E_d[2]) combined with Corollary 4.8(iii). Thus Λ(E_d/K)/2Λ(E_d/K) has trivial G-action also. Thus, c = 0. Via the natural K-isomorphism E_d ≅ E_dθ, we can identify the points of E_d(K) on which the generator of G acts as multiplication by −1 with E_dθ(Q). The result follows.
. Proof. By Lemma 6.10 we must have As a consequence, take B to be a Z-basis for Λ(E d /K) such that for all v ∈ B we have σ(v) ∈ {v, −v}. LetB be a lift of B to E d (K). Note that E d (K)/2E d (K) has a basis comprising of the images of the elements ofB and two linearly independent vectors from the submodule is trivial by Corollary 4.8(iii). In particular ±v +u = σ(v) ≡ v in E d (K)/2E d (K), and so u ∈ 2E d (K). Since E d (K) has no 4-torsion, u = 0 and so σ(v) = ±v. Thus the morphism of abelian groups Λ( The result then follows from (6.12).
Proof. Note that for each odd prime p, at most 2 quadratic twists of E have rational p-torsion (otherwise E would have at least 3-dimensional p-torsion over a multiquadratic extension, which is impossible). For the isogeny of Remark 4.21, write φ_d for its dual, and denote by Sel_φ(A_d/Q) and Sel_{φ_d}(E_d × E_dθ/Q) the associated Selmer groups.
Theorem 6.15. In the notation above, we have the following.
(i) The quantity (dim Sel_2(A_d/Q) − log log |d|)/√(2 log log |d|) follows a standard normal distribution. That is, for every z ∈ R we have and dim Sel_2(A_d/Q) is given by the formula on the right hand side of (6.5).
Proof. The first part follows from Corollary 6.6 and Lemma 4.20. Using Lemma 4.17, Remark 4.18 and Lemma 4.20 the second part is then an immediate consequence of Theorem 6.1, Corollary 4.8(i) and Corollary 6.4.
Remark 6.16. Assume that E has no cyclic 4-isogeny defined over Q. Then, since X(A d /Q) ∼ = X(E d /K), as a consequence of Corollary 6.7 we can replace in Theorem 6.15.
Explicit local conditions for full 2-torsion
In this section we make preparations for the proof of Theorem 6.1 by making the results of §4 explicit in the case that E has full rational 2-torsion.
Recall that K = Q(√θ)/Q is a quadratic extension and E/Q is a fixed elliptic curve with E[2] ⊆ E(Q). Further, we fix a Weierstrass equation E : y² = (x − a_1)(x − a_2)(x − a_3) for E where, without loss of generality, a_1, a_2, a_3 ∈ Z. Set α = a_1 − a_2, β = a_1 − a_3, and γ = a_2 − a_3. Note that the primes of bad reduction for E all divide 2αβγ, and that E[2] = {O, P_1, P_2, P_3} where P_i = (a_i, 0). As in Notation 5.1 we fix a finite set Σ of places of Q containing the real place, the prime 2, all primes which ramify in K/Q, and all primes at which E has bad reduction. Note in particular that Σ contains all primes dividing 2αβγ.
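To see why only primes dividing 2αβγ can be primes of bad reduction, note (a routine check, added here for convenience) that the discriminant of this model is
\[
\Delta_E = 16\,(a_1-a_2)^2 (a_1-a_3)^2 (a_2-a_3)^2 = 16\,(\alpha\beta\gamma)^2 ,
\]
so E has good reduction at every prime not dividing 2αβγ.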
7.1. Quadratic twists. Let d be a squarefree integer. The quadratic twist E_d/Q is given by the Weierstrass equation E_d : y² = (x − da_1)(x − da_2)(x − da_3).
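As a quick check that this model really is the quadratic twist (an added verification; the change of variables below is our choice rather than notation from the text), over Q(√d) the map (x, y) ↦ (dx, d√d·y) satisfies
\[
\bigl(d\sqrt{d}\,y\bigr)^2 = d^3 \prod_{i=1}^{3}(x - a_i) = \prod_{i=1}^{3}\bigl(dx - d a_i\bigr),
\]
and so defines an isomorphism E → E_d over Q(√d).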
The following lemma describes the local conditions C (E d /Q v ) of Definition 4.1 at primes p / ∈ Σ. For a place v of Q, we denote by δ d,v : ) the coboundary map associated to the sequence (2.2) with A = E d and F = Q.
Lemma 7.2. Let p be a prime with p / ∈ Σ. Then (i) if p ∤ d, we have Proof. Let p be a prime of K lying over p. (i): By Lemma 5.3 we have N Kp/Qp E d (K p ) = E d (Q p ). The first equality in Lemma 4.3(ii) thus gives The second equality follows from the fact that p is odd and E d has good reduction at p.
(ii): when p splits in K/Q the local extension In particular, it suffices to show that the restriction of δ d,p to E d [2] is injective, which follows from Lemma 5.2.
(iii): by Lemma 5.3 and the fact that E has full 2-torsion, it follows from a dimension count that N Kp/Qp E(K p ) = 2E(Q p ). The result now follows from Lemma 4.3.
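The standard fact used in part (i) of the proof can be spelled out as follows (recalled for convenience): for an odd prime p at which E_d has good reduction, the image of the local Kummer map coincides with the unramified subgroup,
\[
\delta_{d,p}\bigl(E_d(\mathbb{Q}_p)/2E_d(\mathbb{Q}_p)\bigr) \;=\; H^1_{\mathrm{nr}}(\mathbb{Q}_p, E_d[2]) \;:=\; \ker\Bigl(H^1(\mathbb{Q}_p, E_d[2]) \to H^1(\mathbb{Q}_p^{\mathrm{nr}}, E_d[2])\Bigr),
\]
both sides having F_2-dimension equal to dim E_d(Q_p)[2].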
Remark 7.3. Taking orthogonal complements, the above result also determines the local groups F (E d /Q p ) for p / ∈ Σ.
7.2. Explicit local conditions. We now use the fact that E d has full rational 2-torsion to give an explicit description of → µ 2 is the Weil pairing. This induces an isomorphism In this description, for each place v of Q, the local Tate pairing ) then becomes the map . We now define a further Selmer structure, whose associated Selmer group contains Sel C d (Q, E d [2]) as a subgroup, and which admits a cleaner explicit description.
Definition 7.6. Define the Selmer structure C d for E d [2] (viewed as a G Q -module) via the local conditions ) the associated Selmer group. Note that by construction, is that now Lemma 7.2 describes all non-trivial Selmer conditions.
Notation 7.7. Write N for the squarefree product of all (finite) primes p ∈ Σ. Further, write d = ad ′ d ′′ , where d ′ is the product of all primes p | d such that both p / ∈ Σ and p splits in K/Q, and d ′′ is the product of all primes p | d such that both p / ∈ Σ and p is inert in K/Q.
For d ∈ Z squarefree, we identify H^1(Q, E_d[2]) with (Q^×/Q^{×2})² as in §7.2, and further identify (Q^×/Q^{×2})² with the set of pairs of squarefree integers. For a prime p and an integer n coprime to p, we write (n/p) for the Legendre symbol, taking the value 1 if n is a square modulo p, and −1 otherwise.
Proposition 7.8. With the notation and identifications of Notation 7.7, the Selmer group ) consists of pairs (x 1 , x 2 ) of squarefree integers such that the following conditions all hold: (i) we have x i | N d ′ for i = 1, 2, (ii) we have x i p = 1 for all p | d ′′ and for i = 1, 2, Proof. By Lemma 7.2 and the definition of the local groups C (E d /Q v ), we have C (E d /Q p ) = 0 for all primes p with p / ∈ Σ such that both p | d and p is inert in K/Q, and C (E d /Q p ) = H 1 nr (Q p , E d [2]) for each prime p such that both p / ∈ Σ and p ∤ d. These conditions are equivalent to conditions (i) and (ii) in the statement. Since in the definition of Sel C d (Q, E d [2]) there are no conditions imposed at primes p ∈ Σ, in light of Lemma 7.2(ii) it suffices to show that condition (iii) is equivalent to the condition that for each prime p | d such that both p / ∈ Σ and p splits in K/Q. Since S (E d /Q p ) is its own orthogonal complement under the local Tate pairing, (x 1 , x 2 ) is in S (E d /Q p ) if and only if it pairs trivially with each element of δ d,p (E d [2]). Now P d,1 = (da 1 , 0) and P d,2 = (da 2 , 0) is a basis for E d [2], and by (7.5) we have The result follows.
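As a computational aside (an added sketch, not from the paper), the divisibility and quadratic-residue conditions (i) and (ii) of Proposition 7.8 are easy to test by machine. In the sketch below the names N, d1 and d2 are stand-ins for N, d′ and d′′ of Notation 7.7 (so N·d1 is assumed squarefree and coprime to d2), and condition (iii) is not implemented, so the pairs returned form only a superset of the Selmer group.

```python
# Added sketch: enumerate the pairs (x1, x2) satisfying conditions (i) and (ii) of
# Proposition 7.8.  Condition (iii) is omitted, so this is only a superset of the
# Selmer group; N, d1, d2 stand for N, d' and d'' of Notation 7.7.
from itertools import combinations
from sympy import primefactors, jacobi_symbol

def squarefree_divisors(n):
    """All positive and negative squarefree divisors of the squarefree integer n."""
    ps = primefactors(n)
    divs = []
    for r in range(len(ps) + 1):
        for combo in combinations(ps, r):
            d = 1
            for p in combo:
                d *= p
            divs.extend([d, -d])
    return divs

def is_square_mod_all(x, d2):
    # condition (ii): x must be a square modulo every prime p dividing d''
    return all(jacobi_symbol(x % p, p) == 1 for p in primefactors(d2))

def candidate_pairs(N, d1, d2):
    # condition (i): each coordinate divides N * d'
    xs = [x for x in squarefree_divisors(N * d1) if is_square_mod_all(x, d2)]
    return [(x1, x2) for x1 in xs for x2 in xs]
```

For example, candidate_pairs(6, 5, 7) lists all pairs of signed squarefree divisors of 30 whose residues are squares modulo 7; imposing condition (iii) as well would cut this down to the actual group.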
8. Proof of Theorem 6.1

Recall that K = Q(√θ)/Q is a quadratic extension with Galois group G, and E/Q is an elliptic curve over Q with E[2] ⊆ E(Q), given by a Weierstrass equation y² = (x − a_1)(x − a_2)(x − a_3) for a_1, a_2, a_3 ∈ Z. Recall also that we have defined integers α = a_1 − a_2, β = a_1 − a_3, and γ = a_2 − a_3, and that the integer N is taken to be the product of all primes in the set Σ of Notation 5.1. The aim of this section is to prove Theorem 6.1. Specifically, we will show the following, strictly stronger, result.
Theorem 8.2. We have

lim_{X→∞} #{d squarefree : |d| < X, Sel_{C_d}(Q, E_d[2]) = 0} / #{d squarefree : |d| < X} = 1.

It suffices to prove this for the Selmer group of Definition 7.6 rather than for Sel_{C_d}(Q, E_d[2]), since the latter is a subgroup of the former. We begin by defining a further group S_d determined by simpler local conditions. Specifically, we wish to 'decouple' the variables x_1 and x_2 appearing in Proposition 7.8. We first introduce some notation.
Notation 8.3. We introduce the following 3 sets of primes: P_0 := {p ∉ Σ, p split in K/Q, and p non-split in Q(√αβ)/Q}, P_1 := {p ∉ Σ, p split in K/Q, and p split in Q(√αβ)/Q}, P_2 := {p ∉ Σ, p inert in K/Q}. (If αβ is a square in Q we take P_0 := ∅ and P_1 the collection of primes not in Σ which split in K/Q.) Note that the sets Σ, P_0, P_1 and P_2 give a partition of the set of all primes into 4 pairwise disjoint subsets.
For i = 0, 1, 2, we define F i to be the set of positive squarefree integers n all of whose prime factors lie in P i . Note that for i = j we have F i ∩ F j = {1}. We write F i · F j for the collection of squarefree integers n which can be written as a product n = n i n j for some n i ∈ F i and n j ∈ F j . Note that such a decomposition is necessarily unique.
Remark 8.4. Note that provided Q( √ αβ) K, P 0 and P 1 have Dirichlet density 1/4, and P 2 has density 1/2. If Q( √ αβ) ⊆ K then P 0 = ∅ and P 1 and P 2 both have Dirichlet density 1/2. Definition 8.5. For d a squarefree integer, define the subgroup S d of Q × /Q ×2 as follows. First, write (uniquely) d = ad 0 d 1 d 2 where a | N , d 0 ∈ F 0 , d 1 ∈ F 1 , and d 2 ∈ F 2 . Now define S d to be the set of squarefree integers S d := x sq. free : We allow x to be either positive or negative.
We will show the following. As explained below, this is sufficient to prove Theorem 8.2.
Proof of Theorem 8.2 assuming Theorem 8.7. Theorem 8.7 combined with Lemma 8.6 shows that the x 1 -coordinate of any element of Sel C d (Q, E d [2]) is trivial for 100% of squarefree d. By symmetry, the same must then be true of the x 2 -coordinate since we can relabel a 1 and a 2 in the equation (7.1) for our elliptic curve in order to interchange the roles of x 1 and x 2 . This shows the limit statement of Theorem 8.2, and running the same argument but keeping track of error terms proves the general result.
We now begin preparations for the proof of Theorem 8.7.
Notation and preparations.
Notation 8.8. Given a positive integer n we write ω(n) for the number of distinct prime factors of n. For i = 0, 1, 2 we write ω i (n) for the number of distinct prime factors of n which lie in P i . We denote by µ the Möbius function.
We will use frequently the following lemma controlling generalised divisor sums.
Lemma 8.9. Let a 0 , a 1 , and a 2 be non-negative real numbers. Then we have Proof. This follows from a (significantly more general) result of Shiu [Shi80]. Define the multiplicative function f : Z → R ≥0 by setting, for any k ≥ 1, f (p k ) = a i for p ∈ P i (i = 0, 1, 2), and taking f (p) = 1 for p ∈ Σ. We then wish to bound the sum X−Y <n≤X f (n). It follows from Remark 8.4 that we have The result now follows from [Shi80, Theorem 1] (the conditions (i) and (ii) needed for that theorem follow in our setting from well known bounds on the divisor function).
We begin by showing that this is sufficient to prove Theorem 8.7.
Proof of Theorem 8.7 assuming Proposition 8.10. We first show that the weights are at least 1 for 100% of squarefree d. That is, we claim that To see this, fixing any λ > 1 we have By Lemma 8.9 the right hand side is ≪ X log(X)^{λ/4 + 1/(2λ) − 3/4}. Optimising over λ we find that when λ = √2 the exponent is 1/√2 − 3/4 < −0.042, giving the claim. Now fix 1 < γ < 7/8 + √17/8. By the claim we have where above d is implicitly taken squarefree. The result now follows from Proposition 8.10.
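For completeness, the optimisation asserted here is elementary (a one-line check, added for convenience): the function f(λ) = λ/4 + 1/(2λ) satisfies
\[
f'(\lambda) = \tfrac14 - \tfrac{1}{2\lambda^2} = 0 \iff \lambda = \sqrt{2}, \qquad
f(\sqrt{2}) = \tfrac{\sqrt{2}}{4} + \tfrac{1}{2\sqrt{2}} = \tfrac{1}{\sqrt{2}} \approx 0.7071,
\]
so the exponent at λ = √2 is 1/√2 − 3/4 ≈ −0.0429 < −0.042, as claimed.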
Remark 8.12. The reason for the introduction of the weight γ is that, in passing from the group Sel [FK07,§5] are allowed to vary over all positive squarefree integers, whilst ours are constrained to lie in the thin families F j . This necessitates changes to the argument in Fouvry-Klüners' first and fourth families, which correspond to our §8.6.7 and §8.6.6 respectively.
8.5. Expressing the sum in terms of Jacobi symbols. We now begin preparations for the proof of Proposition 8.10 by expressing the relevant sum in terms of Jacobi symbols. We first define the following sums which will be ubiquitous in what follows.
Definition 8.13. Let λ and η be squarefree divisors (either positive or negative) of N . For a tuple (D i ) 0≤i≤7 of coprime positive odd integers, write Remark 8.18. The proof above shows that the reason for excluding the terms where λ = D 0 = D 2 = D 3 = 1 in the definition of S γ (λ, η, X) above is to remove the identity element of S d from the count. Now fix 1 < γ < 7/8 + √ 17/8 as in the statement of Proposition 8.10 . In light of Lemma 8.14 we want to study the sums S γ (λ, η, X). such that all of the following hold: • we have D 0 , D 1 ∈ F 0 , D 2 , D 3 , D 4 , D 5 ∈ F 1 , and D 6 , D 7 ∈ F 2 , • we have 7 i=0 D i ≤ X, • if λ = 1, then D 0 , D 2 and D 3 are not all 1.
We thus write .
We also define n i (0 ≤ i ≤ 7) so that the D i are required to lie in F n i (e.g. n 0 = n 1 = 0).
Proposition 8.23. For any 1 < γ < 7/8 + √17/8, and for any (positive or negative) divisors λ and η of N, we have S_γ(λ, η, X) = o(X). Moreover, when γ = 1/4 + √17/4 we have It is immediate from Lemma 8.14 that Proposition 8.23 implies Proposition 8.10 and so, via Theorem 8.7, we obtain Theorem 8.2. The rest of the section is occupied with the proof of Proposition 8.23. 8.6.1. The contribution from D_0, D_2, D_3 = 1 and λ = θ. Recall that K = Q(√θ) for some squarefree integer θ (necessarily dividing N). We first show that the contribution to S_γ(θ, η, X) coming from D_0 = D_2 = D_3 = 1 is negligible, since leaving this in would prevent a uniform argument at a later point. Note that when D_0 = D_2 = D_3 = 1 all Jacobi symbols appearing in (8.22) are equal to 1 except those that involve λ = θ. Moreover, since elements of F_2 are products of primes inert in K, any n ∈ F_2 has (θ/n) = µ(n). On the other hand, we similarly have (θ/n) = 1 for all n ∈ F_1. Consequently, the contribution to S_γ(θ, η, X) from tuples with D_0 = D_2 = D_3 = 1 is given by In the above, to pass from the left hand side to the right hand side we have set r = D_1 D_4 D_5 and n = D_6 D_7, noting that e.g. given r ∈ F_0 · F_1 there are 2^{ω_1(r)} ways of writing r as a product D_1 D_4 D_5 where D_1 ∈ F_0 and D_4, D_5 ∈ F_1, and that this multiplicity cancels the contribution of κ. Now since Σ_{m|n} µ(m) is equal to 0 if n > 1, and 1 if n = 1, we find (8.25) |RHS of (8.24)| = where for the bound we are using Lemma 8.9.
8.6.2. Number of prime factors of the variables. We now show that the contribution coming from D i with a large number of prime factors is negligible. This will be important in §8.6.6. Set Ω = 4e · (log log(X) + B 0 ) with B 0 as in [FK07,Lemma 11], and let Σ 1 be the contribution to S γ (λ, η, X) from the tuples (D i ) ∈ D(X) satisfying Writing n = i D i we have Applying the Cauchy-Schwarz inequality and arguing using [HR00, Lemma A] as in [FK07, §5.3] (paragraph above Equation (30)) we find Σ 1 ≪ X log(X) −1 . 8.6.3. Ranges of the variables. We now divide the ranges of summation into intervals, and treat these intervals separately. Specifically, we set (8.27) ∆ := 1 + 1 log(X) 2 and divide the ranges of the variables into intervals [∆ n , ∆ n+1 ] for n = 0, 1, 2, ..., noting that 1 is the only integer in the n = 0 interval. For i = 0, ..., 7 we let A i denote a number of the form ∆ n with 1 ≤ ∆ n ≤ X, let A = (A i ) 0≤i≤7 , and define where, in light of §8.6.2 and (8.25), we define D ′ (X) to be the subset of D(X) consisting of tuples (D i ) i such that ω(D i ) ≤ Ω for each i, and such that, if λ = θ, then not all of D 0 , D 2 and D 3 are equal to 1. Since for α small positive we have log(1 + α) ≈ α, for X large log(X)/ log(∆) ≈ log(X) 3 , so there are order log(X) 24 expressions (8.28) as A varies. Following [FK07, §5.4] we split the collection of all A into families and treat each in turn.
First family:
i A i large. In order to exploit oscillations of the Jacobi symbols it will be necessary to allow the variables D i to range (essentially) freely in the interval To this end, we first deal with the case where the product of the A i is large, where the condition Π i D i ≤ X is relevant. Specifically, the first family of the A is defined by the condition The argument here is essentially identical to that occurring between Equations (33) where for the last inequality we are using that Note that if A does not satisfy (8.29) then the condition i D i ≤ X is made automatic by the restrictions on the intervals the D i lie in, and may henceforth be dropped. 8.6.5. Second family: two large factors corresponding to linked indices. We introduce the parameter X † := log(X) 78 , and consider the A such that Here the argument is almost identical to that given between Equations (40) and (42) in [FK07], ultimately relying on a result of Heath-Brown exploiting double oscillations of characters [HB95,Corollary 4]. For such A, since i and j are linked we have (swapping i and j if necessary) where in the inner sum D i and D j are odd coprime integers with no further constraints, and g(D j ; (D k ) k =i,j ) is defined in the same way but with i and j switched. The coefficients f (D i ; (D k ) k =i,j ) and g(D j ; (D k ) k =i,j ) are complex numbers with absolute value < 1, so applying [FK07, Lemma 15] (with ǫ = 1/6) to the inner sum above, and summing over the remaining variables, gives Summing over each of the ≪ log(X) 24 possibilities for A we find (8.32) A satisfies (8.30) |S γ (λ, η, X, A)| ≪ X log(X) −1 .
8.6.6. Third family: one large and one small factor corresponding to linked indices. We introduce a further parameter X^‡ = exp(log(X)^ε) for fixed ε > 0 (to be chosen later). Note that for X sufficiently large we have X^‡ > X^†. The family of A we now consider is given by (8.33) Neither (8.29) nor (8.30) hold, and ∃ i ≠ j linked with 1 < A_j < X^† and A_i ≥ X^‡.
This section of the argument corresponds to the treatment of Fouvry-Klüners fourth family [FK07,Equations (43) to (47)], and we similarly obtain cancellation from the Siegel-Walfisz theorem. However, the conditions that the D i lie in the thin families F n i necessitate some changes and the resulting argument is modelled on [FK10, §7.5].
Fix such an A. In the definition of S γ (λ, η, X, A) we group all terms involving D i . Since η and λ divide N , for fixed (D k ) k =i there is a Dirichlet character where in the above we are using quadratic reciprocity for Jacobi symbols. From the definition of linked indices, writing d := d((D k ) k =i ) = k linked to i D k (which is at least 3 by assumption), we have (8.35) where in the inner sum D i is in F n i and is coprime to the D k in the outer sum, and ω(D i ) ≤ Ω. Now d is odd and coprime to N so is a primitive Dirichlet character modulo q for some q divisible by d, and dividing 4N d. In particular, 3 ≤ q ≪ (∆X † ) 7 since (8.30) does not hold. Replacing the inner sum in (8.35) with its maximum possible value we have where the maximum is taken over all 1 ≤ a ≤ X, all 3 ≤ q ≪ (∆X † ) 7 which contain at least one prime factor coprime to N , and all primitive Dirichlet characters χ modulo q.
Here the condition (a, D i ) = 1 takes care of the coprimality of D i with the remaining D k . We now partition the inner sum according to the number 1 ≤ l ≤ Ω of prime factors of D i , write D i = np where p is the largest prime factor of D i , and denote by P + (n) the largest prime factor of the remaining integer n, giving n ω(n)=l−1 max a,χ,q max(P + (n),A i /n)<p<∆A i /n (a,p)=1 p∈Pn i χ(p) , where we allow n to range over arbitrary positive integers with l − 1 factors. To treat the innermost sum, first note that we can drop the condition (a, p) = 1 at the expense of adding to its value. Next, since K/Q and Q( √ αβ)/Q ramify only at primes dividing N , a prime p is in P n i if and only if p (mod 4N ) lies in a certain subset of (Z/4N Z) × . In particular we may express the indicator function ½ P i as a finite sum s a s χ s where each χ s is a Dirichlet character modulo 4N , and the a s are real numbers. Since the modulus q of any χ appearing in (8.37) contains at least one prime not dividing N (coming from D j ), each χ s χ is a primitive Dirichlet character modulo q ′ for some 3 ≤ q ′ ≪ (∆X † ) 7 also. By the triangle inequality and [FK07, Lemma 13] (a consequence of the Siegel-Walfisz theorem) we conclude that for all constants A > 0 we have max a,χ,q max(P + (n),A i /n)<p<∆A i /n (a,p)=1 p∈Pn i χ(p) ≪ max a,χ,q max(P + (n),A i /n)<p<∆A i /n χ(p) + log(X) Now n has at most Ω prime factors, so the sum on the left of (8.38) is non-empty only if n ≤ ∆A We now insert this into (8.38), and insert the result into (8.37) and finally (8.36), to find Summing over the ≪ log(X) 24 possibilities for A and recalling that Ω ≪ log log(X), we find A satisfies (8.33) |S γ (λ, η, X, A)| ≪ X log(X) −1 provided A is chosen large enough (compared to ǫ).
Here the argument deviates significantly from that in [FK07]. Fix such an A, and define Recalling that X ‡ > X † (for sufficiently large X), it follows from the conditions on A that • I A is unlinked, • if j / ∈ I A is linked to an element of I A then A j = 1 (so in particular, if D j is such that A j ≤ D j ≤ ∆A j , then D j = 1). We begin by discarding as many options for I A as we can simply using the trivial bound Specifically, let I be any (possibly empty) set of unlinked indices, and let i 0 = |I ∩ {0, 1}|, i 1 = |{2, 3, 4, 5} ∩ I|, and i 2 = |I ∩ {6, 7}|. Then Here in the above sum, if i j = 0 then we interpret i ω j (m) j as being equal to 1 when m has no prime factors in P j . The right hand side is derived from the left by setting n = i / ∈I D i and m = i∈I D i . To treat the sum on the right hand side of (8.41) we apply Lemma 8.9. Here the argument diverges according to whether Q( √ αβ) ⊆ K or not. Since the former, somewhat degenerate, case is easier we make the following assumption, consigning the case Q( √ αβ) ⊆ K to Remark 8.48.
As before we may remove the condition (a, p) = 1 at the expense of an acceptable error term. To treat the condition that p ∈ P 2 , recall that P 2 is the set of primes coprime to N which are inert in K/Q. In particular, the indicator function ½ P 2 (p) is given by 1 2 (1− θ p ). Inserting this into the sum, we may apply [FK07, Lemma 13] as in §8.6.6 since λ = 1, θ means that both D → λ D and D → λθ D are non-principal. Continuing to argue as in §8.6.6 yields S γ (λ, η, X, A) ≪ X log(X) i 1 /4+i 2 γ/4−1+2ǫ .
Prime twists of the congruent number curve
In this section we prove Theorem 1.9. That is, we provide an example of a thin subfamily of quadratic twists for which the statistical behaviour of the 2-Selmer group differs from that of the family of all twists. In particular, there is a non-trivial Galois action in a positive proportion of cases so that, by Corollary 4.8, Sel C d (Q, E d [2]) is non-trivial for a positive proportion of d in our thin subfamily.
We restrict our quadratic field K = Q(√θ) to be an imaginary quadratic number field which has class number 1 and in which 2 is inert (so −θ ∈ {3, 11, 19, 43, 67, 163}). Write O_K for the ring of integers of K, and note that the only prime which ramifies in K is −θ. We take E : y² = x³ − x = x(x − 1)(x + 1) to be the congruent number curve. This has good reduction away from 2. Taking p ∤ 2θ to be a rational prime, we will explicitly describe the group Sel_2(E_p/K) as a G = Gal(K/Q)-module. For a place v of K, we will identify the local Kummer images S_v(E_p/K) of §2.4 with their image under the 2-descent map (7.5) (in our case, a_1 = 0, a_2 = 1, a_3 = −1), so that We view the Selmer group Sel_2(E_p/K) as a subgroup of (K^×/K^{×2})² similarly, noting that this identification respects the G-action.
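For orientation (a reconstruction under the conventions of §7, added here rather than quoted from the text): with a_1 = 0, a_2 = 1, a_3 = −1 the twist of E by a prime p is E_p : y² = x(x − p)(x + p), and relative to the basis (0, 0), (p, 0) of E_p[2] the 2-descent map sends the non-trivial 2-torsion points to
\[
(0,0) \longmapsto \bigl((0-p)(0+p),\, 0-p\bigr) = (-p^2, -p) \equiv (-1, -p), \qquad
(p,0) \longmapsto \bigl(p-0,\, (p-0)(p+p)\bigr) = (p, 2p^2) \equiv (p, 2),
\]
in (K^×/K^{×2})²; these classes reappear below in the proof of Proposition 9.3.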
For a vector space V and v_1, . . . , v_n ∈ V we write ⟨v_1, v_2, ..., v_n⟩ for the subspace generated by v_1, . . . , v_n. 9.1. 2-Descent. Our primary goal is to characterise the groups Sel_2(E_p/K) for p prime, which we do via 2-descent. We begin by identifying the local Kummer images at each prime.
In the case that p is split in K/Q, we will need to understand the image of the primes over p in the localisation at 2, for which we will use the following result. As in Lemma 9.1, p ∤ 2θ is a prime, and we denote by ζ a fixed primitive 3rd root of unity in K 2 . For x in K we denote its conjugate under the action of G asx.
Lemma 9.2. Suppose that p splits in K/Q, and write p = ǫǭ for some ǫ ∈ O_K. Then in K_2^× we have ǫ ≡ ±(ζ + 2 − p) (mod K_2^{×2}). (Since −1 is not a square in K_2, precisely one of these two possibilities occurs.) Proof. The ring of integers of K_2 is Z_2[ζ] and by Hensel's lemma, an element of Z_2[ζ]^× is a square if and only if it is a square modulo 8. Now using the fact that both 5 and ζ = ζ^4 are squares in K_2, we find that any element of Z_2[ζ]^×/Z_2[ζ]^{×2} can be written uniquely in the form a ± ζ for some a ∈ {±1, ±5} (in this representation, the trivial class is −1 − ζ = ζ²). Now writing ǫ (mod K_2^{×2}) in this form we find that, in K_2^×/K_2^{×2}, we have p = ǫǭ ≡ (a ± ζ)(a ± ζ²) = a² ∓ a + 1, so that p ≡ 2 ∓ a (mod 8). Thus a ≡ ±(2 − p) (mod 8) and the result follows.
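The claim that every unit class has exactly one representative of the shape a ± ζ can be checked by brute force modulo 8. The following short Python script is an added verification, not part of the paper; it uses the identity ζ² = −1 − ζ and the fact quoted in the proof that a 2-adic unit is a square if and only if it is a square modulo 8.

```python
# Brute-force check (added illustration): every unit of Z_2[zeta] is, modulo squares,
# equal to exactly one element a + e*zeta with a in {1, -1, 5, -5} and e in {1, -1}.
# We work in Z[zeta]/8; an element a + b*zeta is stored as the pair (a, b) with a, b
# taken modulo 8, using zeta^2 = -1 - zeta.

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % 8, (a * d + b * c - b * d) % 8)

# units of Z[zeta]/8: reduction mod 2 must be nonzero in F_4, i.e. a, b not both even
units = [(a, b) for a in range(8) for b in range(8) if a % 2 == 1 or b % 2 == 1]
squares = {mul(u, u) for u in units}

# sanity check used in the proof: 5 = (1 + 2*zeta)^2 is a square
assert mul((1, 2), (1, 2)) == (5, 0)

def same_class(x, y):
    # x and y agree modulo squares iff y = x * s for some square unit s
    return any(mul(x, s) == y for s in squares)

reps = [(a % 8, e % 8) for a in (1, -1, 5, -5) for e in (1, -1)]  # a + e*zeta
for u in units:
    assert sum(1 for r in reps if same_class(r, u)) == 1
print("each unit of Z[zeta]/8 is equivalent, mod squares, to exactly one form a +/- zeta")
```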
We are now ready to describe the Selmer groups. In the statement, all isomorphisms are as F 2 [G]-modules.
Proposition 9.3. Let p be an odd prime not dividing θ. Then (i) If p is inert in K/Q we have If p is split in K/Q and ǫ ∈ O K has norm p, we have Proof. Let p = 2 be inert in K/Q. Since E p has good reduction outside 2 and p, the 2-Selmer elements are units outside 2, p. As K has class number 1 we thus want to find all a i , b i ∈ {0, 1} for which lies in both of the local groups S p (E p /K) and S 2 (E p /K) described in Lemma 9.1. As K p /Q p is unramified of degree 2, both −1 and 2 are squares in K p . Thus all elements of the form (9.4) lie in S p (E p /K). We now apply the Selmer conditions at 2. Since p is odd we have p ≡ ±1 (mod K ×2 2 ). Consequently, a global element of the form (9.4) which lies in Sel 2 (E p /K) necessarily maps to the subspace of S 2 (E p /K) generated by T 1 = (−1, −p) and T 2 = (1, 2). Restricting to elements of the form (9.4) which do map to this space gives Sel 2 (E p /K) = (p, 2), (−1, −p), (1, (−1) δ p), ((−1) δ p, 1) ∼ = F 4 2 where δ = 1 if p ∈ K ×2 2 and δ = 0 otherwise. Now suppose p splits in K/Q, and fix ǫ ∈ K × such that ǫǭ = p. As above, the 2-Selmer elements are unramified outside {2, ǫ,ǭ}, so we want to find all a i , b i ∈ {0, 1} for which (9.5) ((−1) a 1 2 a 2 ǫ a 3ǭ a 4 , lies in each of the groups S ǫ (E p /K), Sǭ(E p /K) and S 2 (E p /K) described in Lemma 9.1. This is an elementary computation, which we do by treating each possibility for p (mod 8) separately. We repeat the local Kummer images from Lemma 9.1: (1, 2) , (ζ + 3, ζ + 3(1 + p)) , (1, 4ζ + 5) Sǭ(E p ) = (−1, −ǫǭ), (ǫǭ, 2) .
We now break into cases. p ≡ −1 (mod 8) : Here −1 is nonsquare in K ǫ . Replacing ǫ with −ǫ if necessary, we assumeǭ ∈ K ×2 ǫ . Note also that 2 is a square in K ǫ . By symmetry, this gives 2, ǫ ∈ K ×2 ǫ . The elements of the form (9.5) which lie in S ǫ (E p /K) are then those of the shape Reducing further to those that satisfy the conditions of Sǭ(E p ) we are left with elements of the shape (9.6) (−1) a 1 2 a 2 ǫ a 3ǭ a 4 , (−ǫǭ) a 1 2 b 2 .
p ≡ 5 (mod 8) : Here −1 is square in both K ǫ and Kǭ, and 2 is a nonsquare unit in both K ǫ and Kǭ. We now split into two cases according to whetherǭ is in (K × ǫ ) 2 . To capture this, we fix δ = 1ǭ ∈ K ×2 ǫ 0 else.
As the first coordinate of each of these basis elements has trivial valuation, we have a 2 = 0.
Remark 9.10. The proof of part (i) shows that the conditions at inert primes impose no restrictions. Using this observation, one sees similarly that if d is odd and divisible only by inert primes, then This gives a concrete instance of the growth of Sel 2 (E d /K) seen also in e.g.Proposition 5.6. 9.2. Statistics. Here we use Rédei symbols alongside the Chebotarev density theorem to determine the statistical behaviour of Sel 2 (E p /K) from Proposition 9.3. We refer the reader to [Ste18] for definitions concerning Rédei symbols.
Lemma 9.11. Let p ≡ 1 (mod 8) be a prime which splits in K/Q, and let ǫ ∈ O K have norm p. Thenǭ ∈ (K × ǫ ) 2 if and only if the Rédei symbol [θ, −θ, p] is trivial. Proof. Note that −1 is a square in K ǫ since p ≡ 1 (mod 8). In particular, the statement is unchanged upon replacing ǫ with −ǫ. By Lemma 9.2 we may thus assume that we havē ǫ ≡ −(ζ + 1) = ζ 2 ≡ 1 (mod K ×2 2 ). Now consider the diagram of fields Since ǫ ramifies in L/K, we see thatǭ ∈ (K × ǫ ) 2 if and only if the unique prime of L lying over ǫ splits in F/L. Let p denote the unique prime of K ′ lying over p. Since p splits in K/Q, we see that p splits in L/K ′ . Further,ǭ ramifies in L/K and hence has even valuation (either 0 or 2) at any prime p ′ | p of L. In particular, the extension F = L( √ǭ )/L is unramified at such p ′ . Thus F/K ′ is unramified at p. We now conclude thatǭ ∈ (K × ǫ ) 2 if and only if the Artin symbol F/K ′ p is trivial. Before relating this to a Rédei symbol, it will be useful to prove the following two claims.
Claim 1: The field F/K ′ is everywhere unramified. That F ′ /K ′ is unramified at primes not dividing 2pθ is clear, and we have already shown that the unique prime of K ′ dividing p is unramified in F ′ /K ′ . For primes over 2 note that K and K ′ are unramified at 2, and so L/Q is unramified at 2 also. Further, having chosenǭ to be a square in K 2 , the extension K( √ǭ )/K is split at 2. Thus, as the compositum of K( √ǭ ) and L, the full extension F/Q is unramified at 2. Now note that ℓ = −θ is an odd prime. Since p has trivial l-adic valuation, the extension F = K ′ ( √ p, √ ǫ)/K ′ is unramified at (the unique prime of K ′ over) l. This proves the claim.
Claim 2: For each prime q, the Hilbert symbols (p, θ) q and (p, p) q are trivial. By assumption, p is a norm from K = Q( √ θ), so that (p, θ) q is trivial for all q. Next, for each prime q we have (p, p) q = (p, −1) q . That this latter symbol is trivial for q = 2, p is immediate, whilst for q = 2, p it is trivial since p ≡ 1 (mod 8). This proves the claim.
Returning to the proof, by Claim 2 the Rédei symbol [θ, p, p] exists (see [Ste18,Definition 7.8]). Writing ǫ = x + y √ θ for x, y in Q, we have x 2 − θy 2 = p by assumption. The field F is then given by adjoining to L the element Further, by Claim 1 the extension F/K ′ is minimally ramified in the sense of [ This allows us to give a complete statistical description of the F 2 [G]-module Sel 2 (E p /K). First we introduce some notation.
Notation 9.12. For p a prime, we define non-negative integers e 1 (E p /K) and e 2 (E p /K) such that we have a G-module isomorphism Sel 2 (E p /K) ∼ = F e 1 (Ep/K) 2 ⊕ F 2 [G] e 2 (Ep/K) .
Proof. As a consequence of Lemma 9.11, and the Chebotarev density theorem applied to Proposition 9.3, it suffices to show that [θ, −θ, p] is trivial for precisely half of the primes p ≡ 1 (mod 8) which split in K/Q (with respect to the natural density). Fix a prime p ∤ 2θ. In the notation of [Ste18, Definitions 7.6, 7.8], let F θ,−θ be minimally ramified over Q( √ θ, √ −1), so that by definition the Rédei symbol [θ, −θ, p] is equal to the Artin symbol (9.14) where p is any ideal of Q( √ −1) of norm p. The field F θ,−θ is a cyclic degree 4 extension of Q( √ −1) fitting into the diagram below. It is dihedral of degree 8 over Q and contains Q( √ θ, √ −1) as a subfield.
The field F θ,−θ (ζ 8 )/Q is Galois of degree 16. Now p both splits in K/Q and is congruent to 1 modulo 8 if and only if it splits completely in Q( √ θ, √ −1, √ 2) = Q( √ θ, ζ 8 ). On the other hand, the Artin symbol (9.14) is trivial if and only if p splits completely in F θ,−θ .
Consequently, we wish to compute the density of primes which split completely in F θ,−θ (ζ 8 ), amongst those that split completely in Q( √ θ, ζ 8 ). By the Chebotarev density theorem, this is equal to 1/2.
A new species of hermit crab, Diogenes heteropsammicola (Crustacea, Decapoda, Anomura, Diogenidae), replaces a mutualistic sipunculan in a walking coral symbiosis
Symbiont shift is rare in obligate mutualisms because both the partners are reciprocally dependent on and specialized to each other. In the obligate accommodation–transportation mutualism between walking corals and sipunculans, however, an unusual saltatory symbiont shift was discovered. In shallow waters of southern Japan, an undescribed hermit crab species was found living in corallums of solitary scleractinian corals of the genera Heterocyathus and Heteropsammia, replacing the usual sipunculan symbiont. We described the hermit crab as a new species Diogenes heteropsammicola (Decapoda, Anomura, Diogenidae), and explored its association with the walking corals. This hermit crab species obligately inhabits the coiled cavity of the corals, and was easily distinguished from other congeneric species by the exceedingly slender chelipeds and ambulatory legs, and the symmetrical telson. Observations of behavior in aquaria showed that the new hermit crab, like the sipunculan, carries the host coral and prevents the coral from being buried. This is an interesting case in which an organism phylogenetically distant from Sipuncula takes over the symbiotic role in association with a walking coral. The hermit crab species is unique in that its lodging is a living solitary coral that grows with the hermit crab in an accommodation–transportation mutualism.
Introduction
Obligate mutualism is a climax of co-evolution between symbiotic organisms. Because both partners of an obligate mutualism are reciprocally dependent on and specialized to each other, symbiont shift only rarely occurs. However, rare symbiont shifts often result in reciprocal diversification of the partners, especially in obligate pollination mutualisms between fig and fig wasp [1][2] and between leafflower and leafflower moth [3][4][5]. In these cases, host shifts have occurred only between phylogenetically related partners, and saltatory host shift between phylogenetically distant taxa is not known. We report here a novel saltatory host shift of partners in an obligate mutualism. In marine ecosystems, the symbiosis between walking corals and sipunculans is a well-known example of obligate mutualism, where the partners respectively offer accommodation and transportation. Solitary scleractinian corals of the genera Heterocyathus and Heteropsammia inhabit marine soft bottoms without attaching to hard substrata. Their corallums each contain a coiled cavity inhabited by a sipunculid worm [6][7][8][9][10]. Observations of internal corallum structure suggest that a larval coral initially settles on a small gastropod shell that has already been colonized by a sipunculan, and that the coral grows over and ultimately beyond the shell, providing a coiled cavity for the equally growing worm partner. The coral serves as a sturdy shelter, protecting the worm against possible predators. Furthermore, the worm can roam the seafloor, carrying the host coral with it [11][12][13][14]. If the coral is overturned by water currents or buried by sedimentation, the worm assists the coral in recovering its upright position on the sea floor. Thus, the association between coral and sipunculan has been described as mutualistic [15][16]. However, the symbiotic sipunculan may be replaced by a hermit crab [10][17][18]. This observation raises the intriguing question whether the hermit crab is an alternative symbiont in the coral-sipunculan association.
During our survey of benthic fauna in Oshima Strait, north of Kakeroma Island, Ryukyu Islands, southern Japan, we found that some solitary scleractinian corals of the genera Heterocyathus and Heteropsammia were inhabited by hermit crabs, rather than by sipunculans. Morphological observations revealed that the hermit crab is a new species of the genus Diogenes Dana, 1851 [19], although all other known species of the genus are known to inhabit only molluscan shells.
The genus Diogenes belongs to the family Diogenidae and is typically characterized by the possession of an intercalary rostriform process flanked by ocular acicles, though several species exhibit a tendency for reduction of this process, e.g., "Troglopagurus group" [20]. Currently, 66 species are recognized worldwide, 63 of which are found in the Indo-West Pacific region [21][22][23][24][25][26][27]. Our morphological observations suggest that the corallum-inhabiting hermit crab belongs to the D. edwardsii species group as defined by Asakura and Tachikawa [28], which is characterized by an intercalary rostriform process unarmed on the lateral margins, an antennal peduncle distinctly longer than the ocular peduncle, and an antennal flagellum with a pair of long setae on the distal margin of each article ventrally. At present, the D. edwardsii species group includes 29 species from the Indo-West Pacific [22][23][24][25][26][27][28][29].
We carried out morphological, ecological, and behavioral observations of the unusual corallum-inhabiting hermit crab. In this paper, we describe and illustrate the hermit crab as a new species. In addition, we tested the hypothesis that the association between the hermit crab and the solitary coral is the same type of accommodation-transportation symbiosis observed between sipunculan and the coral through behavioral observations in aquaria. The study has shed light on how the saltatory symbiont shift from the sipunculan to the hermit crab has occurred.
Sampling and observation
In our preliminary survey of walking corals in shallow waters around Kochi Prefecture, and the Amami and Okinawa Islands in southern Japan, walking corals collected from the Amami Islands were the only ones sometimes inhabited by hermit crabs instead of sipunculans. Thus, we carried out extensive sampling of the walking corals with their symbiotic sipunculans and hermit crabs in Oshima Strait, Amami-Oshima Island. Permits for the research were obtained from the Forestry and Fisheries Promotion Division, Oshima Branch Office, Kagoshima Prefecture. Corals of the genera Heterocyathus and Heteropsammia are not endangered or protected. We dredged soft bottoms (28˚12'N, 129˚14'E, 40-80 m deep) using a small dredge (RIGO, Tokyo, Japan; mouth size = 40 × 15 cm, mesh size = 2 mm). We collected 3 Hc. alternatus and 19 Hp. cochlea individuals between 2012 and 2016, some of which were inhabited by hermit crabs. For observations of behavior, some hermit crab-inhabited corals were reared in aquaria with substrata collected by dredging. All specimens of the new hermit crab species were found in walking corals. All specimens were ultimately preserved in 99% ethanol.
Material examined in this study is deposited in the National Museum of Nature and Science (NSMT), Tokyo, Japan, and the Kyoto University Museum (KUZ). General hermit crab terminology follows McLaughlin et al. [30]. Shield length (sl), measured from the tip of the rostrum to the midpoint of the posterior margin of the shield, is used as an indicator of specimen size.
Nomenclatural acts
The electronic edition of this article conforms to the requirements of the amended International Code of Zoological Nomenclature (ICZN), and hence the new species name contained herein is available under that Code from the electronic edition of this article. This published work and the nomenclatural acts it contains have been registered in ZooBank, the online registration system for the ICZN. The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix "http://zoobank.org/". The LSID for this publication is: urn:lsid:zoobank.org:pub:2115C9F7-4521-4966-A226-07A1B456713B. The electronic edition of this work was published in a journal with an ISSN, and has been archived and is available from the following digital repositories: PubMed Central, LOCKSS.

Taxonomic account

Description. Shield (Fig 1A) slightly longer than broad, anterior margin between rostrum and lateral projections concave, anterolateral margins sloping, posterior margin roundly truncate, dorsal surface with few short setae. Rostrum short, broad, obtusely triangular, not produced beyond lateral projections; lateral projections triangular, acutely pointed.
Ocular peduncles (including corneas) (Fig 1A) about 0.8 times as long as shield, moderately stout, slightly inflated basally; corneas not dilated, corneal width about 0.3 of length of ocular peduncle; ocular acicles each with straight mesial margin, anterolateral margin nearly straight or slightly convex, bearing 3 or 4 minute or small spines decreasing in size laterally and not extending to entire length of lateral margin. Intercalary rostriform process not reaching anterior apexes of ocular acicles, slightly broadened medially, tapering acutely.
Antennular peduncles (Fig 1A) overreaching distal corneal margins by entire length of ultimate segment. Ultimate segment 4.0 times longer than distal width, subequal in length to penultimate segment, with some long setae on dorsal surface. Ultimate, penultimate and basal segments unarmed.
Antennal peduncles (Fig 1A) overreaching distal corneal margins by 0.5-0.6 length of fifth segment. Fifth segment with row of moderately short stiff setae on ventral surface. Fourth and third segments unarmed. Second segment with strong spine at dorsolateral distal angle. First segment unarmed. Antennal acicles short, each with a single spine slightly proximal to median part of mesial margin, terminating in slender spine. Antennal flagellum (Fig 1B) about 3.0 times longer than shield, articles with paired long setae.
Left cheliped (Fig 2) elongate and slender; moderately setose (setae on merus and ischium plumose, those on carpus to chela simple). Chela 2.3-2.6 times longer than maximum width. Dactylus about half length of palm, curved distally and overlapped by fixed finger; mesial margin with double row of spines; dorsal surface with scattered tubercles; cutting edge sinuous, with row of blunt calcareous teeth. Palm subequal in length to carpus; dorsal surface with small tubercles and with double row of blunt spines along dorsomidline, dorsomesial margin with row of small spines. Fixed finger with scattered, small tubercles or spines on dorsal surface, dorsolateral margin with row of small spines; ventral surface with scattered tubercles proximally; cutting edge sinuous, with row of blunt calcareous teeth. Carpus 1.8-2.4 times longer than maximum width, with single row of spines increasing in size distally on dorsal margin; dorsodistal margin with 2 spines or unarmed (smaller individual); mesial surface with few spinules or unarmed. Merus distinctly longer than high, dorsal surface rounded. Ischium unarmed. Right cheliped (Fig 3) slender, setose (setae on merus and ischium plumose, those on carpus to chela simple). Chela 3.5-5 times longer than wide; fingers crossing distally, with broad hiatus. Dactylus gently arched, about 0.6-0.9 times longer than palm; dorsal surface with single row of small spines along dorsomidline; mesial surface with several short setose ridges; ventral surface unarmed. Palm about 0.7-0.8 times longer than carpus; lateral and ventral surface with several short, setose ridges. Fixed finger gently curved, with some small spines on dorsal surface proximally or unarmed. Carpus with small spine on dorsodistal margin. Merus and ischium unarmed.
Ambulatory legs (Figs 4 and 5) long, slender, with moderately dense, long setae on dorsal and ventral margins (setae on carpi, meri, and ischia plumose, those on propodi to dactyli simple); third pair slightly longer than second pair. Dactyli 1.2-1.3 times longer than propodi and 10.5-12.0 times longer than high, each terminating in sharp, corneous claw; ventromesial margin unarmed. Propodi nearly straight or faintly ventrally curved, 1.6-1.8 times longer than carpi and about 6.0 times longer than high. Carpi 0.7-0.9 length of meri, each with small dorsodistal spine on dorsal margin. Meri and ischia unarmed.
Pleon curved. Male with unpaired, uniramous, left second to fifth pleopods. Female with unpaired, left second to fifth pleopods, second to fourth unequally biramous, fifth uniramous.
Coloration in life. Shield generally white. Ocular peduncles maroon red. Antennular peduncles with ultimate and penultimate segments translucent, basal segment maroon red. Antennal peduncles with first to third segments maroon red, fourth and fifth segments translucent. Chelipeds generally white; merus of left cheliped with tinge of reddish brown on mesial face; dactylus, palm, and fixed finger of right cheliped with tinge of reddish brown or maroon red on dorsal faces. Ambulatory legs generally maroon red, dactyli distally white.
Remarks. Diogenes heteropsammicola sp. nov. belongs to the D. edwardsii species group because of the intercalary rostriform process being smooth on the lateral margins, the antennal peduncle distinctly overreaching the distal corneal margin, and the antennal flagellum bearing a pair of long setae on the distal margin of each article ventrally [28]. The new species is readily distinguished from all other species in this group by its exceedingly slender chelipeds and ambulatory legs, its symmetrical telson, red and white coloration, and the unique symbiotic habit with solitary corals.
Etymology. The new species is named after its mutualistic relationship with the solitary scleractinian corals of the genera Heteropsammia, keeping in mind that this hermit crab is also associated with Heterocyathus corals.
Ecological and behavioral account
Diogenes heteropsammicola sp. nov. was obtained from shallow waters (depth of ca. 60-80 m) in Oshima Strait, where the periodic tidal current is strong even near the bottom. The bottom sediment was shelly sand, including numerous fragments of bryozoans, foraminiferans, and molluscan shells. The sand was also inhabited by lancelets.
The hermit crab was found only in the coiled cavity within solitary scleractinian corals of the genera Heterocyathus and Heteropsammia, and was not found in empty and naked gastropod shells. Observations in aquaria demonstrated that the hermit crab was ambulatory while carrying the host coral (Fig 6A). When the coral was overturned, the hermit crab leaned out of the corallum cavity to grasp the bottom with its long ambulatory legs and left cheliped, and then turned the coral to an upright position using the pleon (Fig 7A-7C). When the coral was buried, the hermit crab pushed away the sediment using its chelipeds and ambulatory legs, and then crawled away while still in the coral (Fig 7D-7F).
When feeding, the hermit crab filtered organic particles using the antennal flagella and third maxillipeds. The hermit crab worked the antennae in a circular motion, in which the right antenna made clockwise turns and the left antenna made counterclockwise turns. As a result, the hermit crab appeared to create an upward current near the third maxillipeds. If organic particles attached to the antennal flagellum, the hermit crab swept it clean using the third maxillipeds.
Some female specimens carried eggs that were red and subspherical, with major and minor axis diameters of 0.5 and 0.4 mm, respectively. One female had 72 eggs on its pleopods. This hermit crab is sympatric with the sipunculan symbiont of walking corals. Among 22 walking corals collected from shallow waters of Oshima Strait between the years 2012 and 2016, 12 (about 55%) were inhabited by hermit crabs, and the other 10 by sipunculans (Fig 8). There were no differences in the structures of the corallum cavities between the hermit crab- and the sipunculan-occupied corals. In Oshima Strait, Heteropsammia cochlea was thought to be more abundant than Heterocyathus alternatus because the former was captured more frequently by dredging.
Discussion
In contrast with other congeneric hermit crab species, this new species has a unique lodging that replaces the usual gastropod shell with a living solitary coral that grows with the hermit crab. In the environment, the two studied coral species were also observed to be symbiotic with sipunculans (Fig 8). Because there were no morphological differences between the conspecific corals inhabited by the sipunculan and hermit crab, individuals of both coral species can be symbiotic with either the sipunculan or hermit crab. The fact that the hermit crab is found only in association with the corals but never in naked molluscan shells suggests that the hermit crab is obligately associated with the coral, whereas the coral is not obligately associated with the hermit crab. Within the Indo-Pacific range where these walking corals live, hermit crab-inhabited corals are currently known only from the Amami Islands.
Observations of behavior in aquaria (Fig 7) have shown that the hermit crab transports its symbiotic coral and rescues the coral from being overturned or buried, just as the symbiotic sipunculan does, suggesting that the coral-hermit crab association is the same accommodation-transportation symbiosis observed between the coral and sipunculan. This case is interesting in that an animal species phylogenetically distant from the original symbiont takes over its role transporting the host coral.
Although both the hermit crab and sipunculan play similar roles in transportation and rescue of the host coral, their feeding habits differ. The hermit crab is a suspension or filter feeder, whereas the sipunculan is a detritus feeder found in muddy substrates, suggesting that the microhabitat of each symbiosis may be different.
Furthermore, the question of whether the sipunculan is the original symbiont of the coral in the accommodation-transportation mutualism is intriguing. The complete fit of the sipunculan's body within the corallum cavity [31] and the dominance of sipunculans over the entire distributional range of the corals suggest that the sipunculan is the original symbiont and that the hermit crab is a secondary alternative symbiont. Although little is yet known about the early stages of these symbioses, both the sipunculans and hermit crabs are thought to start their juvenile life by inhabiting vacant minute gastropod shells [15][16].
The extremely slender body of D. heteropsammicola sp. nov. is considered to be an adaptation to life in the narrow, coiled cavity of the walking coral. The corallum cavity fits the slender body of the symbiotic sipunculan and is narrower and more loosely coiled than that of the gastropod shells utilized by most other hermit crabs. Accordingly, D. heteropsammicola has likely evolved its slender body to fit the narrow cavity.
The symmetrical telson of this hermit crab is unique among species of Diogenes. The symmetry of the telson may be due to the unusual habitat of this species. Because most gastropod shells are coiled dextrally, most gastropod shell-inhabiting hermit crabs have asymmetrical telsons to fit into the dextral shells. On the other hand, the corallum cavity may be coiled dextrally or sinistrally; 25% and 75% of the walking corals inhabited by the hermit crab were dextral and sinistral, respectively (Fig 9). The coexistence of dextral and sinistral corals may be related to the symmetry of the telson.
Why has this saltatory symbiont shift occurred in this accommodation-transportation mutualism? This symbiosis is unique because two different coral species (Hc. aequicostatus and Hp. cochlea in Okinawa Island) share two clades of Aspidosiphon sipunculans as symbionts [31]. A shift in symbiont acquisition strategy by the corals has allowed the hermit crab to take over the transportation role of the usual sipunculan partners. Our data suggest that the corals are obligately symbiotic with sipunculans and this hermit crab, both of which are also obligately symbiotic with the two genera of the corals. By becoming symbiotic with the corals, the hermit crab has probably gained extra security through protection by coral nematocysts and its more permanent lodging means that it no longer needs to change shells as it grows in size.
Household member substance problems and children's health in the United States
A sizable number of children are exposed to household member substance problems, an adverse childhood experience (ACE), yet little research uses a nationally representative sample of U.S. children to examine this association. We used newly released data from the 2016 National Survey of Children's Health (NSCH), a nationally representative sample of noninstitutionalized children in the United States, and logistic regression models to investigate the relationship between household member substance problems and 14 indicators of children's health. We find 9.0% of children in the United States have experienced household member substance problems. We also find children exposed to household member substance problems are more likely to have health problems than children not exposed to household member substance problems, but that most of these descriptive differences can be explained by household characteristics and other ACEs. Children exposed to household member substance problems are a vulnerable population. Given that household member substance problems are concentrated among socioeconomically disadvantaged children, children at a greater risk of health problems than their counterparts, this ACE may exacerbate existing socioeconomic inequalities in children's health.
Introduction
A substantial proportion of the U.S. population has a substance use disorder, defined as clinically significant impairments resulting from the use of drugs or alcohol (Lipari & Van Horn, 2017a, 2017b). In 2016, approximately 20.1 million individuals over age 12 had a substance use disorder, including 7.4 million (2.7% of the population) with an illicit drug use disorder and 15.1 million (5.6% of the population) with an alcohol use disorder (Lipari & Van Horn, 2017a, 2017b). Accordingly, a sizable number of children reside with individuals who have impairments from drug or alcohol use. About one in eight children (8.7 million) live with at least one parent suffering from a substance use disorder (Lipari & Van Horn, 2017a, 2017b). Household member substance problems is an adverse childhood experience (ACE) that may be consequential for children's health (Dube, Felitti, Dong, Files, & Anda, 2003; Felitti, 2009; Peleg-Oren & Teichman, 2006; Schilling, Aseltine, & Gore, 2007). The stress process perspective provides a theoretical lens for understanding the relationship between household member substance problems and children's health. This perspective suggests that stressors, such as household member substance problems, are concentrated among vulnerable populations and that these stressors can have deleterious consequences for physical and mental health (Pearlin, 1989; Pearlin, Menaghan, Lieberman, & Mullan, 1981).
Moreover, the stress process perspective suggests that stressors can proliferate across individuals, with the stressor experienced by one individual having deleterious consequences for the physical and mental health of those connected to the individual (Pearlin, Aneshensel, & LeBlanc, 1997). Children are especially important to the stress process perspective (Avison, 2010). Substance use problems can strain household economic resources (Rehm et al., 2009), fracture relationships between family members (Whisman, 1999), and have corresponding physical and mental health problems (Kessler, Chiu, Demler, & Walters, 2005;Whiteford et al., 2013), all of which are social determinants of children's health (Bloom, Cohen, & Freeman, 2009;Hardie & Landale, 2013;McLeod & Shanahan, 1993;Turney, 2011;Turney & Hardie, 2017). Therefore, in accordance with the stress process perspective, the stressor of household member substance problems may be associated with health impairments among children.
Despite good reasons to expect an association between household member substance problems and impairments in children's health, little research considers the health disparities between U.S. children exposed and not exposed to household member substance problems (Berg, Bäck, Vinnerljung, & Hjern, 2016; Jääskeläien, Holmila, Notkola, & Raitasalo, 2016; Kandel, 1990; Osborne & Berger, 2009; Raitasalo & Holmila, 2017; Thompson, Alonzo, Hu, & Hasin, 2017; Zebrak & Green, 2016). This is an important oversight given the sizable number of children exposed to household member substance problems, many of whom experience other vulnerabilities. Existing research generally finds that parental substance problems are linked to poor health in children. For example, one study used data from the Fragile Families and Child Wellbeing Study, a cohort of U.S. urban children born to mostly unmarried parents, and found that parental substance abuse is associated with an increased risk of behavioral problems and fair or poor overall health in children. This study also found that these associations are strongest when both parents abuse substances (Osborne & Berger, 2009). Another study, using data from a prospective cohort study of African Americans, finds that parental alcohol use is associated with problem behaviors in adolescence (Zebrak & Green, 2016). Research examining children in countries outside the United States comes to similar conclusions about the deleterious consequences of parental or household member substance problems (Berg et al., 2016; Jääskeläien et al., 2016; Raitasalo & Holmila, 2017).
In this article, we use newly released data from the 2016 National Survey of Children's Health (NSCH) to provide one of the first examinations of the association between household member substance problems and children's health among a nationally representative sample of non-institutionalized children in the United States. We examine three global indicators (fair or poor overall health, activity limitations, chronic school absence) and nine specific indicators (including mental health [e.g., depression, anxiety] and physical health [e.g., asthma, obesity]) of children's health. We also examine two summary measures of children's health, one that indicates the child currently has any of the nine specific health conditions and another that indicates the number of specific health conditions. Our analyses isolate the relationship between household member substance problems and children's health by adjusting for a large number of child, parent, and household characteristics including poverty and exposure to other ACEs (e.g., parental divorce or separation, parental incarceration). Taken together, these analyses complement and extend prior research on this topic by using a nationally representative sample of U.S. children, by considering an array of general and specific measures of children's health, and by adjusting for factors that might render the relationship between household member substance problems and children's health spurious.
Participants
We estimate the relationship between household member substance problems and children's health using the newly released 2016 National Survey of Children's Health (NSCH), a cross-sectional survey funded and directed by the Health Resources and Services Administration Maternal and Child Health Bureau (HRSA MCHB). These data comprise a nationally representative sample of 50,212 non-institutionalized children ages 0 to 17. Between June 2016 and February 2017, researchers used the Census Master Address File to identify eligible households and, within each household, identified a focal child. Eligible households were first asked to participate in a web-based survey and, among those that did not respond to the first two web survey invitations, were then asked to participate in a mailed paper survey. The majority (80.6%) of completed surveys were web-based surveys. The overall weighted response rate was 40.7%. An analysis of non-response bias finds "no strong or consistent evidence of nonresponse bias" and our analyses employ survey weights to adjust for non-response (Census Bureau, 2017; Child and Adolescent Health Measurement Initiative Data Resource Center for Child and Adolescent Health, 2016).
Outcome variables
The outcome variables include 14 indicators of children's health, all reported by the parent respondent (the focal child's mother in 63% of observations). First, these include three global measures of children's health: (1) fair or poor overall health (compared to good, very good, or excellent health); (2) activity limitations (1 = limited or prevented in ability to do things most same-age children can do because of medical, behavioral, or other health condition); and (3) chronic school absence (1 = missed 11 or more school days because of illness or injury in past year). Fair or poor overall health provides an indication of the child's overall health. Though parent responses are skewed toward reporting favorable health, and the validity of parent-reported health is not understood in the same way as adult self-reported health, prior research collapses parent-reported fair and poor overall health (Bzostek & Beck, 2011;Case, Lubotsky, & Paxson, 2002;Chen, Martin, & Matthews, 2006;Idler & Benyamini, 1997;Turney, 2013). Additionally, analyses that instead estimate good, fair, or poor overall health (compared to very good or excellent health) and analyses that use the full distribution of response produce estimates consistent with those presented. Activity limitations and chronic school absence provide an indication of how the child's health interferes with his/her life.
Second, the outcome variables include nine specific measures of current health conditions: (1) learning disability; (2) Attention Deficit Disorder/Attention Deficit Hyperactivity Disorder (ADD/ADHD); (3) depression; (4) anxiety problems; (5) behavioral or conduct problems; (6) developmental delay; (7) asthma; (8) obesity; and (9) speech or other language disorder. These measures were chosen because they are all relatively common childhood health conditions that have been at least partly attributed to the social environment. Third, the outcome variables include two summary measures of children's health: (1) a binary variable indicating the child had any of the nine current health conditions and (2) a count variable indicating the number of current health conditions the child had (ranging from 0 to 9).
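To make the construction of these two summary measures concrete, the sketch below derives them from nine hypothetical 0/1 condition indicators in a pandas data frame; the column names and the toy rows are illustrative only and are not the NSCH variable names.

import pandas as pd

# Hypothetical 0/1 indicators for the nine current health conditions (toy data)
conditions = [
    "learning_disability", "add_adhd", "depression", "anxiety",
    "behavior_conduct", "developmental_delay", "asthma", "obesity",
    "speech_language",
]
df = pd.DataFrame(
    [[0] * 9, [1, 0, 0, 1, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0]],
    columns=conditions,
)

# Summary measure 1: any of the nine conditions (binary)
df["any_condition"] = (df[conditions].sum(axis=1) > 0).astype(int)

# Summary measure 2: number of conditions (count, ranging 0 to 9)
df["n_conditions"] = df[conditions].sum(axis=1)
print(df[["any_condition", "n_conditions"]])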
Explanatory variable
A binary variable indicates exposure to household member substance problems. Parent respondents were asked to report on one indicator of household member substance problems: "To the best of your knowledge, has this child ever … lived with anyone who had a problem with alcohol or drugs?" Below we include a discussion of the limitations of this measure.
Control variables
The multivariate analyses adjust for three sets of control variables, as it is important to isolate the relationship between household member substance problems and children's health (Berg et al., 2016;Jääskeläien et al., 2016;Osborne & Berger, 2009). The first set includes demographic characteristics: child age, child gender (1 = female), child born low birth weight, child race/ethnicity (White [non-Hispanic], Black [non-Hispanic], Hispanic, other race [non-Hispanic]), child immigrant status (1 = first-or second-generation immigrant), household language (1 = English), mother's age (29 years or younger, 30-39 years, 40-49 years, 50 years and older), and parent's educational attainment (less than high school, high school diploma, some college, Bachelor's degree and higher).
The second set includes additional household characteristics: parent's marital status (1 = married), parent's employment status (1 = employed at least 50 of the last 52 weeks), mother's health (1 = fair or poor overall health), household member welfare receipt, household member WIC receipt, household income below the federal poverty line, household member smokes inside the home, and neighborhood safety (1 = neighborhood always safe for child).
The third set includes eight additional ACEs: (1) parental incarceration; (2) parental divorce or separation; (3) parental death; (4) witness of household member abuse (i.e., child saw or heard parents or adults slap, hit, kick, or punch one another); (5) violence exposure (i.e., child was a victim of violence or witnessed neighborhood violence); (6) household member mental illness; (7) racial discrimination; and (8) income difficulties (i.e., it was very often hard to get by on the family's income since the child was born). See Appendix Table 1 for a correlation matrix of ACEs.
Statistical analyses
The analyses proceed in two stages. The first analytic stage presents frequencies of the outcome variables by household member substance problems. Chi-square tests examine whether the differences among children exposed and not exposed to household member substance problems are statistically significant. The second analytic stage presents results from regression models that estimate each of the dependent variables as a function of household member substance problems. Logistic regression models estimate all dependent variables except for number of health conditions, which is estimated with a negative binomial regression model. Model 1 adjusts for a limited set of variables. Model 2 adjusts for additional parent and household variables. Model 3 further adjusts for ACEs to isolate the independent association between household member substance problems and children's health. By including a relatively limited set of control variables, Model 1 provides an upper-bound estimate of the association between household member substance problems and children's health. Conversely, by including an extended set of control variables that may precede or follow household member substance problems, Model 2 and especially Model 3 provide a lower-bound (and conservative) estimate of this association. We also estimate the association between household member substance problems and children's health separately for the following three groups: (1) children ages 0 to 5, (2) children ages 6 to 11, and (3) children ages 12 to 17. These models adjust for child, parent, and household control variables (the equivalent of Model 2 in the estimates of the full sample).
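The authors fit these models in Stata with survey weights; purely as an illustrative analog, and not the authors' code, the sketch below fits one outcome with nested covariate sets as weighted logistic regressions using Python's statsmodels on simulated data. The variable names and the toy data generation are assumptions, and frequency-weighted GLM reproduces weighted point estimates but not full design-based standard errors, which would require dedicated survey procedures.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
# Toy stand-in for the analysis file: exposure, a few covariates, an outcome, a weight
df = pd.DataFrame({
    "substance_problem": rng.binomial(1, 0.09, n),
    "child_age": rng.integers(0, 18, n),
    "female": rng.binomial(1, 0.49, n),
    "below_poverty": rng.binomial(1, 0.22, n),
    "wt": rng.uniform(0.5, 2.0, n),
})
logit = -2.5 + 0.7 * df["substance_problem"] + 0.5 * df["below_poverty"]
df["anxiety"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Nested specifications, mimicking the Model 1 / Model 2 structure described above
specs = {
    "Model 1": "anxiety ~ substance_problem + child_age + female",
    "Model 2": "anxiety ~ substance_problem + child_age + female + below_poverty",
}
for name, formula in specs.items():
    fit = smf.glm(formula, data=df, family=sm.families.Binomial(),
                  freq_weights=df["wt"]).fit()
    odds_ratio = np.exp(fit.params["substance_problem"])
    ci = np.exp(fit.conf_int().loc["substance_problem"])
    print(f"{name}: OR = {odds_ratio:.2f}, 95% CI = ({ci.iloc[0]:.2f}, {ci.iloc[1]:.2f})")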
Relatively few observations were missing data (with control variables missing, on average, 3% of observations). We preserved missing observations by producing 20 imputed data sets, using the multivariate normal method, and averaging results across the data sets. All analyses employ survey weights to account for the complex sampling design and non-response. All analyses were conducted with Stata 13.0.

Table 1 presents weighted frequencies of children's health. As expected, relatively few children are in fair or poor health (1.8%), have an activity limitation (5.2%), or have chronic school absence (3.9%). Among the nine specific health outcomes, obesity is most common (16.1%), followed by asthma (8.4%), ADD/ADHD (7.6%), behavioral or conduct problems (6.3%), anxiety problems (6.0%), learning disability (5.7%), speech or other language disorder (4.7%), developmental delay (4.5%), and depression (2.7%). More than one-quarter (27.1%) of children currently have any of the nine specific health conditions and, on average, children have 0.52 health conditions (range: 0 to 9).

Table 2 presents weighted descriptive statistics of the sample. About half (48.9%) of children are female. The majority are White (51.9%), followed by Hispanic (24.5%), Black (12.7%), and other race (10.9%). More than one-quarter (26.0%) are immigrants or children of immigrants. More than one-fifth (21.9%) are living in households with incomes below the poverty line. Among the eight additional ACEs considered, parental divorce or separation is most common (25.4%), followed by parental incarceration (8.0%), household member mental illness (7.8%), income difficulties (6.4%), witness of household member abuse (5.6%), witness of violence (3.7%), racial discrimination (3.6%), and parental death (3.2%).

Table 2 also shows demographic, socioeconomic, and household differences between children exposed and not exposed to household member substance problems. Children exposed to household member substance problems are less likely than others to have married parents (25.2% compared to 64.6%, p < .001), less likely to have employed parents (88.9% compared to 94.5%, p < .001), and more likely to be living in households with incomes below the poverty line (28.7% compared to 21.3%, p < .001). Children exposed to household member substance problems are also more likely to experience all of the eight other ACEs.
Frequency of household member substance problems
Fig. 1 documents the weighted frequencies of household member substance problems, first for the full sample and then by child age. Nearly one-tenth (9.0%) of children have been exposed to household member substance problems. As expected, the percentage of children exposed to household member substance problems increases with age. About 1.6% of children under age 1 have been exposed to household member substance problems, compared to 7.0% of 6-year-old children, 10.3% of 12-year-old children, and 14.2% of 17-year-old children.
Descriptives of children's health, by household member substance problems
Table 3 presents frequencies and means of children's health by exposure to household member substance problems. Children exposed to household member substance problems, compared to other children, are nearly four times as likely to experience depression (8.1% compared to 2.1%, p < .001). These children are three times as likely to have anxiety problems (16.7% compared to 5.0%, p < .001), behavioral or conduct problems (17.1% compared to 5.3%, p < .001), and ADD/ADHD (18.1% compared to 6.5%, p < .001). They are also about twice as likely to have a learning disability (12.1% compared to 5.0%, p < .001), developmental delay (7.6% compared to 4.2%, p < .001), and asthma (12.8% compared to 8.0%, p < .001). Children exposed to household member substance problems are also more likely to have worse global indicators of health, measured by fair or poor health (3.5% compared to 1.6%, p < .01), activity limitations (9.1% compared to 4.8%, p < .001), and chronic school absence (8.1% compared to 3.4%, p < .001). They are also nearly twice as likely to have any specific health condition (47.2% compared to 25.1%, p < .001) and, on average, have about twice as many health conditions (1.09 compared to 0.46, p < .001).
Estimating children's health as a function of household member substance problems
Table 4 presents results from logistic regression models (and, in the case of one outcome variable, negative binomial regression models) estimating children's health as a function of household member substance problems. Model 1, which adjusts for limited demographic characteristics, shows that children exposed to household member substance problems have worse health outcomes than their counterparts. The negative association between household member substance problems and children's health persists across all dependent variables. For example, children exposed to household member substance problems, compared to their counterparts, are more likely to be in fair or poor health (OR = 2.10; 95% CI = 1.17, 3.76), have an activity limitation (OR = 1.70; 95% CI = 1.37, 2.16), and have chronic school absence (OR = 2.05; 95% CI = 1.40, 2.98). These children are also more likely to have mental health conditions such as depression (OR = 2.59; 95% CI = 2.06, 3.26) and anxiety (OR = 2.81; 95% CI = 2.23, 3.54) and physical health conditions such as asthma (OR = 1.37; 95% CI = 1.09, 1.72) and obesity (OR = 1.31; 95% CI = 1.07, 1.67). They are also more likely to have any specific health condition (OR = 2.09, 95% CI = 1.78, 2.45) and, on average, have a greater number of specific health conditions (b = 0.65, 95% CI = 0.55, 0.76).
Model 2 further adjusts for parent and household characteristics. The magnitude of the association between household member substance problems and children's health is reduced across outcomes, with these additional characteristics explaining between 26% (anxiety problems) and 69% (activity limitations) of the association (coefficients not presented). The associations remain statistically significant for eight of the 14 outcomes. Children exposed to household member substance problems, compared to their counterparts, have a greater likelihood of chronic school absence (OR = 1.62; 95% CI = 1.07, 2.46), learning disability (OR = 1.48; 95% CI = 1.18, 1.85), ADD/ADHD (OR = 1.69; 95% CI = 1.40, 2.05), depression (OR = 1.79; 95% CI = 1.40, 2.29), anxiety problems (OR = 2.14; 95% CI = 1.70, 2.71), and behavioral or conduct problems (OR = 2.07; 95% CI = 1.69, 2.54). They also have a greater likelihood of any specific health condition (OR = 1.65, 95% CI = 1.40, 1.93) and have a greater number of specific health conditions (b = 0.40, 95% CI = 0.29, 0.50). Model 3, which further adjusts for eight additional ACEs, shows household member substance problems is significantly associated with only two of the 14 outcomes. Children exposed to household member substance problems, compared to their counterparts, have 1.25 times the odds of ADD/ADHD (95% CI = 1.00, 1.56) and 1.39 times the odds of anxiety problems (95% CI = 1.06, 1.83). In supplemental analyses, we examined which of the eight additional ACEs explained the largest percentage of the association between household member substance problems and children's health. We found that household member mental health explained the largest share of the association (55% of the association, on average, across all of the outcome variables), followed by parental incarceration (17%), witness of household member abuse (16%), witness of violence (13%), income difficulties (9%), parental divorce or separation (8%), racial discrimination (6%), and parental death (0%).
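One way to read the negative binomial coefficients reported above, a step not spelled out in the text, is to exponentiate them into incidence rate ratios:

$$\mathrm{IRR} = e^{b}, \qquad e^{0.65} \approx 1.92, \qquad e^{0.40} \approx 1.49$$

On this reading, exposed children have roughly 1.9 times as many specific health conditions under Model 1, consistent with the descriptive comparison of 1.09 versus 0.46 conditions, and roughly 1.5 times as many once parent and household characteristics are adjusted for in Model 2.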
The models presented combine children of all ages, but it is possible that the association between household member substance problems and children's health varies across children's age. We consider this possibility in Appendix Table 2, which presents results from regression models estimating children's health as a function of household member substance problems separately across the following three age groups: (1) children ages 0 to 5, (2) children ages 6 to 11, and (3) children ages 12 to 17. By and large, this table shows that the statistically significant associations between household member substance problems and children's health are consistent across children's age groups. However, there is some evidence that the magnitude of the association is larger among children ages 0 to 5 (compared to children ages 6 to 11 or children ages 12 to 17). For example, household member substance problems is associated with 1.96 times the odds of any specific health condition among children ages 0 to 5 (95% CI = 1.37, 2.81), compared to 1.65 times the odds among children ages 6 to 11 (95% CI = 1.25, 2.16) and 1.53 times the odds among children ages 12 to 17 (95% CI = 1.25, 1.88).
Discussion
Living with a household member who has problems with drugs or alcohol is considered an adverse childhood experience (ACE), defined as a potentially stressful or traumatic event (Felitti, 2009). Despite the sizable number of children exposed to household member substance problems, as well as reasons to believe that household member substance problems is a social determinant of children's health, little research examines differences in children's health among children exposed and not exposed to household member substance problems (Berg et al., 2016;Jääskeläien et al., 2016;Kandel, 1990;Osborne & Berger, 2009;Raitasalo & Holmila, 2017;Thompson et al., 2017;Zebrak & Green, 2016). This is an especially important oversight given the burgeoning research on the social determinants of children's health (Mehta, Lee, & Ylitalo, 2013;Turney, Lee, & Mehta, 2013) and the health consequences of ACE exposure more generally (Anda et al., 1999;Chapman et al., 2004;Corso, Edwards, Fang, & Mercy, 2008;Felitti, 2009;Felitti et al., 1998;Gilbert, Patel, Farmer, & Lu, 2015;Klassen, Chirico, O'Leary, Cairney, & Wade, 2016;Turney, 2018;Wade, Shea, Rubin, & Wood, 2014). In this article, we use newly released data from the 2016 National Survey of Children's Health (NSCH), a probability sample of U.S. children, to provide a nationally representative accounting of the association between household member substance problems and children's health.
Results suggest three conclusions. First, we find that a sizable percentage of U.S. children are exposed to household member substance problems (Lipari & Van Horn, 2017a, 2017b). Nearly one-tenth (9.0%) of children have lived with a household member who has drug or alcohol problems. By age 17, about one-seventh (14.2%) of children have experienced household member substance problems. Importantly, household member substance problems is the second most commonly reported ACE (following parental divorce or separation).
Second, in accordance with the stress process perspective, we find that children exposed to household member substance problems are more likely than other children to have health problems (Avison, 2010; Pearlin, 1989; Pearlin et al., 1981, 1997). These descriptive differences exist across all health outcomes considered. On average, children exposed to household member substance problems have worse global health (measured by fair or poor health, activity limitations, and chronic school absence) than other children. They are also between two and four times as likely to have mental health problems (such as depression and anxiety) and physical health problems (including obesity and asthma). Third, for all but two of the 14 outcomes considered, the descriptive associations between household member substance problems and children's health fall from statistical significance in fully adjusted multivariate models. Adjusting for parent and household characteristics (such as parental marital status and educational attainment) explains some of the association between household member substance problems and children's health. Adjusting for additional ACEs (such as household member mental illness and parental incarceration) further explains the association between household member substance problems and children's health. Importantly, these models provide a conservative estimate of the relationship between household member substance problems and children's health, as they adjust for other ACEs that are both correlated with household member substance problems and are possibly endogenous to household member substance problems. For example, it is quite possible that household member substance problems engenders an additional ACE, parental incarceration, which then initiates and exacerbates health problems among children.
In the most rigorous models, those that adjust for child, parent, and household characteristics (including additional ACEs), household member substance problems is only associated with an increased risk of ADD/ADHD and anxiety. This means that, for most health outcomes, the association between household member substance problems and children's health results not from household member substance problems but instead from characteristics correlated with household member substance problems (especially additional ACEs). This also means that household member substance problems is independently associated with ADD/ADHD and anxiety. The stress process perspective is non-specific in nature, so it does not provide guidance as to why these two outcomes may be especially reactive to household member substance problems, but future research should work to understand how social determinants of children's health may be differentially associated with specific outcomes (Pearlin et al., 1981).
The analyses suggest some possible explanations for the descriptive association between household member substance problems and children's health. For example, the magnitude of these associations decreases after adjusting for socioeconomic characteristics such as parental educational attainment and household poverty. The relationship between household member substance problems and socioeconomic characteristics is likely bi-directional, and the cross-sectional data do not facilitate disentangling these bi-directional relationships, but this finding provides suggestive evidence that household economic instability is a pathway linking household member substance problems to children's health (Bloom et al., 2009;Rehm et al., 2009).
Similarly, the analyses provide suggestive evidence that additional ACEs-such as parental divorce or parental incarceration-may link household member substance problems to children's health. Recent research finds that children who experience parental incarceration are eight times as likely as their counterparts to experience household member substance problems (Corso et al., 2008). Again, the direction of causality is likely bi-directional, with parental incarceration and household member substance problems influencing one another, and these data do not allow us to consider these complexities. That said, the findings suggest a relationship between household member substance problems and other ACEs, which together may have deleterious consequences for children's health (Felitti, 2009;Jimenez, Wade, Lin, Morrow, & Reichman, 2016). Establishing proper time-ordering between the dependent, independent, and control variables, as well as untangling the mechanisms linking household member substance problems to children's health, are important directions for future research.
Limitations
We used nationally representative data that are appropriate to examine the association between household member substance problems and children's health. However, limitations exist. First, these observational data preclude causal conclusions, as it is possible there exist unobserved characteristics that would render the relationship between household member substance problems and children's health statistically non-significant. Unobserved characteristics may render the association between household member substance problems and children's ADD/ADHD and anxiety spurious. Relatedly, the cross-sectional data necessitate that household member substance problems is measured contemporaneously with the control variables; therefore, the analyses likely obscure some characteristics that link household member substance problems and children's health. Second, similar to other data sources, the measure of household member substance problems is reported by the parent respondent and is not a clinical or diagnostic indicator of alcohol or drug abuse (Osborne & Berger, 2009). Third, important information about household member substance problems remains unobserved. The data do not include information about who in the household has substance problems, how long the child was exposed to household member substance problems, and how frequently the child was exposed to substance use; these contingencies may be differentially associated with children's health. These limitations are outweighed by the large, recent, and nationally representative sample but provide important directions for future research.
Conclusions
These findings have important public health implications. Household member substance problems is a commonly experienced ACE, with nearly one-tenth of all children in the United States having lived with a household member with an alcohol or drug problem. Children exposed to household member substance problems are a vulnerable population, as these children have more health problems than their counterparts. Pediatricians should consider screening parents for substance problems and direct parents to appropriate resources for treatment and, in turn, alleviate stressors in children's lives. Pediatricians should also pay particular attention to the health of children living with family members who have substance problems. Given that household member substance problems are concentrated among socioeconomically disadvantaged children, children at a greater risk of health problems than their counterparts, this ACE may exacerbate existing socioeconomic inequalities in children's health (Bloom et al., 2009).
Ethics
This research does not involve human subjects. This research was deemed exempt from human subjects research by the institutional review board at the University of California, Irvine.
Appendix A. Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1016/j.ssmph.2019.100400.
The Impact of Appearance Satisfaction and Self-Esteem on Sexual Assertiveness among Female University Students
Background: University students are in early adulthood, a life stage in which they are expected to establish a sound attitude and value system about sexual matters. When teenagers enter college, they have more opportunities to interact with the other sex and come to live a relatively freer life, which confronts them with novel situations involving sexual issues. Sexual assertiveness is a mandatory communication strategy for university students, especially for female students, who are living in this rapidly changing sexual culture. This study therefore aimed to examine the relationships among appearance satisfaction, self-esteem, and sexual assertiveness and to identify the factors influencing sexual assertiveness in female university students. Methods: A total of 166 female undergraduate students participated in this study. Data were collected between September and October 2018 through self-reported questionnaires consisting of the Body Esteem Scale, the Self-Esteem Scale, and the Sexual Assertiveness Scale. Data were analyzed with SPSS/WIN 25.0 using descriptive statistics, t-test, one-way ANOVA, Pearson's correlation coefficient, and multiple linear regression. Results: Sexual assertiveness was positively correlated with appearance satisfaction (r=.50, p<.001) and self-esteem (r=.64, p<.001). Multiple regression showed that satisfaction with major (β=.12, p=.043) and self-esteem (β=.10, p<.001) explained 48.3% of the variance in sexual assertiveness (F=52.28, p<.001). Conclusions: To improve sexual assertiveness, it may be helpful for female university students to accept their appearance positively and to strengthen their self-esteem, and appearance satisfaction and self-esteem should be incorporated when planning counseling and interventions on sexual assertiveness for female university students.
Introduction
University students are in early adulthood, a life stage in which they are expected to make specific plans for the future, develop a sound character, and establish a healthy attitude and value system about sexual matters (Lee YR et al., 2013). When teenagers enter college, they have more opportunities to interact with the other sex and come to live a relatively freer life, which confronts them with novel situations involving sexual issues. Although university students are fully mature in a physical sense, they may not yet have established the values needed to control sexual impulses and make sound decisions. Therefore, many sexual acts are based neither on independent decision making nor on their own choices (Jeon GS et al., 2004). This can cause both physical and mental health problems, and women are especially vulnerable compared to men when such problems arise. To minimize these risks, women need to assert their sexual rights themselves (Choi MS et al., 2014).
As recent cases of sexual harassment and sexual assault have come to public attention, the 'Me Too movement' has spread around the world. Its spread on university campuses in particular has brought sex-related incidents to light and raised awareness of sexual violence in universities. To prevent and resolve sexual harassment in the university, individual characteristics that bear on not concealing the harm, controlling sexual situations, and communicating one's opinions or thoughts are important (Roh ES, 2016). Yet women's sexual assertiveness tends not to be readily accepted in Korean culture and society because of sexually discriminatory social norms and attitudes (Lee YR et al., 2013), and this deserves attention.
We live in a society in which a skinny body is idealized and appearance is emphasized more than almost anything else (Heo NR, 2018). Appearance satisfaction describes overall satisfaction with the body, based on an individual's satisfaction with her own face and appearance; it includes a subjective, comprehensive evaluation of how she views her own appearance and how she believes others perceive it (Kim HK, 2018). Female college students clearly take appearance very seriously: in a 2015 survey by Gallup Korea, 88% of respondents aged 19-29 thought that appearance is important in life, and 80% answered that they paid attention to how they appear (Gallup Korea, 2015). For many women, satisfaction with their appearance is tied to satisfaction with themselves: when appearance satisfaction is higher, they show a more active attitude and higher self-esteem. Conversely, when a woman feels inferior about her appearance, this can negatively affect her self-concept and lead to low self-esteem (Nho JH et al., 2014).
Self-esteem is the evaluation and judgment that an individual consciously maintains about oneself. People with higher self-esteem tend to respect themselves, regard themselves as worthy, and have the capacity for self-expression and self-assurance and for making a good impression (Rosenberg, 1965). That is, the level of self-esteem serves as an index of an individual's socio-cultural adaptation and as a motivational factor that regulates behavior, which makes self-esteem an important concept for psychosocial health (Heo NR, 2018). Female university students' body image and self-esteem are positively correlated (Nho JH et al., 2014), and stress about appearance affects self-esteem (Heo NR, 2018).
Sexual assertiveness is a vital element in protecting oneself and maintaining sexual health (Zerubavel et al., 2013). In sexually conservative cultures, not many university students are assertive in sexual situations. Female students in particular tend to show less sexual assertiveness than male students (Jang HS et al., 2019), because stricter norms are applied to women than to men when it comes to sexual practices, and female students feel sexually powerless and fearful (Zerubavel et al., 2013). Female university students therefore need to take an autonomous and responsible stance toward their sexual situations (Kim YJ et al., 2019). Sexual assertiveness is a mandatory communication strategy for university students in their early adulthood, especially for female students living in this rapidly changing sexual culture (Lee HL, 2019). Previous research shows that sexual assertiveness of university students is related to gender role stereotypes (Choi SH, 2016), self-esteem (Woo CH et al., 2019), parent-child communication (Kim BM et al., 2015), relationship satisfaction (Lee JY, 2017), and so on, and self-esteem in particular is known to be a factor with a positive impact on sexual assertiveness (Jang HS et al., 2019; Woo CH et al., 2019). Positive or negative feelings about one's body are related to preventive sexual behaviors (Auslander et al., 2012), and female students report relatively higher stress about their appearance than male students (Heo NR, 2018). However, few studies have identified how female students' feelings about their bodies and their satisfaction or stress regarding appearance affect sexual assertiveness (Chae HJ, 2019). Also, sexual value systems differ by gender, so female and male university students are expected to differ not only in sexual assertiveness but also in the factors that influence it. Gender differences should therefore be taken into account when identifying the factors influencing sexual assertiveness; however, this aspect has not been dealt with seriously enough in previous studies (Lee SY, 2015; Jang HS et al., 2019; Kim YJ et al., 2019).
Thus, this study aimed to examine the relationships among appearance satisfaction, self-esteem, and sexual assertiveness and to identify the factors influencing sexual assertiveness in female university students. The findings are intended to provide baseline data for developing sexual assertiveness improvement programs for female university students.
Study design
This study is a descriptive survey study to identify the effects of appearance satisfaction and self-esteem on sexual assertiveness in female university students.
Study participants
The participants in this study were female university students currently enrolled at S University. A total of 166 students voluntarily agreed to participate in the study and provided written informed consent.
To determine the sample size, G*Power 3.1.9, a statistical power calculation program based on Cohen's sampling formula, was used. For a multiple regression analysis with a two-tailed significance level of .05, an effect size of .15, a statistical power of .90, and 10 predictors, the minimum sample size was 147. A total of 184 questionnaires were distributed in consideration of an anticipated dropout rate of 20%, and 170 questionnaires were returned (response rate 92.4%). Of these, 166 were used for the final analysis, after excluding 4 with insufficient responses.
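The reported minimum sample size can be checked with a short calculation based on Cohen's noncentral-F power formula for the multiple regression F test; the authors used G*Power, so the sketch below is only an independent approximation of the same computation under the stated inputs (f² = .15, α = .05, power = .90, 10 predictors).

from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors=10, f2=0.15, alpha=0.05):
    """Power of the overall F test in multiple regression (Cohen's formulation)."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    ncp = f2 * n  # noncentrality parameter, lambda = f2 * N
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, ncp)

n = 12  # smallest n with at least one denominator degree of freedom
while regression_power(n) < 0.90:
    n += 1
print(n)  # minimum N for power >= .90; should be close to the reported 147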
1) Appearance satisfaction
To measure satisfaction with appearance, this study used Lee JA's (2005) translated version of the Body Esteem Scale of Mendelson et al. (1985). This scale consists of 20 items in total: 5 items on physical charm, 7 items on physical strength, and 8 items on physical condition. Each item is rated on a 5-point Likert scale, and possible scores range from a minimum of 20 to a maximum of 100; scores closer to 20 indicate more negative appearance satisfaction, while scores closer to 100 indicate more positive appearance satisfaction. The Cronbach's α of this scale was .85 in the study of Lee JA (2005) and .91 in this study.
2) Self-esteem
The translated version of the Self-Esteem Scale (RSES) developed by Rosenberg (1965) was used to measure self-esteem; the translation was used in a previous study by Lee HJ et al. (1995). This scale consists of 10 items in total: 5 items on positive self-esteem and 5 items on negative self-esteem. Each item is rated on a 5-point Likert scale from 'strongly disagree (1 point)' to 'strongly agree (5 points)'. The negative self-esteem items are reverse-scored, and a higher total score indicates a higher degree of self-esteem. The Cronbach's α of this scale was .89 in the study of Lee HJ et al. (1995) and .89 in this study.
3) Sexual assertiveness
Sexual assertiveness was measured with the sexual assertiveness factor from the Sexual Development Assessment Scale developed by Ha EH et al. (2007). The scale has 12 items, covering self-determination of sexual behavior in heterosexual relationships, consensual decision-making about sex-related behaviors, and effective communication and self-assertiveness. The measure was developed for high school students, but its validity and reliability have also been confirmed for university students (Lee KI et al., 2017). Each item is rated on a 5-point Likert scale from 'strongly disagree (1 point)' to 'strongly agree (5 points)'; a higher score indicates a higher degree of sexual assertiveness. The Cronbach's α of this scale was .86 in the study of Ha EH et al. (2007), .90 in the study of Lee KI et al. (2017), and .84 in this study.
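The reverse scoring and the Cronbach's α values reported for these scales can be illustrated with a small sketch. The item responses below are simulated, the choice of which items are reverse-scored is an assumption for illustration, and the helper function is a generic Cronbach's α rather than the authors' SPSS output.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Simulated 5-point Likert responses to the 10 self-esteem items (toy data)
raw = rng.integers(1, 6, size=(166, 10)).astype(float)

# Reverse-score the negatively worded items (assumed here to be items 6-10)
reverse_idx = [5, 6, 7, 8, 9]
scored = raw.copy()
scored[:, reverse_idx] = 6 - scored[:, reverse_idx]

total_score = scored.sum(axis=1)  # higher total = higher self-esteem
# With real responses this alpha would be compared to the reported .89
print(round(cronbach_alpha(scored), 2), round(total_score.mean(), 1))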
Data collection and ethical consideration
This study was performed with female students taking courses at the Female University Students Career Development Center of S University, located in Jeollanam-do. The researchers obtained ethical approval from the institutional review board at S University before collecting data (IRB No. 040173-201807-HR-022-04). Data collection was conducted from September to October 2018. The researchers explained the purpose of the study to the person in charge of the Female University Students Career Development Center, visited each classroom to describe the study, and recruited participants. The subjects were female university students who voluntarily stated their intention to participate. Participants were informed about confidentiality, anonymity, and their ability to withdraw from the research at any time without any disadvantage. The questionnaire was filled out by participants in a private and comfortable setting; each participant sealed the completed questionnaire and put it in a collection box located in the classroom. The questionnaire took about 10 minutes to complete.
Data analysis
The collected data were analyzed using the SPSS/WIN 25.0 program, with statistical significance set at p<0.05. The participants' general characteristics, appearance satisfaction, self-esteem, and sexual assertiveness were analyzed descriptively using frequencies, percentages, means, standard deviations, minimums and maximums, skewness, and kurtosis. Differences in the study variables according to the participants' general characteristics were analyzed with the independent t-test and one-way ANOVA, with the Scheffé test used for post-hoc comparisons. Relationships among the variables were examined using Pearson's correlation coefficients, and the factors affecting sexual assertiveness were identified using stepwise multiple linear regression.
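As a rough illustration of this pipeline, the sketch below reproduces the group comparisons and correlations in Python with SciPy. The column names are hypothetical placeholders, and the original analysis was performed in SPSS, not in code.

```python
# Minimal sketch of the comparison/correlation steps, assuming a
# pandas DataFrame with hypothetical column names; the study itself
# used SPSS/WIN 25.0.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")  # hypothetical data file

# Independent t-test: sexual assertiveness by satisfaction with major.
satisfied = df.loc[df["major_satisfaction"] == 1, "sexual_assertiveness"]
unsatisfied = df.loc[df["major_satisfaction"] == 0, "sexual_assertiveness"]
t, p = stats.ttest_ind(satisfied, unsatisfied)

# One-way ANOVA across a multi-level characteristic (e.g., grade level).
groups = [g["sexual_assertiveness"].values for _, g in df.groupby("grade")]
f, p_anova = stats.f_oneway(*groups)

# Pearson correlations among the three main variables.
r1, p1 = stats.pearsonr(df["appearance_satisfaction"], df["self_esteem"])
r2, p2 = stats.pearsonr(df["self_esteem"], df["sexual_assertiveness"])
print(t, p, f, p_anova, r1, r2)
```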
General characteristics of participants

The general characteristics of the participants are summarized in Table 1.

The scores for appearance satisfaction, self-esteem, and sexual assertiveness
The participants' average scores were 58.81±0.95 for appearance satisfaction, 35.16±0.49 for self-esteem, and 52.60±8.56 for sexual assertiveness. Among the subscale scores of appearance satisfaction, physical charm was 13.27±3.74, physical strength was 21.71±4.72, and physical condition was 24.23±4.80. The subscale scores of self-esteem were 18.60±3.38 for positive self-esteem and 16.56±3.46 for negative self-esteem. Finally, among the subscale scores of sexual assertiveness, self-determination of sexual behavior was 11.57±1.66, consensual decision ability regarding sex-related behaviors was 13.33±1.43, and effective communication and self-assertiveness was 26.38±2.95 (Table 2).
Differences in the participants' sexual assertiveness in relation to their general characteristics
Among the general characteristics of the participants, only satisfaction with one's major produced a statistically significant difference in sexual assertiveness scores (t=−2.52, p=.013) (Table 1).
Relationship between appearance satisfaction, self-esteem, and sexual assertiveness
The participants' appearance satisfaction was positively correlated with self-esteem (r=.64, p<.001) and with sexual assertiveness (r=.50, p<.001). Self-esteem was also positively correlated with sexual assertiveness (r=.69, p<.001) (Table 3).
Factors affecting participants' sexual assertiveness
Stepwise multiple linear regression analysis was conducted with appearance satisfaction, self-esteem, and the general characteristics significantly associated with sexual assertiveness as independent variables, in order to identify the factors affecting the participants' sexual assertiveness. To test the assumptions of linear regression, normality and multicollinearity among all variables were checked. Variance inflation factors (VIFs) were calculated to check for multicollinearity among the independent variables; the VIFs ranged from 1.145 to 1.805, and VIF values under 10 indicate an absence of multicollinearity. The Durbin-Watson statistic was 1.848, indicating that the error terms were independent of each other, without autocorrelation. Thus, the assumptions for multiple regression analysis were satisfied.
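The multicollinearity and autocorrelation diagnostics described above can be reproduced with statsmodels, as in the hedged sketch below. The predictor names are hypothetical, and the thresholds (VIF under 10, Durbin-Watson near 2) follow the conventions stated in the text.

```python
# Minimal sketch of the regression diagnostics, assuming a pandas
# DataFrame `df` with hypothetical column names.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

X = sm.add_constant(df[["appearance_satisfaction", "self_esteem",
                        "major_satisfaction"]])
y = df["sexual_assertiveness"]
model = sm.OLS(y, X).fit()

# VIF per predictor (skipping the constant); values under 10
# suggest no problematic multicollinearity.
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}

# A Durbin-Watson statistic near 2 indicates independent residuals.
dw = durbin_watson(model.resid)
print(vifs, dw, model.rsquared)
```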
The scattering of the standardized residuals, the uniformity of the residuals, and normality were examined through the P-P plot. In checking for outliers, the standardized residuals were all below an absolute value of 3, and no influential case was found, since Cook's distance did not exceed an absolute value of 1.
Discussion
This study was performed to identify the relationships among appearance satisfaction, self-esteem, and sexual assertiveness, and the factors that affect sexual assertiveness, in female university students.
The appearance satisfaction of female university students turned out to be moderate at 58.81 (mean item score 2.94), which supports Lee SY (2015)'s finding, measured with the same tool, that female university students' appearance satisfaction was 3.00. It also supports the result of Kim MJ et al. (2004) that male and female university students' satisfaction with their appearance was 2.97; Kim MJ et al. (2004), though using a different tool, argued that university students' satisfaction with their appearance by self-evaluation was lower than evaluation by others. Therefore, it seems necessary for future studies to examine the degree of satisfaction perceived by others as well as by oneself. The self-esteem of the participants in this study was 35.16 (mean item score 3.52), which confirms the results of Kim MJ et al. (2004) and Choi SH (2016), but it was lower than the findings of other previous research in which university students' self-esteem ranged from 38.40 to 39.68 (Lee KI et al., 2017; Lee HL, 2019). Males are known to have higher self-esteem than females (Kim BM et al., 2015), and this difference may reflect the higher proportion of male students in many previous studies. The self-esteem of university students is influenced by many factors, such as satisfaction with school life and stress from career searching and appearance (Heo NR, 2018). Therefore, a replication study examining university students' self-esteem should be conducted with such variables controlled and with a larger population participating.
Female students' sexual assertiveness scored 52.60 (mean item score 4.38); by subcategory, consensual decision ability regarding sex-related behaviors was 13.33 (mean item score 4.44), effective communication and self-assertiveness was 26.38 (mean item score 4.40), and self-determination of sexual behavior was 11.57 (mean item score 3.86). These scores are lower than the 4.45 reported by Woo CH et al. (2019), who measured sexual assertiveness with the same tool, but higher than the 4.26 reported by Lee KI et al. (2017). The difference likely stems from differences between the participant groups: the participants in the previous study were nursing students and included both male and female students, and the more general knowledge about sex and information about pregnancy and sexually transmitted diseases students have, the more sexually assertive they are (Kim YH et al., 2013). In the analysis of the subcategory scores of sexual assertiveness, the score for self-determination of sexual behavior was the lowest; this aspect therefore needs to be fully considered when counseling or education sessions are planned to foster the sexual assertiveness of female university students. As for sexual assertiveness scores by general characteristics, participants satisfied with their major were more sexually assertive than those who were not. There is no similar previous study with which to compare this result, but considering that satisfaction with a major usually leads to better academic performance, it supports Choi SH (2016), which showed that students with above-average grade points exhibited greater sexual assertiveness than those with lower grade points. Although this study did not identify each participant's major, future studies will need to examine differences in sexual assertiveness by participants' specific majors as well as their satisfaction with those majors. Meanwhile, there was no significant difference in sexual assertiveness by participants' religion, dating experience, or parents' attitude toward sexuality, which supports the result of Woo CH et al. (2019).
However, it should be noted that sexual assertiveness by religion and dating experience has yielded different results across studies (Chae HJ, 2019; Kim YJ et al., 2019). It would be necessary to conduct a replication study with an enlarged group of participants and with a more specific examination and analysis of sexual experiences as well.
The participants' appearance satisfaction was positively correlated with self-esteem. This supports the conclusion of Kim MJ et al. (2004) that the more positive others' perception of their appearance and the greater their satisfaction with their own appearance, the higher female university students' self-esteem. The current study also found that sexual assertiveness had a statistically significant positive relationship with appearance satisfaction and self-esteem. This supports the findings that young college women dissatisfied with their bodies may be less likely to enforce their rights of sexual autonomy (Auslander et al., 2012) and that objectified body consciousness decreases sexual assertiveness (Manago et al., 2015). The close relationship between self-esteem and sexual assertiveness is supported by many other previous studies (Auslander et al., 2012; Kim BM et al., 2015; Jang HS et al., 2019; Woo CH et al., 2019). It also coincides with the result of Lee HL (2019) that higher self-esteem accompanies higher sexual assertiveness; that is, evaluating oneself highly and as worthy leads to having more control in a sexual situation by being able to communicate clearly to the other party what one wants and does not want, as well as one's emotions and thoughts.
The female students' satisfaction with major (β=.12, p=.043) and self-esteem (β=.10, p<.001) explained approximately 48.3% of the variance in sexual assertiveness. This supports much previous research identifying self-esteem as a strong predictor of sexual assertiveness (Kim BM et al., 2015; Jang HS et al., 2019; Lee HL, 2019; Woo CH et al., 2019). In this study, the appearance satisfaction of female college students did not statistically affect sexual assertiveness, which is consistent with the previous study of Lee SY (2015). Elsewhere, the perception of appearance has been a significant predictor of sexual assertiveness, with more positive views about appearance and weight associated with greater sexual assertiveness (Auslander et al., 2010). This difference from the present study may stem from gaps in social culture and prejudice between the environments in which the studies were conducted: Korean female students tend to idealize a very thin, even underweight, body and a small, pretty face like a celebrity's, and they tend to feel satisfied when their body approaches that ideal appearance (Kim HK, 2018). Body mass index (BMI) and body image did not play a role in the perception of sexual assertiveness (Auslander et al., 2010).
It is necessary to consider the difference between female students' actual BMI and their perceived BMI when identifying their sexual assertiveness (Chae HJ, 2019). We therefore need to replicate this study and to carefully explore the factors influencing sexual assertiveness, including self-perceived appearance satisfaction, others' evaluations of the subject's appearance, perceived weight satisfaction, and BMI. It is also necessary to compare the sexual assertiveness of female university students between Korea and Western countries.
Sexual assertiveness that matches one's desires, expectations, thoughts, opinions, and emotions even in sexual situations can be cultivated and improved through training to enhance sexual consciousness, training to actively express one's legitimate needs and opinions with respect to individual rights, and training to express one's desires and expectations honestly and appropriately in relationships. Therefore, to improve the sexual assertiveness of female university students in Korea, it is necessary to develop various education or intervention programs that consider Korean culture and to practice personality training that allows students to positively accept their appearance and build self-esteem. When planning a program to improve sexual assertiveness, both interpersonal and attitudinal aspects need to be considered. Such a program could include the establishment of appropriate sexual values and sexual identity, emotion-control training for positive self-internalization, self-assertiveness and sexual communication education, and integrated intervention for sexually assertive skills and sexual decision-making. Individual or group counseling is also helpful, and these interventions or education programs should be applied systematically and progressively.
This study was conducted on students taking a liberal arts course at the female university students' career center of one university, so caution is needed not to overinterpret the results. Although students from various majors were included, their specific majors were not identified. In the future, a replication study that broadens the range of participants and identifies various demographic factors, including participants' majors, will need to be conducted.
Conflicts of interest
The authors declared no conflict of interest.
Effect of interferon and ribavirin combined with amantadine in interferon and ribavirin non-responder patients with chronic hepatitis C (genotype 1)
AIM: To evaluate the efficacy of amantadine plus interferon-alpha and ribavirin in non-responder patients with chronic hepatitis C. METHODS: Twenty-six non-responder patients received a regimen of IFN-α-2a at a dose of 6 million units three times a week, 1 000-1 200 mg of ribavirin daily, and 200 mg of amantadine daily in divided doses over 48 wk. After the end of treatment, at the 72nd wk, the sustained viral response rate was determined. RESULTS: An early (after 12 wk of therapy) response was seen in 34.6% (9/26) of patients. The response rate at the 24th wk was 42.3% (11/26). The end of treatment response (ETR) was 53.8% (14/26). The sustained viral response (SVR) was 42.3% (11/26). There was a statistically significant difference between 0 and 12 wk (P = 0.04), 0 and 24 wk (P = 0.01), 0 and 48 wk (P = 0.00), and 0 and 72 wk (P = 0.001). No patient had severe adverse effects during the treatment. CONCLUSION: A combination regimen of interferon-α, ribavirin and amantadine can enhance the sustained viral response in IFN-α and ribavirin non-responder patients with HCV. Triple therapy with amantadine should be evaluated further.
Chronic hepatitis C is one of the leading causes of chronic liver disease and related complications such as cirrhosis and hepatocellular carcinoma [1-4]. Interferon (IFN) or IFN-containing regimens with ribavirin are still the fundamental treatment strategies for chronic hepatitis C. The combination of ribavirin plus interferon-alpha (IFN-α) has led to marked advances in the treatment of IFN-α-naive or relapser patients with chronic hepatitis C, but has been shown to be only marginally effective in IFN-α non-responders [4-8]. In these patients, combination treatment with IFN-α-2b and ribavirin reached approximately 10-25% sustained viral response (SVR) [9-11]. Amantadine is a relatively inexpensive antiviral drug with activity noted against the Flaviviridae family, to which HCV belongs. Although a few early reports documented a good response to amantadine monotherapy, subsequent studies failed to confirm these results [12,13]. Pilot studies have suggested that the addition of amantadine to IFN is effective against HCV. Brillanti et al [14] reported that combination treatment with IFN, ribavirin and amantadine reached a relatively high sustained viral eradication rate of 48%. However, it is still debatable whether amantadine alone or in combination with IFN-α and ribavirin could improve the viral response in patients who failed to respond to previous combination therapy with IFN-α and ribavirin. We aimed to evaluate the safety and efficacy of IFN-α, ribavirin and amantadine in a study group of patients with chronic hepatitis C who were considered "non-responders" to previous treatment. Adverse effects that developed during therapy were also evaluated.
MATERIALS AND METHODS
The Institutional Review Board of the hospital approved this study. Adult patients with chronic hepatitis C who had failed to respond to previous treatment (non-responders) were enrolled. Non-responders had not responded to induction therapy with 9 million units of IFN-α-2a daily for four weeks, followed by 3 million units three times a week for an additional 11 mo, and ribavirin (1 000-1 200 mg in divided daily doses) (n = 27). These patients had no previous evidence of a virologic response (undetectable HCV RNA level) or biochemical response (normal alanine aminotransferase level) to the combination regimen.
All patients gave informed consent, and those who agreed to participate and met the inclusion criteria were enrolled in the study. The following criteria were used to exclude patients: decompensated liver disease, immunocompromised status or human immunodeficiency virus positivity, severe psychiatric conditions, poorly controlled diabetes mellitus, active cardiopulmonary disease, renal insufficiency, seizure disorders, autoimmune disease, uncontrolled thyroid disease, other liver diseases, and pregnancy. Patients with a hemoglobin level of <130 g/L (for males) or <120 g/L (for females), a platelet count <100 000/mm³ or a leukocyte count <3 000/mm³ were also excluded. All patients were required to use an effective method of birth control during the entire study. All patients included in the study were proven to have HCV genotype 1b.
Treatment regimen
The regimen used in this study consisted of IFN-α-2a at a dose of 6 million units three times a week, 1 000-1 200 mg of ribavirin daily, and 200 mg of amantadine daily in divided doses. All candidates completed at least 30 d of washout from the last dose of their previous regimen prior to enrollment. After enrollment, patients received this regimen for 48 wk. After discontinuation of treatment, patients were followed for an additional 24 wk.
Clinical and laboratory evaluation
Patients were seen during treatment wk 2, 4, 8, 12, 16, and 24, and every four weeks thereafter until the end of treatment. Physical examinations were conducted during treatment wk 4, 8, and 12, and every four weeks thereafter until the end of therapy. HCV RNA levels were determined with the Cobas Amplicore HCV Monitor version 2.0 (detection threshold 100 copies/mL) at baseline and again at therapy wk 12, 24, 48, and 72. The viral genotype was determined at baseline. Virological assays were performed in the institutional laboratory.
Liver biopsies were performed within one month of entry into the study. Pretreatment biopsy reports were classified based on the degree of fibrosis and the activity of inflammation.
The severity of adverse events specific to interferon, ribavirin, and amantadine was recorded at each visit. No modification of the IFN or ribavirin dose was needed due to adverse events.
Study end points
The primary end point of this study was a combined (biochemical and virologic) response. Patients who reached the end points (undetectable HCV RNA level and normalization of the alanine aminotransferase level) were recorded as "responders". Patients with detectable HCV RNA during therapy or at the end of the treatment period were designated as "non-responders". Response rates at the 12th, 24th, and 48th wk (end of treatment response; ETR) and at the 72nd wk (sustained viral response) were compared. Sustained viral response (SVR) was defined as HCV RNA clearance at 24 wk after completion of treatment.
Statistical analysis
Response rates were compared with the McNemar test. A P value <0.05 was considered statistically significant. All analyses were performed with the SPSS statistical software package.
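The McNemar test compares paired proportions, here each patient's response status at two time points. The sketch below shows one way to run it in Python with statsmodels, using a hypothetical 2×2 table of paired outcomes rather than the study's actual data.

```python
# Minimal sketch of a McNemar test on paired response data; the
# counts in the table are illustrative only, and the study itself
# used SPSS.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: response at baseline (no/yes); columns: response at wk 12.
table = np.array([[17, 9],
                  [0,  0]])

# exact=True uses the binomial distribution, which is appropriate
# when the discordant-pair counts are small.
result = mcnemar(table, exact=True)
print(result.statistic, result.pvalue)
```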
RESULTS
A total of 27 patients who met the inclusion criteria were enrolled in the study. The clinicodemographic characteristics of the included patients are summarized in Table 1. No patient was cirrhotic, most had a high baseline viral load (>2 000 000 copies/mL), and all had genotype 1b. Of the 27 patients, one decided against treatment early and dropped out, but the liver biopsy result was included in the evaluation. No emergency condition or severe adverse effect required excluding patients during the study. The histological activity index (HAI) and fibrosis stage of the 27 patients are shown in Table 2. An early (after 12 wk of therapy) response was seen in 34.6% (9/26) of patients. The response rate at the 24th wk was 42.3% (11/26). ETR was 53.8% (14/26). SVR was 42.3% (11/26). There was a statistically significant difference between 0 and 12 wk (P = 0.04), 0 and 24 wk (P = 0.01), 0 and 48 wk (P = 0.00), and 0 and 72 wk (P = 0.001). Figure 1 shows the ETR (48th wk) and SVR (72nd wk).
The pattern of anemia and the decline in hemoglobin levels are depicted in Figure 2. This pattern of ribavirin-induced anemia was similar to that reported in previous studies [20-23]. Overall, hemoglobin levels dropped by 10-15 g/L during the first 4-8 wk after initiating treatment.
DISCUSSION
For previously untreated patients with chronic hepatitis C, treatment with interferon-alpha monotherapy resulted in a sustained viral eradication rate of <20%; combining IFN-α-2b with ribavirin improved this efficacy to around 40% [9,11,15]. Despite the relatively good efficacy of IFN-α-2b and ribavirin in untreated hepatitis C patients, this regimen is not very efficacious for those considered "non-responders" to previous treatment. This treatment-resistant group is a major concern for hepatologists because it most likely represents the most difficult group of HCV-infected individuals to treat. Brillanti et al [14] reported that combination treatment with IFN, ribavirin and amantadine reached a relatively high sustained viral eradication rate of 48% in non-responders. In this study, we found that the addition of amantadine to IFN-α-2a and ribavirin increased the HCV RNA clearance rate in the treatment of "non-responder" HCV patients.
An early virological response to IFN-α treatment is a strong predictor of SVR [2,5,6]. In our study, an early (after 12 wk of therapy) response was seen in 34.6% (9/26) of patients, and all of these patients achieved SVR. Response rates differed statistically between the 12th, 24th, and 48th wk, suggesting that the benefit of triple therapy gradually increased with prolongation of treatment, especially in patients who did not respond early during treatment.
Recent studies have shown unfavorable results of triple therapy in naive patients [16-19]. Berg et al [20] claimed that amantadine should be considered a potential anti-HCV drug in future studies. In non-responders, however, it was reported that the addition of amantadine was well tolerated and led to an improvement in SVR compared with retreatment with IFN-α and ribavirin [21-23]. Consistent with this, our results showed promising response rates, with an ETR of 53.8% (14/26) and an SVR of 42.3% (11/26) in genotype 1.
Amantadine is an inexpensive and well-tolerated drug, and our study showed that it was effective when used in combination with standard IFN-α and ribavirin [25]. Amantadine was very well tolerated by these patients, and we noted relatively low withdrawal rates compared with most previous studies. One patient dropped out at the start of the study, and another refused re-treatment after biopsy. We believe amantadine does not produce any significant additional side effects.
In patients with chronic hepatitis C, one of the most effective therapies is the combination of peginterferon-alpha-2b (1.5 µg/kg per wk) plus ribavirin [17-19]. The benefit is mostly achieved in patients with HCV genotype 1 infections. Manns et al [26] showed that the SVR rate was significantly higher (42%) among patients with HCV genotype 1; the rate for patients with genotype 2 and 3 infections was about 80% in all treatment groups.
In summary, re-treatment of a strictly defined non-responder group with a 48-wk course of a triple combination regimen of IFN-α-2a, ribavirin and amantadine is associated with a sustained viral response that cannot be ignored. Although our results were encouraging, further studies are needed. It is possible that alternative regimens using pegylated interferon-alpha in combination with ribavirin, or a triple combination regimen (pegylated interferon, ribavirin and amantadine), may be associated with much higher rates of sustained viral eradication and lower relapse rates.
Effect of the trends of estradiol level on the outcome of in vitro fertilization-embryo transfer with antagonist regimens: a single center retrospective cohort study
Background: The outcome of in vitro fertilization-embryo transfer (IVF-ET) is often determined according to follicles and estradiol levels following gonadotropin stimulation. However, there is no accurate indicator to predict pregnancy outcome, and it has not been determined how to choose subsequent drugs and dosages based on the ovarian response. This study aimed to make timely adjustments to follow-up medication to improve clinical outcomes based on the potential value of the estradiol (E2) growth rate. E2 levels were measured at several points during stimulation, and the ratios between them were divided into four quartile-based groups for each ratio: for Gn4/Gn0, groups A1 through A4, with cutoffs including 10.62 and 21.33 (group A4: Gn4/Gn0 > 21.33); for Gn7/Gn4, groups B1 through B4, with 2.39 among the cutoffs. Both ratios had clinical guiding significance, and a lower ratio significantly reduced the pregnancy rate (P < 0.001). The outcomes were positively linked to groups A (P = 0.040, P = 0.041) and B (P = 0.015, P = 0.017). Logistic regression analysis revealed that group A1 (OR = 0.440 [0.223–0.865], P = 0.017; OR = 0.368 [0.169–0.804], P = 0.012) and group B1 (OR = 0.261 [0.126–0.541], P < 0.001; OR = 0.299 [0.142–0.629], P = 0.001) were associated with lower odds of clinical pregnancy and live birth.
Introduction
During in vitro fertilization-embryo transfer (IVF-ET) cycles, controlled ovarian hyperstimulation (COH) with exogenous gonadotropin (Gn) to stimulate follicle development is a critical step in ensuring the acquisition of enough mature eggs and a satisfactory pregnancy rate. In clinical practice, the assisted reproductive technology (ART) outcome is often monitored according to the size and number of follicles and the serum estradiol (E2) level after Gn stimulation. However, the need for combined monitoring (transvaginal ultrasound plus serum estradiol) during ovarian stimulation is controversial.
Additionally, no accurate indicator to predict pregnancy outcome has been identified to date, and it has not been determined how to choose subsequent drugs and doses based on the ovarian response. Some have argued that vaginal ultrasound alone should be considered to simplify treatment, as combined monitoring is costly, time-consuming, and inconvenient [1]. Neither E2 on the day of hCG (human chorionic gonadotropin) administration nor at other stages was linked to pregnancy rates in women undergoing ART cycles [2-5]. Additionally, E2 levels were found to be a poor predictor of treatment success [6], although the evidence had a low overall quality [7]. Other researchers have suggested that E2 levels could be used to predict pregnancy outcomes in combination with FSH, age, inhibin B, and so on [8-10]. Some scholars suggested that a poor ovarian response can be characterized by peak E2 levels [11]. Phelps et al. and Kahyaoglu et al. [12,13] explored the relationship between E2 levels on the fourth day of Gn and the IVF outcome, and believed that the estradiol level on the fourth day of the COH cycle could predict the response of early follicles to ovarian stimulation; when the serum estradiol level on the fourth day of Gn was low, the current treatment cycle should be abandoned. Other groups concluded that a low E2 concentration after five days of Gn stimulation predicted high cycle cancellation and poorer pregnancy outcomes, even with similar numbers of oocytes and fertilization rates [14]. Lower E2 levels on the sixth day of Gn were also associated with a lower pregnancy rate and a lower likelihood of live birth [8,15,16]. It was reported that an appropriate range for E2 exists, and higher levels were not beneficial [17,18]. Older women (>35 years) appeared to be more vulnerable to the harmful effects of high E2 levels than younger women (≤35 years).
Valbuena et al. [19] found that a high E2 concentration affected embryonic adhesion, and a high E2 concentration also affected the endometrium's receptivity [20-22]. However, Blazar et al. [23-25] discovered that higher E2 levels on hCG day predicted a greater number of oocytes, and that any adverse impact on the endometrium can be overcome in IVF-ET. Additionally, based on percentile curves, Papageorgiou et al. [26] did not identify any deleterious effect of high E2 levels. Supraphysiological estradiol levels also did not affect oocyte or embryo quality [27].
Although opinions differ on how E2 should be interpreted during the COH cycle, it is undeniable that measuring E2 during the follicular phase has become part of routine clinical practice over the last decade. Thus, this study aimed to determine whether serum E2 levels and the E2 growth rate during Gn ovarian stimulation were correlated with IVF and pregnancy outcomes in 335 patients on antagonist regimens. Additionally, this study aimed to determine the link between E2 levels in different Gn stimulation periods and IVF-ET outcomes; the number of embryos was investigated to distinguish the effects of E2 on the endometrium from those on the embryos. If the hypothesis is substantiated, it may be time to seek a new parameter to evaluate the ovarian response during Gn stimulation, adjust the dose, and ensure the treatment outcome of COH.
Study subjects and protocol
From April 2017 to July 2020, our center conducted a retrospective analysis of infertility patients who underwent completed IVF-ET cycles and had a fresh embryo transfer.
The institutional human ethics committee approved the study protocol.
Exclusion criteria were: 1) patients with chromosomal abnormalities, reproductive malformations, adenomyosis, or a history of recurrent spontaneous abortion; 2) patients undergoing coasting to prevent ovarian hyperstimulation syndrome; and 3) patients who underwent a freeze-all strategy.
Ovarian stimulation protocol
Patients received IVF-ET treatment according to a fixed GnRH-ant (Cetrotide, Merck, Lyon, France) protocol [28]. On the second day of the menstrual cycle, recombinant human follicle-stimulating hormone 150-225 U (Gonal-F, Merck, Lyon, France; Puregon, MSD, Boulogne, France) was injected as Gn, with doses determined based on the patient's age, body mass index (BMI), bFSH, and bAFC. Oocytes were collected by follicular aspiration under ultrasound guidance, 34-36 h after triggering with GnRH-a (Triptoreline, Decapeptyl, Ipsen, France) or recombinant hCG (rhCG, Ovitrelle, Merck, Lyon, France). Eighteen hours after fertilization, embryo development was monitored daily and graded based on the number and size of blastomeres, the fragmentation rate, multinucleation, and early densification. On the third day following oocyte retrieval, an embryo with at least seven blastomeres (grades one and two) was defined as high quality [29].
On day 3, one or two embryos of the best quality were selected and transferred using a soft Wallace catheter. For luteal support, we used an injection of progesterone (20 mg/ampoule, Zhejiang Xianju Pharmaceutical Co., Ltd.) at 40 mg daily plus oral dydrogesterone tablets (10 mg/tablet, Abbott Healthcare Products B.V.) at 20 mg per day, or progesterone vaginal sustained-release gel (90 mg/dose, Crinone 8%, Merck Serono, Switzerland), one dose daily. In addition, two bags of the Chinese medicine Gushen Antai pills were taken daily.
Measurement of serum E2
Venous blood samples were collected on the starting day of Gn (Gn0), on day 4 of Gn (Gn4), on day 7 of Gn (Gn7), and on the hCG trigger day (HCG). The following ratios were then calculated: Gn4/Gn0, Gn7/Gn4, HCG/Gn0, HCG/Gn4, and HCG/Gn7, representing the ratios of serum estradiol levels on Gn4 to Gn0, Gn7 to Gn4, HCG to Gn0, HCG to Gn4, and HCG to Gn7, respectively.
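As a concrete illustration, the following sketch computes these ratios from per-patient E2 measurements. The DataFrame and its values are hypothetical stand-ins for the study's records.

```python
# Minimal sketch of the E2 ratio computation, with one row per
# patient and illustrative (not measured) values in pg/mL.
import pandas as pd

df = pd.DataFrame({
    "E2_Gn0": [40.0, 35.5],
    "E2_Gn4": [520.0, 310.0],
    "E2_Gn7": [1900.0, 850.0],
    "E2_HCG": [3200.0, 2100.0],
})

df["Gn4/Gn0"] = df["E2_Gn4"] / df["E2_Gn0"]
df["Gn7/Gn4"] = df["E2_Gn7"] / df["E2_Gn4"]
df["HCG/Gn0"] = df["E2_HCG"] / df["E2_Gn0"]
df["HCG/Gn4"] = df["E2_HCG"] / df["E2_Gn4"]
df["HCG/Gn7"] = df["E2_HCG"] / df["E2_Gn7"]
print(df.round(2))
```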
Pregnancy outcomes
Clinical pregnancy was defined as an intrauterine gestational sac with a fetal heartbeat detected by transvaginal ultrasonography after six weeks of gestation.

The primary outcome was live birth, defined as the birth of at least one child with breathing and a heartbeat, regardless of gestation duration.
Statistical analysis
Statistical analysis was conducted using SPSS version 26.0 (SPSS Inc., Chicago, USA). The Shapiro-Wilk test was employed to assess data normality. Due to skewed distributions, quantitative variables were expressed as the median (interquartile range, between the 25th and 75th percentiles), and Mann-Whitney U and Kruskal-Wallis tests were performed. Qualitative variables were expressed as frequencies and analyzed using the chi-square test. P≤0.05 was considered statistically significant.
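For orientation, the sketch below runs the same family of tests in Python with SciPy on simulated arrays; the study itself used SPSS 26.0, and the data here are synthetic.

```python
# Minimal sketch of the normality check and nonparametric tests,
# on synthetic data; the study used SPSS version 26.0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pregnant = rng.lognormal(3.0, 0.4, 160)       # e.g., an E2 ratio
not_pregnant = rng.lognormal(2.8, 0.4, 175)

# Shapiro-Wilk normality check (skewed data -> nonparametric tests).
w, p_norm = stats.shapiro(pregnant)

# Mann-Whitney U for two groups; Kruskal-Wallis across four groups.
u, p_mw = stats.mannwhitneyu(pregnant, not_pregnant)
q1, q2, q3, q4 = np.array_split(np.sort(pregnant), 4)
h, p_kw = stats.kruskal(q1, q2, q3, q4)

# Chi-square test on a 2x2 table of qualitative outcomes.
chi2, p_chi, dof, _ = stats.chi2_contingency([[90, 70], [60, 115]])
print(p_norm, p_mw, p_kw, p_chi)
```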
Groups A and B were defined according to the 25th, 50th, and 75th percentiles of each E2 ratio.
Pearson correlations were used to determine the correlations between quantitative parameters and the increase in E2 levels. Propensity scores were calculated using binary logistic regression analyses based on the following patient characteristics: female age, infertility duration, body mass index (BMI), infertility factors, Gn usage time, and Gn dosage. We calculated crude odds ratios (OR) and adjusted ORs with 95% confidence intervals (CI).
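A hedged sketch of the quartile grouping and odds-ratio estimation follows. It uses logistic regression from statsmodels with hypothetical column names, and is meant only to show the shape of the computation, not the study's exact model.

```python
# Minimal sketch: quartile grouping of an E2 ratio and odds ratios
# with 95% CIs from logistic regression, assuming a DataFrame `df`
# with hypothetical column names and a 0/1 outcome column.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Split Gn4/Gn0 into quartile groups A1..A4 (A4 = highest quartile).
df["group_A"] = pd.qcut(df["Gn4/Gn0"], 4, labels=["A1", "A2", "A3", "A4"])

# Dummy-code the quartiles with A4 as the reference category, as in
# the study, and adjust for baseline covariates.
dummies = pd.get_dummies(df["group_A"])[["A1", "A2", "A3"]].astype(float)
X = sm.add_constant(pd.concat([dummies, df[["age", "BMI", "gn_dosage"]]],
                              axis=1))
y = df["clinical_pregnancy"]  # 1 = positive, 0 = negative

fit = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())  # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```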
Study population
From April 2017 to July 2020, we retrospectively analyzed 335 patients who received in vitro fertilization-embryo transfer (IVF-ET) with antagonist regimens at the Affiliated Hospital of Shandong University of Traditional Chinese Medicine. Table 1 summarizes the characteristics of the study population for positive and negative clinical pregnancy and live birth. There were 160 patients with a positive clinical pregnancy and 175 with a negative one; 124 women had a live birth, whereas 211 did not.
The positive and negative groups were similar in BMI, infertility duration, Gn days, number of embryos transferred, endometrial thickness on the transplantation day, and baseline hormone levels.

However, patients negative for clinical pregnancy and live birth were older than positive ones (P = 0.001; P = 0.006) and had received a higher Gn dosage (P = 0.001; P = 0.005). Patients positive for clinical pregnancy and live birth had significantly better IVF-ET outcomes than negative ones (both P<0.001), including the numbers of oocytes, fertilizations, blastomeres, and embryos.
Serum estradiol levels and ratios
Initially, Table 2 compares the outcomes based on the E2 levels and ratios. Gn0 had no impact on the IVF outcome (P = 0.134; P = 0.122). However, elevated E2 levels following gonadotropin stimulation were correlated with higher clinical pregnancy and live birth rates (both P<0.05), particularly the E2 of Gn7 (P < 0.001; P = 0.001) and HCG (P < 0.001; P = 0.002). In early follicular growth, the estrogen increase rates were more statistically significant: following gonadotropin stimulation, higher serum estradiol ratios of Gn4/Gn0 (group A) and Gn7/Gn4 (group B) achieved more clinical pregnancies (P = 0.004; P = 0.001) and live births (P = 0.006; P = 0.002). In contrast, the estrogen increase rates in late follicle growth (HCG/Gn4, HCG/Gn7) were similar across groups, without reaching statistical significance.
Based on Table 2, the data were classified into four groups according to the quartiles of the serum estradiol ratios. As displayed in Table 3, group A exhibited no significant differences in female age, BMI, or infertility duration, whereas group B showed significant differences (both P<0.05). For group A, the higher the estrogen growth rate, the shorter the Gn time required and the lower the Gn dose used (both P<0.001), whereas for group B the opposite was true (both P<0.005). However, as the estrogen growth rate increased, both groups A and B produced superior IVF-ET outcomes in terms of the number of embryos harvested, blastomeres, embryos, clinical pregnancies, live births, and so on (both P<0.05). The ratio of group A increased as the ratio of group B decreased (both P<0.001); when the ratio of group B was larger, the corresponding ratio of group A was smaller. When the values of the two groups were higher, the E2 level on the hCG day was also higher (both P<0.001).
Effects of estradiol ratios of groups A and B on IVF-ET outcome

As indicated in Table 5, patients in groups A1 and B1 had significantly lower odds of clinical pregnancy and live birth than the reference groups A4 and B4.
Discussion
The role of E2 in IVF-ET is well established up to the hCG trigger day, when an estrogen level reaching 250 pg/mL indicates follicular maturation, while its role before that stage remains controversial. In this study, the serum estrogen levels at Gn0, Gn4, Gn7, and HCG were measured, and the ratios between them were calculated to evaluate the ovarian response, predict the treatment outcome, and guide the Gn dosage through the increase in estrogen level. In the statistical analysis, the estrogen levels of Gn4, Gn7, and HCG, as well as the ratios Gn4/Gn0, Gn7/Gn4, and HCG/Gn0, all had clinical guiding significance, and low growth levels and rates significantly reduced the clinical pregnancy and live birth rates. The increment coefficient of estrogen was observed at different stages of IVF-ET, and it was discovered that during Gn stimulation, the change in estrogen in the early and middle stages was also associated with the pregnancy outcome, which may be linked to the follicle growth mode. During the early follicular stage, a group of antral follicles is recruited and induced to develop, and the collected follicular fluid contains low estrogen levels; at this point, a high rate of estrogen growth may be linked to the number of recruited follicles. As the follicles grow, follicular granulosa cells increase in number and show aromatase activity, so the follicular fluid contains higher estrogen levels; at this stage, the increase in estrogen may be connected with follicular quality.
Second, no study has explored the relationship between the growth rate of E2 during Gn treatment and the prognosis with an antagonist regimen.
According to Table 2, the serum E2 ratios of groups A (Gn4/Gn0) and B (Gn7/Gn4) were statistically significant with respect to pregnancy outcomes. As a result, we chose these two indicators for further analysis and grouped them according to the 25th, 50th, and 75th percentiles. The chi-square test and a binary logistic regression model for pregnancy outcome revealed that patients with lower serum E2 ratios, in groups A1 and B1, had lower clinical pregnancy and live birth rates, with group B being more significant. This suggests that, in clinical medication, estrogen levels can be observed after four days of Gn treatment, and the medication can be adjusted when the increase in estrogen is not ideal, in order to obtain satisfactory efficacy.
Third, we analyzed the factors influencing the E2 ratios in the Gn stimulation cycle. According to the statistical analysis, the increase in estrogen levels during the middle stage of Gn stimulation (Gn7/Gn4, group B) was associated with age, infertility years, and BMI, but the increase during the early stage (Gn4/Gn0, group A) was not. These results imply that the patients' baseline characteristics greatly affect the rate of estrogen increase during the Gn7/Gn4 stage and may be a key factor affecting follicle quality. During follicular growth, a higher rate of estrogen increase resulted in more clinical pregnancies and live births, regardless of whether patients were grouped by A or B. However, the link between groups A and B was just the opposite: the estrogen ratio of group B decreased as the ratio of group A increased, and when estrogen growth was fastest in the early stage, it was slowest in the middle stage for the corresponding patients. An insufficient increase in estrogen in patients who recruit more follicles in the early stage could be due to an insufficient dose of Gn. When the association between group A and Gn usage was examined, it was discovered that the longer the Gn days and the higher the Gn dose, the slower the early-stage estrogen increase. This may be related to the patients' baseline condition: typically, patients with a high BMI or more antral follicles receive a higher Gn initiation dosage, although a lower estrogen growth rate is generally obtained. On the contrary, in group B, the longer the Gn days and the larger the required dosage of Gn, the faster the estrogen growth in the middle stage. This may indicate a regularity of Gn dosage during the follicular development process; the dosage during the Gn7/Gn4 stage is more critical, and this is also the time to increase the Gn dosage in the clinical setting.
Fourth, the main purpose of monitoring estrogen in IVF-ET is to assess the availability of an adequate quantity and quality of mature oocytes on the trigger day. This study revealed that the estrogen ratios during early and middle ovulation induction (Gn4/Gn0, Gn7/Gn4), as well as the estrogen level on hCG day (HCG), are significant for the IVF-ET outcome. However, the estrogen growth ratios in the late stages of Gn-stimulated follicular growth (HCG/Gn4 and HCG/Gn7) showed no significant differences. Tan et al. [30] discovered no statistically significant differences in pregnancy rates among three groups of patients who received hCG, respectively, on the day the leading follicle reached 18 mm, on the second day, and on the third day. At the moment of follicular maturation, the estrogen concentration may be more important than the estrogen growth ratio, necessitating a rethink of the role of estrogen in trigger-day selection.
Predicting outcomes in ART may allow treatment strategies to be adjusted earlier and protect patients from unnecessary physical and financial burdens throughout the treatment cycle. Monitoring is currently performed by repeated transvaginal ultrasonography or serum E2 measurement. We consider that ultrasound measures follicle growth, whereas serum E2 levels mainly reflect follicle function. As a result, estradiol plays a critical role, despite its relatively low predictive value as a single factor, and additional parameters are required to identify more sensitive biochemical markers that may predict the probability of achieving a clinical pregnancy before hCG administration. Our study demonstrated that accurate monitoring of the E2 ratios is also a key aspect supporting the prognosis of the IVF-ET outcome and dose adjustment. This provides the clinic with a new, simple, and convenient prediction method: by observing and calculating the range of the E2 increase ratio, we can guide the Gn dosage, predict the likelihood of pregnancy, and evaluate cycle cancellation.
However, because elevated serum estrogen levels may lead to a higher cancellation rate of fresh transfer, this study excluded patients who did not undergo fresh embryo transfer and instead underwent frozen embryo transfer. This is a limitation of this study, and including cumulative pregnancy rates would open a new research avenue.
Consent for publication
Written informed consent for publication was obtained from all participants.
Availability of data and materials
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare that they have no competing interests.
Funding
This study was supported by Natural Science Foundation of Shandong Province (ZR2020MH363).
Authors' contributions
Jian-Wei Zhang played a role in the conception, design, and review of the manuscript. Chun-Xiao Wei played a role in the conception and design, the analysis and interpretation of the data, and the drafting of the manuscript. Liang Zhang and Ying-Hua Qi played a role in the interpretation of the data and the drafting of the manuscript, and Cong-Hui Pang played a role in the analysis and interpretation of the data. All authors read and approved the final manuscript.

Tables

Table 1: Baseline characteristics for clinical pregnancy and live birth. Note: Values are given as median (range).
Abbreviations: BMI, body mass index; Gn days, gonadotropin days; Gn dosage, gonadotropin dosage; E2, estradiol; Gn0, serum estradiol on the starting day of gonadotropin; Gn4, serum estradiol on day 4 after gonadotropin stimulation; Gn7, serum estradiol on day 7 after gonadotropin stimulation; HCG, serum estradiol on the trigger day of human chorionic gonadotropin injection; Gn4/Gn0, the ratio of serum estradiol levels on Gn4 to Gn0; Gn7/Gn4, the ratio of serum estradiol levels on Gn7 to Gn4; HCG/Gn0, the ratio of serum estradiol levels on HCG to Gn0; HCG/Gn4, the ratio of serum estradiol levels on HCG to Gn4; HCG/Gn7, the ratio of serum estradiol levels on HCG to Gn7.

a. In the pairwise comparisons of group A for clinical pregnancy, the comparison between groups A1 and A3 was statistically significant (P = 0.013), as was that between groups A1 and A4 (P = 0.016).
b. In the pairwise comparisons of group A for live birth, the comparison between groups A1 and A3 was statistically significant (P = 0.011), as was that between groups A1 and A4 (P = 0.032).
c. In the pairwise comparisons of group B for clinical pregnancy, the comparison between groups B1 and B2 was statistically significant (P = 0.044), as was that between groups B1 and B4 (P = 0.002).
d. In the pairwise comparisons of group B for live birth, the comparison between groups B1 and B2 was statistically significant (P = 0.022), as were those between groups B1 and B3 (P = 0.033) and between groups B1 and B4 (P = 0.002).

Note: The independent variables also included Gn dosage, duration of infertility, BMI, female age, and cause of infertility. We defined groups A4 and B4 as the reference categories.
Investigation of the Influence of the As-Grown ZnO Nanorods and Applied Potentials on an Electrochemical Sensor for In-Vitro Glucose Monitoring
The influence of as-grown zinc oxide nanorods (ZnO NRs) on a fabricated electrochemical sensor for in vitro glucose monitoring was investigated. Direct growth of ZnO NRs was performed on a Si/SiO2/Au electrode using hydrothermal and sol-gel techniques at low temperatures. The structure consisting of a Si/SiO2/Au/GOx/Nafion membrane was considered the baseline, and it was tested under several applied potentials (0.1–0.8 V). The immobilized working electrode, with GOx and a Nafion membrane, was characterized amperometrically using a Keithley 2410 source meter and a Gamry electrochemical impedance potentiostat. The sensor exhibited a high sensitivity of ~0.468 mA/cm²·mM, a low detection limit on the order of 166.6 μM, and a fast, sharp response time of around 2 s. The highest sensitivity and the lowest limit of detection were obtained at 0.4 V after the growth of ZnO NRs. The highest net sensitivity, obtained after subtracting the sensitivity of the baseline, was on the order of 0.315 mA/cm²·mM. The device was tested with glucose concentrations from 1–10 mM, showing a linear response from 3–8 mM and saturating at higher glucose concentrations. Such devices can be used for in vitro glucose monitoring, since glucose changes can be accurately detected.
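To make the reported figures concrete, the sketch below shows how the sensitivity (the calibration slope over the linear 3–8 mM range) and the net sensitivity (the slope minus a baseline electrode's slope) could be computed from current-density data. The values are illustrative and chosen only to match the magnitudes quoted above; they are not the paper's measurements.

```python
# Minimal sketch of sensitivity extraction from a calibration curve,
# with illustrative (not measured) current densities in mA/cm^2.
import numpy as np

glucose_mM = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])       # linear range
j_zno_nr = np.array([1.40, 1.87, 2.34, 2.81, 3.27, 3.74])    # with ZnO NRs
j_baseline = np.array([0.46, 0.61, 0.76, 0.92, 1.07, 1.22])  # baseline

# Sensitivity = slope of current density vs concentration
# (mA/cm^2 per mM), from a linear least-squares fit.
sens, _ = np.polyfit(glucose_mM, j_zno_nr, 1)
sens_base, _ = np.polyfit(glucose_mM, j_baseline, 1)

# Net sensitivity attributable to the ZnO NRs, obtained by
# subtracting the baseline slope as described in the abstract.
net_sens = sens - sens_base
print(round(sens, 3), round(sens_base, 3), round(net_sens, 3))
```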
Introduction
Diabetes mellitus is one of the most common diseases across the globe, and one of the major causes of death and disability for hundreds of millions of people [1-3]. Electrochemical sensors based on amperometric measurements are the most common devices for glucose detection. However, the specificity and purity of the detected signals are still a challenge, due to several disturbances such as the effects of the working electrode's substrate and the applied voltages during the oxidation and reduction process [4,5]. The ability to monitor glucose accurately and regularly requires highly sensitive, highly selective, and reusable glucose sensors [6-10]. A ZnO NR-based glucose sensor was tested by Wei et al.; although the ZnO NRs were grown on a thin gold film, the influence of the gold was not investigated, and the device was tested under 0.8 V. The obtained sensitivity was on the order of 23.1 µA·cm⁻²·mM⁻¹, and it was not corrected by subtracting the influence of the substrate [11]. A glucose sensor based on ZnO NRs was investigated by Liu et al.; the ZnO NRs were grown on conductive indium tin oxide (ITO) and immobilized with glucose oxidase (GOx), and the obtained sensitivity was treated as the net sensitivity without subtracting the effects of the ITO [12]. The growth of ZnO NRs on different substrates was investigated by Nozaki et al.; gallium nitride (GaN) was chosen as the best substrate to fabricate a glucose sensor, and the fabricated device was tested under different glucose concentrations, but the influence of GaN on the device performance was not taken into consideration [13]. Recently, Rafiq Ahmad et al. fabricated an integrated field-effect-transistor-based biosensor and obtained a high output drain current once the applied gate bias voltage reached a high value [14]. The effects of surface morphologies were investigated by Jing et al., and a glucose sensor was fabricated from the synthesized ZnO nanorods; the influence of the substrate and the oxidation-reduction potential were not covered, and the sensor exhibited a sensitivity on the order of 2.08 µA·mM⁻¹·cm⁻² [15]. A non-enzymatic ZnO NR-based sensor was fabricated and characterized by Saranqi et al., with a focus on the effects of the enzyme and the use of ultraviolet (UV) light on the performance of the sensor [16]. An enzymatic glucose sensor based on AlGaN/GaN and ZnO nanorods was studied by Lee and Chiu; a photoelectrochemical passivation method was used to cover the synthesized ZnO nanorods from both sides, and the device showed a sensitivity of ~38.9 µA/mM over a glucose concentration range of 800 nM–25 mM [17].
In this work, we report on the fabrication and characterization of an electrochemical glucose sensor based on Si/SiO2/Au/ZnO NRs/GOx/Nafion that can be used for the in vitro monitoring of glucose. Cost-effective, environmentally friendly, and accurate hydrothermal and sol-gel techniques were utilized at low temperatures to grow the ZnO NRs. An electrochemical glucose sensor was fabricated from the as-grown ZnO NRs and was characterized, optimized, and tested at room temperature. Safe, biocompatible, and non-toxic equipment and tools were used for characterization. In this study, particular attention was paid to investigating the influence of the substrate and the applied potentials, which have not been scrutinized in detail in previously reported articles.
Preparation of the Substrate
Si/SiO2 wafers were diced into 0.5 × 0.5 cm² specimens and cleaned chemically and carefully in an ultrasonic bath. The samples were dipped in trichloroethane and sonicated for 10 min, then rinsed with DI water, dipped in ethanol, and sonicated for another 10 min. The same cleaning process was carried out with acetone and DI water. Following this, a spin-coating cleaning technique was utilized: the samples were placed on a substrate holder in the spin coater, and an acetone-methanol-acetone cleaning process was performed for 15 s with acetone, 15 s with methanol, and a further 15 s with acetone. The sample was dried with nitrogen while spinning for 15 s. A thin gold film with a 100 nm thickness was deposited using an e-beam evaporator at a deposition rate of 10 Å/s. The deposition was performed under vacuum at a pressure of ~10⁻⁷ Torr.
Zinc Oxide NRs Growth Procedure
A low-temperature hydrothermal growth method and a sol-gel solution were adopted to directly synthesize ZnO NRs on Si/SiO2/Au. The synthesis procedure consists of two major steps: preparation of the seed layer (sol-gel) and preparation of the growth solution. Zinc acetate dihydrate at a concentration of 0.5 M, equal to 1.1 g, was dissolved in 10 mL of 99% methoxyethanol, and the solution was stirred for 1 h. Following this, 0.25 M of ethanolamine (+99%) was added to the solution to ensure complete dissolution of the zinc acetate in the methoxyethanol. After an hour, the solution was placed in an ultrasonic bath for 15 min, then filtered and stored at room temperature. The growth solution was prepared by dissolving 0.05 M of zinc nitrate hexahydrate Zn(NO3)2·6H2O in 10 mL of deionized water, and 0.05 M of hexamethylenetetramine (CH2)6N4 in 10 mL of deionized water. Both solutions were placed on a magnetic stirrer for 1 h, and then the hexamethylenetetramine was added drop by drop to the zinc nitrate hexahydrate while stirring. The mixed solution was left for further purification; an ultrasonic bath was used to further prepare the growth solution, and finally a small filter was used to prevent any undissolved particles from remaining in the solution.
Three layers of the prepared sol-gel were spin-coated on top of the cleaned Si/SiO2/Au samples at 3000 rotations per minute (rpm). The seed layers were annealed for one hour at 110 °C, and the annealed samples were pasted onto a glass slide and immersed upside down in the growth solution in a 20 mL screw-cap glass vial, which was then placed in a furnace. The growth time and temperature were chosen to be 4 h and 85 °C, respectively. The samples were then cooled naturally and rinsed with deionized water three times to stop any further growth of ZnO NRs. Finally, they were dried at 300 °C for 15 min.
Working Electrode Surface Modification
After the successful growth of ZnO NRs using the method described above, the surface of the as-grown ZnO NRs was modified as follows. (A) The prepared structure was placed in phosphate buffer solution (PBS) with a pH of 7.4 and left in air to increase the absorbance of the electrode surface by generating a hydrophilic surface [18]. (B) Different concentrations of the enzyme glucose oxidase (GOx) were prepared and used to immobilize the working electrode; details regarding the optimization of the enzyme can be found in the Results and Discussion section. GOx at 40 mg/unit was used to immobilize the surface of the working electrode: glucose oxidase at a concentration of 40 mg/mL was dissolved in 0.01 M PBS, and 1 µL of the prepared GOx was dropped on top of the Si/SiO2/Au/ZnO NRs using a micropipette and left at 4 °C for 6 h [19]. (C) The GOx not adsorbed by the ZnO NRs was removed from the surface of the working electrode using a covalent method with a higher-ionic-strength solution: a phosphate buffer solution with a pH of 7.0 and 4.4 M, providing 80 mM of ionic strength, was used to remove the excess non-adsorbed GOx, and the working electrode was rinsed in the prepared solution five times, for 20 s each time. Removing the non-adsorbed GOx helps to increase the stability of the enzyme and extends the lifetime of the electrode from days to months. (D) The last modification step of the working electrode was completed by covering the surface of the electrode with a Nafion membrane: a 1 µL drop of the Nafion solution was deposited on the working electrode's surface, and the electrode was left to dry for 2 h at 4 °C. A schematic structure of the working electrode can be seen in Figure 1.
Material and Device Characterizations
The high density and good alignment of the as-grown ZnO NRs can be seen in Figure 2a,b. Figure 2a shows an SEM image of the as-grown ZnO NRs with a 400 nm focusing area and 200,004× magnification. From the image, one can see both the uniform distribution of the diameters of the nanorods and the three-dimensional active area that hosts the electrochemical reaction. The focus area of the SEM image in Figure 2b has a dimension of 3 µm, with a magnification of 34,988×. A uniform distribution of the seed layer at 3000 rpm resulted in a well-aligned, high-density set of nanorods on the surface of the working electrode, with a small variation in their diameters. This is the direct origin of the large active electrochemical reaction area and, consequently, the high output current. In addition, ZnO NRs provide a free path for electrons between the glucose and the surface of the electrode, and a high surface-to-bulk ratio. Glucose oxidase can be adsorbed over the large surface area provided by the high surface-to-volume ratio of the as-grown ZnO NRs [20].

For device characterization, all measurements were performed using a Keithley 2410 source meter and an electrochemical impedance analyzer (Gamry potentiostat, Gamry Instruments, Warminster, PA, USA), and the measurements were carried out in PBS at pH 7.4. The surface of the fabricated electrode was immobilized with glucose oxidase (GOx) as an enzyme mediator and covered with a nafion membrane to enhance device stability and increase ion exchange. An optimization procedure was performed to determine the amount of GOx needed for working electrode surface immobilization. The working electrode was modified with four different concentrations of GOx (10, 20, 30, and 40 mg/unit), and the highest current density was observed at 40 mg/unit. This is evidence that the coupling between ZnO NRs and GOx helps in several ways. For instance, ZnO NRs create a nano-incubative environment for GOx, so more enzyme can be adsorbed in the nanostructured reactive area [21]. Furthermore, ZnO NRs create multiple tunnels, so electrons can be easily transferred from the surface of the ZnO NRs to the surface of the electrode [22]. On the other side, GOx provides a suitable nano-environment for the glucose to react with the oxygen and produce hydrogen peroxide (H2O2) on the surface of the working electrode [23,24]. The optimization procedure can be seen in Figure 3, and 40 mg/unit of GOx was used to immobilize the working electrode for further investigations.
A platinum plate was used as the counter electrode, and the device was tested over a range of voltages, 0.1–0.8 V. To optimize the applied potential, a steady-state current study was performed to test the sensor's response time to changes in glucose concentration. A range of applied potentials, 0.1–0.8 V, was used in the optimization study. The fastest and sharpest response was observed at 0.4 V, and the steady-state current and the saturation level were reached within 2 s at different glucose concentrations. The optimization condition is presented in Figure 4.

Additionally, the effects of the substrate were studied by testing the structure Si/SiO2/Au/GOx/nafion (the baseline in this paper) without the growth of ZnO NRs. Following the growth of ZnO NRs on Si/SiO2/Au, and after immobilizing the working electrode with GOx and the nafion membrane, the current densities of the baseline were subtracted from the measured current densities of the fabricated device as a function of glucose concentration. Both the current densities for the device and the baseline were sensed at 0.4 V. The same procedure was applied to the obtained lower limit of detection.
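The baseline subtraction described above is a simple element-wise operation at the shared 0.4 V operating point. The sketch below illustrates it; the current-density arrays are hypothetical placeholders, since the raw values are only shown in the figures.

```python
import numpy as np

# Baseline subtraction at the 0.4 V operating point, as described above.
# Concentrations (mM) span the tested range; the current densities below are
# placeholder values, not data from the paper.
glucose_mM        = np.array([1, 2, 3, 4, 5, 6, 7, 8])
j_device_mA_cm2   = np.array([0.9, 1.3, 1.8, 2.3, 2.8, 3.2, 3.7, 4.0])    # hypothetical
j_baseline_mA_cm2 = np.array([0.4, 0.5, 0.6, 0.7, 0.75, 0.8, 0.82, 0.84])  # hypothetical

# Net response attributable to the ZnO NRs alone.
j_net = j_device_mA_cm2 - j_baseline_mA_cm2
for c, j in zip(glucose_mM, j_net):
    print(f"{c} mM -> net current density {j:.2f} mA/cm^2")
```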
The electrochemical sensor works because of the oxidation of hydrogen peroxide (H2O2) and the reduction of oxygen in the presence of the enzyme GOx, and the chemical reaction can be described as follows [25–27]:

Glucose + O2 →(GOx) Gluconic acid + H2O2    (1)

H2O2 → O2 + 2H+ + 2e−    (2)

Glucose oxidase helps in enhancing the specificity of the sensor by allowing only glucose to react with the oxygen. A nafion membrane is necessary for enhancing the exchange of ions between H2O2 and the surface of the working electrode (ZnO NRs), and for increasing the stability of the working electrode. The glucose reacts with oxygen and produces H2O2, and this H2O2 yields hydrogen, oxygen, and free electrons at the surface of the working electrode. Under the applied potentials, the generated electrons move to compensate for the lack of electrons in the oxygen molecules, since oxygen is reduced at the surface of the counter electrode. Those electrons can be collected as an output current that is proportional to the glucose concentration in the analyzed medium. The importance of ZnO NRs in this electrochemical reaction is clear, since the reaction takes place on the surface of the immobilized electrode. The electrochemical reaction area can be calculated as [nanorod surface area × density of nanorods × substrate area], which can be written as [2πrh × (No. of NRs/area) × substrate area], where h and r are the length and radius of the ZnO NRs, respectively. Increasing the density of the nanorods increases the active area; the nanostructure of the working electrode thus enlarges the reaction area several times over that of a bulk structure. In other words, the as-grown ZnO NRs present a three-dimensional area, while a bulk film only presents a two-dimensional one. In addition, there is a high affinity between ZnO NRs and GOx, since the as-grown ZnO NRs have a high isoelectric point of around 9.5 with a positively charged surface, whereas the enzyme GOx has an isoelectric point of only 4.2 with a negatively charged surface. A larger amount of the immobilizing enzyme can therefore be effectively adsorbed onto the three-dimensional nanostructured area, allowing more glucose to react with oxygen on the surface of the working electrode. This is one of the reasons why the fabricated electrochemical sensor can detect a wide range of glucose concentrations.
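To make the reaction-area formula above concrete, the sketch below evaluates it for a hypothetical nanorod geometry (the radius, length, and areal density are assumed values, as the text does not quote them) and reports the enhancement over a flat electrode of the same footprint.

```python
import math

# Electrochemical reaction area from the formula above:
#   A_reaction = 2*pi*r*h * (number of NRs per unit area) * substrate area
# The geometry below is a hypothetical placeholder, not measured values from the paper.
r_nm            = 50.0   # nanorod radius (assumed)
h_um            = 1.0    # nanorod length (assumed)
density_per_um2 = 60.0   # nanorods per square micrometre (assumed)
substrate_mm2   = 1.0    # electrode footprint (assumed)

sidewall_um2   = 2.0 * math.pi * (r_nm / 1000.0) * h_um  # per-nanorod sidewall area, um^2
substrate_um2  = substrate_mm2 * 1e6
a_reaction_um2 = sidewall_um2 * density_per_um2 * substrate_um2

# Enhancement over a flat (two-dimensional) electrode of the same footprint,
# counting sidewalls only.
print(f"reaction area: {a_reaction_um2:.3g} um^2, "
      f"enhancement: {a_reaction_um2 / substrate_um2:.1f}x")
```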
The effects of the baseline (Si/SiO2/Au/GOx/Nafion) on the sensitivity, LOD, and time response were studied under a range of applied potentials, 0.1–0.8 V, and with glucose concentrations ranging from 1–10 mM. In addition, the sensitivity, LOD, and time response of the Si/SiO2/Au/ZnO NRs/GOx/Nafion structure (the device) were calculated over the same range of voltages and the same range of glucose concentrations. The energy required to force charges to transfer from the working electrode to the counter electrode is known as the applied potential. At the interface between the electrode and the electrolyte there is an equilibrium potential, and thus no electrochemical reaction occurs. Once an external potential is applied, this equilibrium breaks, and an electrical current starts to flow, due to the oxidation reaction at the surface of the immobilized working electrode and the reduction process at the surface of the counter electrode [28]. In glucose electrochemical sensors, the specificity must be taken into consideration to increase the purity of the detected signal and to prevent other electroactive species in the tested medium from being oxidized.
Results and Discussion
Figure 5 shows the current density as a function of glucose concentration at different oxidation voltages for the device (Si/SiO2/Au/ZnO NRs/GOx/Nafion). It shows the influence of the ZnO NRs on the sensing area of the working electrode. The sensitivities were calculated as the slope of the linear fit, and the LOD can be calculated from 3 × δ/slope, where δ is the standard deviation of the intercept. As can be noted from the figure, the fabricated working electrode showed a steady-state current with different glucose concentrations under different applied potentials, and it saturated when the glucose concentration reached 8 mM. This is because of the higher affinity between the surface of the as-grown ZnO NRs and the enzyme. The nanostructured surface provided a three-dimensional hosting area in which more GOx could be adsorbed. In addition, the as-grown ZnO NRs provided a free path for electron transfer. The device showed its highest sensitivity at 0.4 V, ~0.468 mA/cm²·mM, with a detection limit of ~166.6 µM and a maximum detected current density of ~4 mA/cm². The fabricated device exhibited a linear change in current density with the addition of glucose to the PBS, from 3–8 mM. This strengthens the case for using the fabricated electrochemical sensor in real-time glucose detection.

Figure 6 presents the current density of the baseline under the same conditions. Here the working electrode is characterized without ZnO NRs grown on top of it, so the sensible area is two-dimensional instead of three-dimensional, which is the reason for the lower observed sensitivity. The maximum value of the steady-state current density was ~1.5 mA/cm²; comparing that with the maximum steady-state current density of ~4 mA/cm² after the growth of ZnO NRs on the surface of the working electrode gives a clear picture of the enhancement brought to the detected signal. Not only was the steady-state current density enhanced, but the fabricated electrochemical sensor also showed linear behavior at higher glucose concentrations. The fabricated working electrode showed saturation in the current density only after the glucose concentration passed the physiological-clinical levels for diabetes detection. Without the growth of ZnO NRs, the baseline showed a linear range only from ~1–4 mM, which places the working electrode outside the range needed for clinical glucose monitoring. The sensitivity of the baseline at 0.4 V was calculated to be 0.085 mA/cm²·mM, with a detection limit of ~384 µM.

The net sensitivity of the fabricated electrochemical sensor at a 0.4 V applied potential is 0.468 − 0.085 = 0.383 mA/cm²·mM, which emphasizes the influence that the as-grown ZnO NRs have on such devices. Since the surface of the working electrode without ZnO NRs has a lower surface-to-bulk ratio, less GOx can be adsorbed, which is why the baseline sensor saturated at lower concentrations of glucose. The baseline has an impact on the ultimate performance of the fabricated electrochemical sensor tested at different potentials. The influence of the baseline should be taken into consideration when designing electrochemical sensors, and the net performance of the fabricated sensor in terms of sensitivity, LOD, and time response is a crucial point to consider when applying such sensors in real-time glucose monitoring.
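The sensitivity and LOD extraction described above (slope of the linear calibration fit; LOD = 3 × δ/slope) can be sketched as follows. The calibration points are hypothetical placeholders chosen to echo the ~0.468 mA/cm²·mM figure purely for illustration, not values digitized from Figure 5.

```python
import numpy as np

# Sensitivity = slope of the linear calibration fit; LOD = 3*delta/slope,
# where delta is the standard deviation of the intercept (as defined above).
# The calibration points below are hypothetical, not data from Figure 5.
conc_mM  = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])          # linear range from the text
j_mA_cm2 = np.array([1.65, 2.10, 2.62, 3.05, 3.55, 4.00])    # hypothetical current densities

# Least-squares fit with covariance, to get the intercept's standard deviation.
coeffs, cov = np.polyfit(conc_mM, j_mA_cm2, deg=1, cov=True)
slope, intercept = coeffs
delta_intercept = np.sqrt(cov[1, 1])

sensitivity = slope                      # mA/(cm^2 * mM)
lod_mM = 3.0 * delta_intercept / slope   # limit of detection, mM

print(f"sensitivity = {sensitivity:.3f} mA/cm^2/mM")
print(f"LOD         = {lod_mM * 1000:.0f} uM")
```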
Figure 7a shows the current density of the device and the baseline as a function of glucose concentration at 0.4 V. The fabricated working electrode showed consistent changes in the current density when glucose concentrations were added or changed. After the applied potential passes the equilibrium potential between the working and counter electrodes, H2O2 is produced as the electrochemical reaction product and is oxidized on the surface of the as-grown ZnO NRs, as shown in Equation (2). In Figure 7a, one can see only slight changes in the current density with increasing glucose concentration for the working electrode without the growth of ZnO NRs. After the growth of ZnO NRs on the surface of the working electrode, the fabricated sensor exhibited clear changes in current density with the addition of glucose to the PBS. In addition, the linear glucose range was extended to 3–8 mM, as can be seen in Figure 7b.

The time response of the electrochemical sensor with and without the growth of ZnO NRs, tested at 0.4 V for glucose concentrations from 1–10 mM, is shown in Figure 8. Glucose concentrations were increased by adding 1 mM of glucose to the PBS every 50 s, with continuous stirring to give the glucose enough time to dissolve in the solution. The fabricated Si/SiO2/Au/ZnO NRs/GOx/Nafion device showed fast and sharp step-change responses of ~2 s, with a much lower noise level than the baseline, especially at high glucose concentrations. Furthermore, the average increase in the current density with each new glucose concentration was around 0.45 mA/cm² for the working electrode with ZnO NRs, whereas the baseline showed an average increase of only ~0.1 mA/cm² for the same glucose concentrations. The fast response and low-noise behavior of the device are direct results of the free path of electrons from the surface of the as-grown ZnO NRs to the gold surface. These results are also associated with the high surface-to-volume ratio created by the as-grown ZnO NRs on the surface of the electrode, which can accelerate the oxidation-reduction reaction.
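One way to read the ~2 s response time off an amperometric step trace like Figure 8 is to measure how long the current takes to cover 90% of the jump to its new steady state after each glucose addition. The sketch below applies this to a synthetic trace; the 90% criterion and the trace itself are assumptions for illustration, not the paper's method or data.

```python
import numpy as np

def step_response_time(t, i, t_step, rise_fraction=0.9):
    """Time after `t_step` for current `i` to cover `rise_fraction` of the
    jump from the pre-step baseline to the post-step steady state."""
    before = i[t < t_step].mean()             # steady level before the addition
    after = i[t > t_step + 20.0].mean()       # settled level well after the addition
    target = before + rise_fraction * (after - before)
    post = (t >= t_step) & (i >= target)
    return t[post][0] - t_step

# Synthetic amperometric trace: an exponential step at t = 50 s (tau ~ 1 s).
t = np.linspace(0, 100, 2001)
i = 1.0 + 0.45 * (1 - np.exp(-(t - 50.0) / 1.0)) * (t >= 50.0)

print(f"response time ~ {step_response_time(t, i, t_step=50.0):.1f} s")  # ~2.3 s for tau = 1
```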
Cyclic voltammetry measurements were performed using the Gamry potentiostat to analyze the oxidation and reduction reactions between the working and counter electrodes, with and without glucose, as shown in Figure 9a,b. The tests were conducted at different scan rates, 50, 100, and 200 mV/s, from −1 to 1 V. The anodic oxidation peak and the cathodic reduction peak were not apparent in the absence of glucose in the PB solution, indicating that the device has good chemical stability, as can be seen in Figure 9a. Coating the working electrode with a nafion membrane helps to create a stable electrochemical sensor that is sensitive only to changes of glucose in the electrochemical microenvironment. Figure 9b shows the cyclic voltammogram after an addition of 2 mM of glucose, with a sweep voltage from −1 to 1 V. The anodic oxidation peak and the cathodic reduction peak are proportional to the scan rate, as can be seen from the figure. The reason both peaks appear in the negative part of the scan is that the electrochemical oxidation of hydrogen peroxide can be reversible. In other words, H2O2 can be oxidized to hydrogen, oxygen, and a free electron, and the hydrogen and oxygen can be reduced back to H2O2 in the reverse direction. In this case, the oxidation of H2O2 can be stronger than the reduction of O2, and the reaction can be called a forward reaction, or vice versa.

Figure 10 presents the stability and reproducibility of the working electrode, characterized at 0.4 V. It can be seen that the degradation of the sensed current is around 11% after ten days. The maximum current density achieved on day one was 3.99 mA/cm², and the working electrode exhibited a current density of around 3.86 mA/cm² on day 10. This demonstrates the high stability of the working electrode and is a positive indicator of the effectiveness of using a covalent immobilization method with a higher ionic-strength solution. Nevertheless, the working electrode immobilized with the enzyme GOx should still be stored in a fridge at 4 °C after each use. In fact, this triggers the need for a non-enzymatic glucose sensor to eliminate the reproducibility problem and to increase the stability and lifetime of the sensor.
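As a minimal sketch of how the anodic and cathodic peak currents in a single CV cycle like Figure 9b could be located (maximum current on the forward sweep, minimum on the reverse sweep), consider the following. The voltammogram array is synthetic, not digitized from the figure.

```python
import numpy as np

def cv_peaks(potential_V, current_mA):
    """Return (anodic peak, cathodic peak) of a single CV cycle:
    the maximum current on the forward sweep and the minimum on the reverse."""
    turn = np.argmax(potential_V)                  # index where the sweep reverses
    fwd_i = np.argmax(current_mA[:turn + 1])       # anodic (oxidation) peak
    rev_i = turn + np.argmin(current_mA[turn:])    # cathodic (reduction) peak
    return (potential_V[fwd_i], current_mA[fwd_i]), (potential_V[rev_i], current_mA[rev_i])

# Synthetic -1 V -> 1 V -> -1 V cycle with a redox couple centred at negative
# potentials, mimicking the peak positions discussed above.
e = np.concatenate([np.linspace(-1, 1, 400), np.linspace(1, -1, 400)])
i = 0.2 * e + 0.8 * np.exp(-((e + 0.25) ** 2) / 0.01)
i[400:] = 0.2 * e[400:] - 0.8 * np.exp(-((e[400:] + 0.40) ** 2) / 0.01)

anodic, cathodic = cv_peaks(e, i)
print(f"anodic peak   {anodic[1]:+.2f} mA at {anodic[0]:+.2f} V")
print(f"cathodic peak {cathodic[1]:+.2f} mA at {cathodic[0]:+.2f} V")
```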
To examine the specificity and selectivity of the fabricated enzymatic glucose sensor, uric acid and ascorbic acid, the main electroactive species in the blood, were tested alongside glucose. Figure 11 shows the amperometric response of the sensor to the different electroactive species. The amount of glucose in the blood is around 30–50 times higher than the concentrations of both uric and ascorbic acid [6,29]. Nevertheless, three different ratios of uric acid/glucose, 1/10, 2/10, and 3/10, and three different ratios of ascorbic acid/glucose, 1/10, 2/10, and 3/10, were prepared in order to investigate and prove the selectivity of the sensor. It appears that ascorbic acid at the higher ratios, (AA/G) 2/10 and (AA/G) 3/10, has a slight influence on the performance of the device. This could be because of leakage through the nafion membrane, which caused a distortion in the detected signal. The advantages of the fabricated electrochemical sensor, compared to other published sensors, are summarized in Table 1.
Table 1. Comparison between the fabricated glucose sensor and other published sensors in terms of sensitivity, applied oxidation-reduction potential, linear range, and limit of detection.
Figure 1. A schematic structure of the immobilized working electrode of the fabricated electrochemical glucose sensor.
Figure 2. (a) SEM image of the directly grown ZnO NRs on Si/SiO2/Au at focusing area 400 nm and magnification 200,004×. (b) SEM image of the directly as-grown ZnO NRs on Si/SiO2/Au at focusing area 3 µm and magnification 34,988×.
Figure 3. The current density as a function of glucose oxidase concentrations, in order to optimize the concentration of the enzyme GOx for the working electrode immobilization condition.

Figure 4. Time response of the steady-state current as a function of different applied potentials as a part of the optimization procedure.
Figure 5. Current density of Si/SiO2/Au/ZnO NRs/GOx/Nafion as a function of different glucose concentrations at different applied potentials.
Figure 6. Current density of Si/SiO2/Au/GOx/Nafion (baseline) without the growth of ZnO NRs as a function of different glucose concentrations at different applied potentials.
Figure 7. (a) Current density of Si/SiO2/Au/ZnO NRs/GOx/Nafion (device) and Si/SiO2/Au/GOx/Nafion (baseline) as a function of different glucose concentrations at 0.4 V; (b) calibration line of Si/SiO2/Au/ZnO NRs/GOx/Nafion starting from 3 mM glucose concentration and ending at 8 mM glucose concentration.
Figure 10. The degradation and reproducibility of the current density for the fabricated working electrode, characterized at 0.4 V.
Figure 11. The amperometric response of the glucose sensor with different electroactive analytical solutions, uric acid and ascorbic acid, to examine the selectivity of the electrochemical sensor.