Android Application for Automatic Door Control System using Body Temperature Measuring Sensor
One way for people to prevent the transmission of Covid-19 in indoor and outdoor activities is through body temperature detection. People whose body temperature exceeds the measurement limit are prohibited from mingling with others because they have a high probability of being infected with the virus and transmitting it to other people. Manual checks in public places expose officers to the virus, which they can then spread to other visitors more quickly. It is therefore necessary to develop an automatic body temperature measurement system that displays the measurement results on a screen and operates an automatic door as a further prevention step; the temperature data can be stored in a database to support sound decisions for preventing the spread of Covid-19 at that location. The implementation stages of the activities include: Data Collection; Preparation of Technology Design and Purchase of Materials; Product Manufacturing; Product Testing; and Product Evaluation. The Android application for an automatic door control system using a body temperature measurement sensor can be developed into an even better tool. Future development of this prototype will add a PIR sensor and system modifications. The PIR sensor will improve the reliability of the tool by recording only human body temperatures, minimizing data errors. The system modifications will allow initial examination assessments based on body temperature and will add measurements of body weight and height to complete the assessment data.
I. INTRODUCTION
Coronavirus Disease-19 (Covid-19) is a disease caused by the coronavirus that first appeared at the end of 2019 in Wuhan, China, and is currently causing a pandemic throughout the world. Fever is one of the symptoms of Covid-19. To support human survival during the spread of Covid-19, technological innovation in various fields has grown rapidly. One example is research on an Automatic Body Temperature Measuring System Based on the Internet of Things for Health Protocols, in which a body temperature measuring device was created using the ESP8266 WiFi module, GY-906 MLX90614 sensor, ultrasonic sensor, OLED LCD, and buzzer [1].
Places such as offices, shopping centers, and public facilities are the locations with the highest rates of Covid-19 transmission. Therefore, preventive measures are needed, as regulated in the health protocols: washing hands, wearing masks, and maintaining distance [2]. One way for people to prevent the transmission of Covid-19 in indoor and outdoor activities is through body temperature detection. People whose body temperature exceeds the measurement limit are prohibited from mingling with others because they have a high probability of being infected with the virus and transmitting it to other people. Manual checks in public places expose officers to the virus, which they can then spread to other visitors more quickly. It is therefore necessary to develop an automatic body temperature measurement system that displays the measurement results on a screen and operates an automatic door as a further prevention step; the temperature data can be stored in a database to support sound decisions for preventing the spread of Covid-19 at that location.
During the current Covid-19 pandemic, entrance doors to public places such as shopping centers, hospitals, and other public facilities that do not yet use automatic doors cause visitors to come into physical contact with other visitors, even if not directly. Similarly, temperature-check officers who monitor each visitor at close range can come into direct physical contact with them. Moreover, temperature monitoring by officers does not record the average body temperature of visitors, data that could help assess various conditions. In this work, a body temperature recording tool is created that is equipped with an automatic sliding door, and the recorded results can be accessed from a mobile device.
II. RESEARCH METHODS
The implementation stages of the activities are shown in Figure 1.

Data Collection
Collect library data, read and take notes, and organize the research materials completely and comprehensively, from the tools and materials required through to the manufacture of the tool/product for the system design to be completed.
Preparation of Technology Design and Purchase of Materials
Carry out the system design and purchase the tools and materials needed to manufacture the tool/product.
Product Manufacturing
Realize the system design and assemble the tool according to the designs that were created, using the purchased materials, so that it conforms to the designs obtained from the literature study.
Product Testing
Once the system is deemed sufficient at the manufacturing stage, the next step is to test all product functions so that the evaluation stage can be carried out.
Product Evaluation
Provide an assessment of the products that have been tested in order to correct the deficiencies identified during product testing.
Collection of supporting materials and tools
The programming language used is C++ with the Arduino IDE. The supporting components used in the Android Application for the Automatic Door Control System using Body Temperature Measuring Sensors include a NodeMCU ESP8266, 16x2 I2C LCD, MLX90614 temperature sensor, infrared sensor, GM861 barcode/QR reader, relay module, buzzer, servo motor, LED, 12 V DC adapter, male and female jumper cables, gear and sliding door rail, and acrylic. Alongside the hardware assembly, the application is created by coding the system in PHP together with a MySQL database, first on localhost with a XAMPP server; once the system is complete, the code and database are uploaded to hosting so that the API code can be run to connect the application with the Arduino device. The final step is to build the interface and produce an Android application with the .apk extension.
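As an illustration of the Arduino-to-API link described above, the following sketch builds the kind of query string a NodeMCU might send to the hosted PHP API when a reading is recorded. The endpoint path and parameter names are our illustrative assumptions, not taken from the paper, and the function is written in plain C++ so the formatting logic can be tested on its own:

```cpp
#include <string>
#include <sstream>
#include <iomanip>

// Build a hypothetical GET query for recording a temperature reading.
// "record_temp.php", "user_id", and "temp" are assumed names; an empty
// user ID stands in for an unregistered (general) user.
std::string buildRecordQuery(const std::string& host,
                             const std::string& userId,
                             double tempCelsius) {
    std::ostringstream url;
    url << "http://" << host << "/api/record_temp.php"
        << "?user_id=" << (userId.empty() ? std::string("guest") : userId)
        << "&temp=" << std::fixed << std::setprecision(1) << tempCelsius;
    return url.str();
}
```

On the device itself this string would be passed to the HTTP client of the ESP8266 Arduino core; the formatting shown here is independent of that.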
Program Design and Realization

a) Prototype Overview
The Android application used to control automatic doors via a body temperature sensor aims to reduce direct physical contact and prevent the spread of Covid-19 through droplets left on conventional door handles. This technology can be used in business centers such as shops, offices, and restaurants located in indoor areas that are not exposed to direct sunlight. The figure below shows the Arduino component design, created using the Arduino programming language, C++. Kodular code is used to create a web application so that the system can be used from an Android-based application. The functions of this system are: 1) provide comfort for temperature monitoring officers and visitors; 2) reduce physical contact between visitors and temperature monitoring officers with a more effective system; 3) serve as a solution to reduce physical contact during the Covid-19 pandemic; 4) as a further prevention step, store temperature data in a database to help make the right decisions in preventing the spread of Covid-19. In its implementation, users are divided into two groups: general users and registered users. Registered users are users who already have an account on the system and can be thought of as patients. When a user checks their temperature, they first scan their QR code to be detected as a registered user; if they do not scan, they are treated as a general user. Next, the infrared sensor detects an object, such as a human hand, which activates the MLX90614 temperature sensor. The reading is displayed on the 16x2 LCD screen and the data are sent to the server automatically.
If the temperature read is normal, between 36 and 37 °C, the servo motor with its gear is activated, sliding the door open automatically, and the LED lights up for a few seconds as a sign. If the recorded temperature is abnormal, two conditions can occur: if registered user data exist, the door will still open; for a general user, the servo motor is not activated and the LED flashes several times over a few seconds as a warning signal.
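A minimal sketch of this decision logic, under our reading of the text, is shown below. The 36-37 °C normal range comes from the paper; the assumption that a registered user is still admitted with an abnormal reading while a general user only triggers the LED warning is our interpretation, not code from the paper:

```cpp
// Door-control decision for one temperature check.
struct DoorAction {
    bool openDoor;    // drive the servo to slide the door open
    bool ledWarning;  // flash the LED as a warning signal
};

DoorAction decideDoorAction(double tempC, bool isRegisteredUser) {
    const bool normal = (tempC >= 36.0 && tempC <= 37.0);
    if (normal)           return {true,  false};
    if (isRegisteredUser) return {true,  false};  // abnormal but registered
    return {false, true};                         // abnormal general user
}
```

In the actual device the two flags would map onto the servo and LED pins of the NodeMCU; keeping the decision in a pure function makes it easy to test separately from the hardware.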
The temperature recordings processed for the user are displayed in the system as a temperature graph and a table with the temperature status for the last 10 recordings. The system also provides a QR code image that registered users can use when recording. Additionally, admins are responsible for monitoring system data: based on user status and type, the number of people checking their temperature can be tracked. The admin can also monitor all user data registered for weighing and create master temperature status data based on human body temperature categories.
c) Advantages and Predicted Benefits for Users
The advantages of this system are: 1) it can be one of the newest options for implementing the Covid-19 health protocol, namely checking body temperature with a more effective system; 2) it provides comfort for temperature monitors and visitors; 3) it can reduce physical contact between visitors and temperature monitors; 4) it creates security and comfort for all visitors; 5) the temperature recordings stored in the database are used to provide daily reports of the recording results, from which the average recorded temperature as well as the date and time of each recording can also be obtained.
The predicted benefits for users are: 1) users can find out how high their recorded body temperature is, its status, and the time of recording; 2) comfort for temperature monitors and visitors; 3) reduced physical contact between visitors and temperature monitors; 4) security and comfort for all visitors.
Testing and Analysis of Results

1) Test Results
The results of this prototype trial show that, under Black Box testing, the prototype functions at a level of 100%, with the results recorded in Table 1. In this trial, temperature recording with the MLX90614 temperature sensor was carried out in conjunction with a thermogun; the accuracy of the MLX90614 readings was found to be 94%, with the recordings given in Table 2.

2) Development Potential
An Android app that uses body temperature sensors to control automatic doors can become an even better tool. This prototype will be developed by adding a PIR sensor and modifying the system. The PIR sensor increases the reliability of the device by detecting only human body temperature, reducing data errors. The system modifications allow an initial examination assessment based on body temperature, and weight and height measurement systems will be added to complete the assessment data.
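The paper does not state the exact formula behind the reported accuracy percentage, but one plausible definition is the mean of the per-reading relative accuracies of the sensor against the thermogun reference. The sketch below implements that definition; the numbers used in any example are illustrative, not the paper's measurements:

```cpp
#include <cmath>
#include <cstddef>

// Mean percentage accuracy of sensor readings against a reference device:
// for each pair, 100 * (1 - |sensor - reference| / reference), averaged.
// One plausible definition; the paper's exact formula is not given.
double meanAccuracyPercent(const double* sensor, const double* reference,
                           std::size_t n) {
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sum += 100.0 * (1.0 - std::fabs(sensor[i] - reference[i]) / reference[i]);
    }
    return sum / static_cast<double>(n);
}
```

With identical readings this yields 100%; a sensor reading of 34.2 °C against a 36.0 °C reference yields 95% for that pair, so a 94% figure corresponds to a mean absolute deviation of roughly 6% of the reference values.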
VI. CONCLUSION
An Android application that utilizes body temperature sensors for an automatic door control system has been completed. This prototype uses a NodeMCU microcontroller, the C++ programming language, PHP to create the web application, and Kodular to run the application on Android phones. With 95% accuracy compared to a thermogun, the MLX90614 temperature sensor can be used as a tool to measure body temperature. Test results on the prototype show that all applications and tools in the Android application for the automatic door control system work with 100% functionality. Therefore, this system can be applied in busy areas such as shopping centers, offices, and restaurants located in indoor areas that are not exposed to direct sunlight.
Figure 2. Component Circuit Design and Programming Code
Figure 3. API Code and Database Connection
Anesthetic management using high-flow nasal cannula therapy during cardiac catheter examination of a neonate with hypoplastic left heart syndrome
Background
Sedation during cardiac catheter examination in neonates with complex congenital heart disease is challenging, as even the slightest change in the circulatory or respiratory status can lead to hemodynamic collapse. Here, we report a case wherein we achieved adequate sedation with a high-flow nasal cannula (HFNC) for catheter examination in a neonate with a congenital cardiac anomaly.

Case presentation
An 11-day-old boy with hypoplastic left heart syndrome was scheduled for a cardiac catheter examination prior to the Norwood procedure. He underwent bilateral pulmonary artery banding (PAB) on day 1 and was receiving dobutamine, milrinone, alprostadil, and dexmedetomidine in addition to air and nitrogen insufflation via HFNC, which was applied following extubation on day 3 and nitrogen therapy on day 6 owing to persistent pulmonary overcirculation symptoms (tachypnea and low arterial blood pressure) despite bilateral PAB. A catheter examination was performed on day 11 with careful monitoring of expired carbon dioxide and observation of chest wall motion. Adequate sedation was provided with supplemental midazolam and fentanyl along with HFNC without tracheal intubation.

Conclusions
The findings from this case suggest that HFNC is a safe and effective tool for oxygenation during cardiac catheter examination under sedation in neonates.
Background
The effectiveness of high-flow nasal cannula (HFNC) therapy for post-extubation respiratory support in neonates is well known [1,2]. However, there are few reports of HFNC use in the cardiac catheter examination of neonates with complex congenital heart disease. Hypoplastic left heart syndrome (HLHS) is one of the most severe congenital heart diseases [3,4]. Patients with HLHS require strict control of various hemodynamic factors, including arterial pH, partial pressure of oxygen and carbon dioxide, and respiratory conditions such as airway pressure and respiratory system compliance that have an impact on pulmonary vascular resistance (PVR).
Here, we successfully managed anesthesia for the cardiac catheter examination of a newborn patient with HLHS without tracheal intubation using HFNC and careful monitoring. Written informed consent for this publication was obtained from the patient's family.

In the pediatric intensive care unit (PICU), he received continuous alprostadil and dobutamine infusion to maintain systemic blood flow using a peripherally inserted central catheter (PICC) secured in the right upper limb. Balloon atrial septostomy was not performed because there were no signs of a restrictive patent foramen ovale. On day 1, bilateral pulmonary artery banding (PAB) was performed. HFNC therapy (Optiflow Junior 2™, prong size M, Fisher and Paykel Healthcare, Auckland, New Zealand) was applied following extubation on day 3, and nitrogen (N2) therapy was started on day 6 owing to persistent pulmonary overcirculation symptoms (tachypnea and low arterial blood pressure) despite bilateral PAB.
Considering the risk of cardiovascular collapse (e.g., hypotension due to pulmonary overcirculation) associated with general anesthesia and positive pressure ventilation and potential complications due to prolonged mechanical ventilation, after consultation with the surgeon, we decided that HFNC would have significant advantages from scientific and clinical perspectives and selected this modality for our patient.
Anesthetic process
For the management of anesthesia without intubation using HFNC, the following two strategies were performed to secure stable spontaneous ventilation: (1) the patient's head was extended, with a shoulder roll used to maintain the upper airway and (2) the patient underwent capnography through a sampling line connected to an 8-Fr suction tube tip inserted in the pharynx to detect any airway obstruction or respiratory depression during the procedure.
Midazolam (total 0.27 mg/kg) and fentanyl (total 4.1 μg/kg) were titrated to safely increase the sedation level while considering the respiratory rate, chest wall movement, and SpO2 level. Following sufficient sedation and stable spontaneous ventilation (respiratory rate 30-40 breaths/min), the surgeon anesthetized the insertion site with 1% lidocaine. A 5-Fr sheath was placed in the right femoral vein, followed by the examination. The arterial blood gas analysis at treatment initiation revealed a pH of 7.378, PaCO2 of 44.9 mmHg (EtCO2, 42 mmHg), base excess of 0.9, and PaO2 of 48.3 mmHg. Cardiac catheter examination showed the following: Qp/Qs, 1.72; Rp, 2.39 Wood units·m²; and Rp/Rs, 0.14. No body movement was observed during the examination. Cardiac catheterization was completed without upper airway obstruction, apnea, or substantial oxygen desaturation (Fig. 1). Approximately 5 h after the examination, the respiratory rate returned to 80-100 breaths/min. Respiratory support with HFNC therapy was continued in the PICU after the catheter examination until the day of the Norwood procedure.
Discussion
Neonates with complex cardiac malformations have poor tolerance for cardiac preload and afterload; therefore, slight deviations in systemic vascular resistance, PVR, and fluid balance from the normal safe range can lead to fatal cardiac collapse.
HFNC use during cardiac catheterization under spontaneous breathing in neonates with HLHS prior to surgical repair is advantageous in the following ways. First, HFNC provides a fresh gas flow rate higher than the patient's inspiratory flow and therefore secures a constant actual FiO2 regardless of the patient's breathing pattern [5]; this must remain constant during cardiac catheterization because PVR may otherwise be affected. Second, HFNC reduces the anatomical dead space through the washout effect in the upper airway [6] and is expected to partially offset the impact of anesthesia on carbon dioxide retention, which may affect PVR and the patient's breathing [7-9]. Third, the positive pressure generated by HFNC can maintain the patency of the upper airway and avoid the large intrathoracic negative pressure and increased venous return blood volume caused by strong respiratory effort against increased airway resistance under sedation. Fourth, HFNC has a continuous positive airway pressure effect and can be expected to maintain peripheral airway patency under conditions of weak spontaneous breathing during procedures, which can prevent alveolar collapse [2,10] and increased PVR. In the present case, the patient had pulmonary overcirculation before the examination and decreased pulmonary compliance due to alveolar edema and atelectasis. Under these conditions and without HFNC, induction of deep sedation may result in decreased ventilation, additional atelectasis, carbon dioxide retention, and acidemia, eventually leading to increased PVR. We used HFNC with the aim of maintaining the pre-examination PVR throughout the examination. Although the respiratory rate decreased from 80 to 30 breaths/min due to sedation during the examination, we were able to maintain oxygenation and PaCO2 by using HFNC. The lack of significant differences in vital signs or blood gas analysis before and during the examination indicated that we successfully minimized changes in PVR.
Therefore, HFNC therapy allows even neonates with pulmonary overcirculation to maintain stable spontaneous breathing and hemodynamics during a sedated examination. General anesthesia with intubation and mechanical ventilation was avoided for several reasons. First was the risk of decreased PVR due to inappropriate manual ventilation during induction of anesthesia, which could cause further pulmonary overcirculation and result in hypotension and shock. Second was the risk of decreased cardiac output due to reduced preload caused by positive pressure ventilation. Third was the high risk of myocardial ischemia due to reduced coronary artery blood flow in patients with HLHS. Fourth was the risk of pneumonia if weaning from mechanical ventilation became difficult.
Despite the difficulty of performing capnography in non-intubated patients, the insertion of a sampling line in the pharynx can detect airway obstruction and apnea. We found little disparity between EtCO2 and PaCO2 in this case, indicating that a sampling line in the patient's pharynx may also be used to predict poor minute ventilation in patients with spontaneous ventilation. The patient required strict respiratory control, so we used capnography as a general monitoring tool for respiratory depression and arrest [11]. We also measured PaCO2 using blood gas analysis as needed.
The key point in the successful management of this case was the safe induction of anesthesia by administering anesthetic agents while carefully observing chest wall movement and respiratory rate to confirm effective spontaneous respiration. The choice and dosage of sedatives and analgesics require careful consideration when performing an examination or treatment with spontaneous breathing. Our patient received continuous high-dose dexmedetomidine in the PICU. We administered dexmedetomidine throughout the examination because it is considered safe for use as a sedative without any hemodynamic or respiratory effects during cardiac catheterization [12]. Furthermore, dexmedetomidine reportedly does not affect PVR when used to sedate children with pulmonary hypertension [13]. We considered that sedation with dexmedetomidine alone would be inadequate; therefore, we added midazolam and fentanyl (agents familiar to us), because they are less associated with cardiac depression and can be antagonized with reversal agents in case of unexpected apnea.

Fig. 1 Hemodynamic changes in the patient during cardiac catheterization upon A entry into the operating room, B initiation of examination/vascular puncture, C insertion of the 5-Fr sheath/arterial blood gas analysis, D completion of the examination, and E exit from the operating room. sABP, systolic arterial blood pressure; dABP, diastolic arterial blood pressure
To summarize, we successfully managed a neonate with a complex cardiac malformation under intravenous anesthesia using HFNC therapy during a non-intubated cardiac catheter examination.
Resumption of mass accretion in RS Oph
The latest outburst of the recurrent nova RS Oph occurred in 2006 February. Photometric data presented here show evidence of the resumption of optical flickering, indicating re-establishment of accretion by day 241 of the outburst. Magnitude variations of up to 0.32 mag in V-band and 0.14 mag in B on timescales of 600-7000 s are detected. Over the two week observational period we also detect a 0.5 mag decline in the mean brightness, from V~11.4 to V~11.9, and record B~12.9 mag. Limits on the mass accretion rate of ~10^{-10} to 10^{-9} Msun/yr are calculated, which span the range of accretion rates modeled for direct wind accretion and Roche lobe overflow mechanisms. The current accretion rates make it difficult for thermonuclear runaway models to explain the observed recurrence interval, and this implies average accretion rates are typically higher than seen immediately post-outburst.
INTRODUCTION
Recurrent novae (RNe) are interacting binary systems in which multiple nova outbursts have been observed. Both thermonuclear runaway and accretion models have been hypothesised as the outburst mechanism in these systems (Kenyon 1986). While thermonuclear runaway is generally the preferred mechanism, there are problems with the high accretion rate required given the short outburst recurrence interval. The recurrent nova RS Ophiuchi has undergone six recorded outbursts in the last 108 years (Oppenheimer & Mattei 1993), the most recent occurring on 2006 February 12, which we take as day 0 (Hirosawa et al. 2006). RS Oph consists of a white dwarf primary accreting material from a red giant secondary within a nebula formed from the red giant wind. Attempts to classify the secondary component have resulted in suggestions ranging from K0 III (Wallerstein 1969) to M4 III (Bohigas et al. 1989), with several concluding M2 III to be most likely (Barbon, Mammano & Rosino 1969; Rosino 1982; Bruch 1986; Oppenheimer & Mattei 1993). The white dwarf in the system is close to the Chandrasekhar mass limit (Dobrzycka & Kenyon 1994), hence the ratio of mass accreted to mass ejected will determine whether RS Oph is a potential supernova Ia progenitor (Sokoloski et al. 2006).
The quiescent characteristics of RS Oph have led to its classification as a symbiotic star, albeit with a weak hot-component spectrum. Most symbiotic stars do not exhibit the variability on time-scales of minutes seen in cataclysmic variables (Sokoloski, Bildsten & Ho 2001), yet short-time-scale, aperiodic variations in optical brightness have long been known in RS Oph in its quiescent state (Bruch 1986). These stochastic or aperiodic brightness variations are known as flickering, with 'strong' flickering being of the order of a few tenths of a magnitude (Sokoloski et al. 2001). While symbiotic stars are a heterogeneous class, other members show similarities to RS Oph that are applicable here.
To date, there have been no reported observations of the re-establishment of optical flickering in the immediate post-outburst phase of a recurrent nova, a fact that contributes to our uncertainty about the nature of the outburst mechanism. Observations by Zamanov et al. (2006) on day 117 of the outburst show no flickering of amplitude above 0.03 mag in B, from which they conclude that the accretion disc around the white dwarf was destroyed as a result of the 2006 outburst. The lightcurve reached a post-outburst minimum in 2006 September. Following the discovery of rebrightening (Bode et al. 2006a), we monitored RS Oph photometrically for two weeks in the B and V bands, detecting the resumption of optical flickering (Worters et al. 2006).
OBSERVATIONS
Observations of duration 37 to 118 minutes were made on eleven nights between 2006 October 11 and 24, the shorter observations being curtailed by cloud. Observations were made with the South African Astronomical Observatory (SAAO) 1-m telescope and the SAAO CCD camera, a 1024×1024 pixel SITe back-illuminated chip. The field of view is 5′×5′, which is sufficient to include several comparison stars close to the target, including USNO-B1.0 0833-0368817 and -0368883. Integration times were typically 10 s in Johnson V (20 s in Johnson B), with a readout time of 19 s, allowing continuous V-band monitoring with a temporal resolution of ∼30 s. Longer exposure times were occasionally used to compensate for poorer sky conditions. Details of each night's observations are given in Table 1. The 3 nights lacking data were lost due to cloud.
Preliminary data reduction was performed using standard procedures in IRAF. The resulting images were then processed using CCD tasks in the SAAO STAR package (described in Balona 1995; Crause, Balona & Kurtz 2000) to determine aperture magnitudes of the target and selected comparison stars. Figure 1 shows the diversity of flickering amplitude and time-scale present in the V-band lightcurves obtained on ten nights of the 2-week period of observations. Visual inspection reveals an increase in flickering amplitude during nights towards the end of the run. Figures 2 and 3 show differential lightcurves of RS Oph compared with two comparison stars in the field for the nights during which we detect some of the smallest and greatest flickering amplitudes, respectively. Comparing the weakest flickering detected in the target (Figure 2) with brightness variations in the constant comparison stars verifies the intrinsic variability of RS Oph. Flickering is also detected in the B-band data, plotted in Figure 4.

Gromadzki et al. (2006) observed a selection of symbiotic stars, performing a statistical evaluation of the significance of flickering in the data. They calculate mean magnitudes and standard deviations in their variable targets (σvar) and comparison stars (σcomp). Since the comparison stars in the field are all 2 mag fainter than RS Oph, standard deviations on the value expected for a constant star of the same brightness as the target (σ′comp) are derived from an empirical formula. With the number of counts in the data presented here being significantly lower than the Gromadzki et al. (2006) values (a few 1000s, cf. 10^5), this method proved less reliable when applied to our data.
Two alternative methods of deriving σ ′ comp were used in the current analysis: (a) fitting a power law to the mean magnitude and σcomp values for the comparison stars, obtaining an estimate of σ ′ comp in RS Oph by extrapolation, and (b) estimating σ ′ comp by equating it to σcomp for the brightest comparison star (13.2 mag), thus yielding very conservative values. All results presented here were obtained using (b), the more conservative technique, i.e. giving larger error bars.
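Method (a) above amounts to a straight-line fit in (magnitude, log σ) space followed by extrapolation to the target's brightness, since a power law in flux is linear in log σ against magnitude. The sketch below is our implementation of that idea, not code from the paper, and any data values used with it are synthetic:

```cpp
#include <cmath>
#include <cstddef>

// Least-squares fit of log10(sigma) = logA + k * mag to comparison-star
// scatter, then extrapolation of the expected constant-star scatter
// (sigma'_comp) to the target's magnitude.
struct PowerLawFit {
    double logA;  // intercept in log10(sigma)
    double k;     // slope per magnitude
    double sigmaAt(double mag) const { return std::pow(10.0, logA + k * mag); }
};

PowerLawFit fitSigmaVsMag(const double* mag, const double* sigma, std::size_t n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const double y = std::log10(sigma[i]);
        sx += mag[i]; sy += y; sxx += mag[i] * mag[i]; sxy += mag[i] * y;
    }
    const double dn = static_cast<double>(n);
    const double k = (dn * sxy - sx * sy) / (dn * sxx - sx * sx);
    const double logA = (sy - k * sx) / dn;
    return {logA, k};
}
```

Because the comparison stars are all fainter than RS Oph, sigmaAt() is evaluated below the fitted magnitude range, which is why method (b), taking σcomp of the brightest comparison star directly, gives the more conservative (larger) error estimate.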
RESULTS
The ratio Rvar = σvar/σ′comp can be used to assess the significance of the flickering. The criteria specified by Gromadzki et al. (2006) to determine the existence of flickering are:
• 1.5 ≤ Rvar < 2.5 - flickering "probably present", and
• Rvar ≥ 2.5 - flickering "definitely present".
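These thresholds encode directly as a small classifier; this is a restatement of the quoted criteria, with the function and label names being our own:

```cpp
#include <string>

// Classify flickering from R_var = sigma_var / sigma'_comp using the
// Gromadzki et al. (2006) thresholds of 1.5 and 2.5.
std::string flickeringClass(double sigmaVar, double sigmaCompConst) {
    const double rVar = sigmaVar / sigmaCompConst;
    if (rVar >= 2.5) return "definitely present";
    if (rVar >= 1.5) return "probably present";
    return "not detected";
}
```

Applying this per night and per ten-minute interval reproduces the kind of tabulation reported in Table 1.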
Evaluating the full dataset for each night according to the above criteria suggests that flickering is definitely evident in RS Oph on all but 3 nights observed. Even using the conservatively large error estimates, Rvar is close to the cut-off value for definite flickering on these 3 nights. Applying these criteria to ten minute periods within each night's data, we detect at least probable flickering for all ten minute periods on 6 nights, and definite flickering for at least half of all ten minute periods on 6 nights. Again, despite being conservative estimates, these values are very close to Rvar for definite flickering. Table 1 shows the mean ratio Rvar averaged over all ten minute intervals for each night, and also for each full night's data. The statistical analysis presented here is adequate to demonstrate that significant flickering is detected on time-scales of ten minutes to 2 hours.
Using Equation (3) of Gromadzki et al. (2006), we obtain V-band flickering amplitudes (A) in RS Oph ranging from 0.06 mag to 0.32 mag. Table 1 lists flickering amplitudes derived from the full data set for each night, as well as mean values for ten-minute intervals within each night's data.
A decrease in the mean magnitude of RS Oph over the 2-week period is depicted in Figure 5, from which the range in V magnitude detected each night is also apparent. The mean magnitude for each night is given in Table 1.
DISCUSSION
During observations made between 241 and 254 days postoutburst, we detect aperiodic V-band variability in RS Oph, with amplitudes ranging from ∼ 0.1 − 0.3 mag, constituting 'strong flickering' (Sokoloski et al. 2001). Observations made by Zamanov et al. (2006) on day 117 of the 2006 outburst show no variability with amplitude above 0.03 mag. In dwarf novae, optical flickering is attributed to two sources: the turbulent inner regions of the disc and the bright spot, where the stream of matter from the Roche lobe-filling donor star impacts the outer edge of the accretion disc, with inhomogeneities in the flow thought to result in flickering (e.g. Warner 1995; Kenyon 1986). The physical mechanism that causes flickering in symbiotics is not well understood, but is believed to originate from accretion onto a white dwarf (Zamanov & Bruch 1998). Adopting this assumption, these observations are consistent with re-establishment of accretion between 117 and 241 days after the onset of the 2006 outburst. This is the earliest reported detection of flickering subsequent to an outburst in RS Oph.
Mass transfer rate
Mass transfer from the secondary component is generally attributed to one of two mechanisms: either Roche lobe overflow (RLOF) onto an accretion disc; or direct accretion of matter from the red giant wind onto the white dwarf. Assuming the flickering we observe originates from a re-established accretion disc, we can place a constraint on the mass transfer rate. Sokoloski & Kenyon (2003) relate the time taken to re-establish the disc (the viscous time-scale, tvisc) to the inner radius of the disc (RI). This radius can be further related to the rate of mass transfer through the disc (which in this case we assume to equate to the white dwarf accretion rate, Ṁacc) and the dynamical time-scale (tdyn), which is approximately the time-scale of flickering. Rearranging these equations sourced from Frank et al. (1992), we find: where Ṁacc is in units of M⊙ yr⁻¹, and α depends on the state (high or low) of the disc, with α = 0.03 in the low state (Warner 1995), which we assume in this case. We take MWD to be 1.35 M⊙ (Hachisu & Kato 2000). As flickering recommenced between days 117 and 241, we have a range of 1.01×10⁷ ≲ tvisc ≲ 2.08×10⁷ s. The shortest time-scale on which we see flickering is tdyn ≈ 600 s. Thus for a low state we obtain an upper limit of Ṁacc ≲ 4.1×10⁻⁹ and a lower limit of Ṁacc ≳ 3.7×10⁻¹⁰ M⊙ yr⁻¹.
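The rearranged expression itself did not survive the extraction here. For reference, the standard thin-disc relations from Frank et al. (1992) that such a derivation combines are as follows (with ν the kinematic viscosity, c_s the sound speed, and H the disc scale height; this is textbook material, not a reconstruction of the paper's exact equation):

```latex
t_{\mathrm{dyn}} \simeq \left(\frac{R^{3}}{G M_{\mathrm{WD}}}\right)^{1/2}, \qquad
t_{\mathrm{visc}} \simeq \frac{R^{2}}{\nu}, \qquad
\nu = \alpha c_{s} H, \qquad
t_{\mathrm{visc}} \simeq \alpha^{-1} \left(\frac{H}{R}\right)^{-2} t_{\mathrm{dyn}}
```

Eliminating the disc radius between these relations is what links the observed tvisc and tdyn to a constraint on Ṁacc for an assumed α.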
Mass transfer mechanism
In order to put this into context in terms of the mass transfer mechanism operating in the system, we now consider these values relative to mass transfer rates expected for accretion direct from the red giant wind and via RLOF. A mass accretion ratio, f, defined as the ratio of the mass accreting onto the primary, Ṁacc, to the mass-loss rate from the donor companion, Ṁgiant, has been calculated by Nagae et al. (2004). They quote f ≲ 1% in a typical wind case, increasing to f ∼ 10% for RLOF. Studies of the symbiotic star EG And by Vogel (1991) yield a mass-loss rate from the red giant of 10⁻⁸ M⊙ yr⁻¹. Since EG And has a number of parameters similar to RS Oph (M2 red giant secondary, 483 day orbital period (Fekel et al. 2000) cf. ≈ 460 days in RS Oph (Dobrzycka & Kenyon 1994), similar absolute magnitude (Sokoloski et al. 2001)), we adopt Ṁgiant ∼ 10⁻⁸ M⊙ yr⁻¹ for RS Oph. Applying the ratios from Nagae et al. (2004) to this mass loss rate results in accretion rates of Ṁacc ∼ 10⁻⁹ M⊙ yr⁻¹ for RLOF, and Ṁacc ≲ 10⁻¹⁰ M⊙ yr⁻¹ for direct wind accretion. Thus our Ṁacc limits calculated in § 4.1 span the range required for direct wind accretion and RLOF at the time accretion resumed.
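The two benchmark accretion rates follow directly from applying the Nagae et al. (2004) accretion ratios to the adopted giant mass-loss rate; a quick check of the arithmetic:

```python
M_giant = 1e-8    # adopted red-giant mass-loss rate, in M_sun per yr
f_rlof = 0.10     # accretion ratio for Roche lobe overflow (~10 %)
f_wind = 0.01     # upper end of the ratio for direct wind accretion (<~1 %)

M_acc_rlof = f_rlof * M_giant   # ~1e-9 M_sun per yr for RLOF
M_acc_wind = f_wind * M_giant   # <~1e-10 M_sun per yr for wind accretion
```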
Outburst mechanism
Since the outburst mechanism is dependent on the mass transfer rate, we now consider the implications of the rate determined for this early stage of resumed accretion. Yaron et al. (2005) present a grid of outburst characteristics compiled from models of thermonuclear runaway in novae. These data predict that for a system with a mass transfer rate of 10⁻⁹ to 10⁻¹⁰ M⊙ yr⁻¹ onto a hot 1.4 M⊙ (2000)) produces a further increase in the outburst recurrence interval since the accreted mass required to trigger thermonuclear runaway is higher for a lower mass white dwarf. To allow for discrepancies in the white dwarf mass, basing these calculations on the value of 1.2 M⊙ determined by Starrfield et al. (1996) results in a lower Ṁacc, lengthening the recurrence interval still further. From Yaron et al. (2005), a recurrence period of ∼20 yr is achievable only if we have 100% accretion efficiency, i.e. Ṁacc = Ṁgiant = 10⁻⁸ M⊙ yr⁻¹, which far exceeds the findings of e.g. Nagae et al. (2004) (§ 4.2). While our upper limit on the accretion rate approaches 10⁻⁸ M⊙ yr⁻¹, the non-linear relation of the Yaron et al. (2005) model means that the recurrence period remains several times longer than 20 yr for Ṁacc ∼ 4×10⁻⁹ M⊙ yr⁻¹, and indeed a factor of two greater than the longest interval between observed outbursts in this system.
The accretion luminosity of the system would be most accurately measured at UV wavelengths. While observations were made in the UV with Swift, none exists prior to day 25 (Goad & Beardmore, private communication). From this point on, the UV tracks the behaviour of the supersoft X-ray emission attributed to fusion on the white dwarf surface (Hachisu, Kato & Luna 2007). The 1985 observations came at a similar point post-outburst. Hence between outbursts we need to estimate accretion rates by less direct methods. Standard accretion theory predicts that disc luminosity is proportional to the mass transfer rate (Zamanov & Bruch 1998). Thus the visual quiescent variation of 2.5 mag reported by Oppenheimer & Mattei (1993) implies a factor 10 variation in mass transfer rate during quiescence. As the visual magnitude during our observations was at the lower end of the quiescent magnitude range, this implies that the inter-outburst accretion rate is typically higher than we see here.
Such variations of mass transfer rate are plausible in either the RLOF or wind accretion scenario; either on short time-scales due to erratic or clumpy mass transfer, or over longer periods, perhaps increasing as the disc becomes better established. Hachisu & Kato (2000), for example, determine a much larger mass accretion rate of Ṁ = 1.2×10⁻⁷ M⊙ yr⁻¹ for RS Oph between the outbursts in 1967 and 1985, and brightness variations of up to 3 mag have been observed during periods of quiescence (Rosino 1987). Furthermore, recurrence intervals in this object vary from 9 to 35 yr.
Orbital eccentricity may have a particularly marked effect on the rate of mass transferred by direct wind accretion, as the white dwarf trajectory would trace a route through varying densities of the red giant wind. Indeed, the eccentricity in the system is completely unconstrained; Dobrzycka, Kenyon & Milone (1996) quote e = 0.25 ± 0.70 when modeled using the giant component and e = 0.40 ± 1.40 using the white dwarf. Another factor not accounted for in the models that could potentially cause inconsistencies in the nova recurrence interval is that of residual heating of the white dwarf following an outburst, lowering the accreted mass required to trigger a subsequent outburst. Further work is needed to fully verify the outburst mechanism in this and similar systems.
CONCLUSIONS
(i) Statistically significant flickering is detected in RS Oph on days 241 to 254 of the 2006 outburst, consistent with the reestablishment of accretion between days 117 and 241 after outburst.
(ii) Over the 2 week period of observations, the mean brightness decreases by ∼0.5 mag, from V = 11.4 to 11.9 mag.
(iii) Calculated limits on the white dwarf accretion rate of 4×10⁻¹⁰ ≲ Ṁacc ≲ 4×10⁻⁹ M⊙ yr⁻¹ span the range required for both direct wind accretion and RLOF mechanisms. We therefore find no conclusive evidence favouring one accretion mechanism over the other in RS Oph.
(iv) Current models are not sufficiently complete to confidently determine the accretion and outburst mechanisms in RS Oph.
Spatial and taxonomic patterns of honey bee foraging: A choice test between urban and agricultural landscapes
The health of honey bee colonies cannot be understood apart from the landscapes in which they live. Urban and agricultural developments are two of the most dramatic and widespread forms of human land use, but their respective effects on honey bees remain poorly understood. Here, we evaluate the relative attractiveness of urban and agricultural land use to honey bees by conducting a foraging choice test. Our study was conducted in the summer and fall, capturing a key portion of the honey bee foraging season that includes both the shift from summer- to fall-blooming flora and the critical period of pre-winter food accumulation. Colonies located at an apiary on the border of urban and agricultural landscapes were allowed to forage freely, and we observed their spatial and taxonomic foraging patterns using a combination of dance language analysis and pollen identification. We found a consistent spatial bias in favor of the agricultural landscape over the urban, a pattern that was corroborated by the prevalence in pollen samples of adventitious taxa common in the agricultural landscape. The strongest bias toward the agricultural environment occurred late in the foraging season, when goldenrod became the principal floral resource. We conclude that, in our study region, the primary honey bee foraging resources are more abundant in agricultural than in urban landscapes, a pattern that is especially marked at the end of the foraging season as colonies prepare to overwinter. Urban beekeepers in this region should, therefore, consider supplemental feeding when summer-blooming flora begin to decline.
Introduction
The collective ability to survey a large foraging area and concentrate foraging effort on the most rewarding resources is a hallmark of honey bee foraging biology (Seeley 1995). This ability is conferred by the sophisticated dance language (von Frisch 1967) whereby individual foragers integrate their knowledge of resource availability, scent, and location (Seeley 1995; Grüter and Farina 2009). The intelligibility of the dance language to human observers allows the logic of honey bee foraging to be inverted to yield ecological insight: as a honey bee colony assesses its environment and allocates its foragers to the most rewarding resources, the spatial allocation of foragers revealed by the dance language can be used to infer the types of available habitat most suitable for honey bee foraging (Couvillon et al. 2014a; Garbuzov et al. 2014, 2015; Couvillon and Ratnieks 2015). Spatial habitat inferences from dance language analysis can also be supported by the taxonomic identification of pollen loads collected by honey bees in the same study area (Garbuzov and Ratnieks 2013; Garbuzov et al. 2015).
Understanding the suitability of different habitat types for honey bee foraging is of central importance in the task of improving honey bee health and productivity, and honey bee habitat utilization may also inform the conservation of other pollinator species (H€ artel and Steffan-Dewenter 2014). Moreover, the foraging decisions made by a honey bee colony with respect to its surrounding landscape can furnish theoretical insights into how the honey bee foraging system has evolved to optimize the collection of resources in complex environments (e.g. Visscher and Seeley 1982).
Because of the honey bee's close association with humans, any discussion of honey bee foraging habitat must emphasize the role of human land use in shaping the composition, distribution, and abundance of floral resources (Härtel and Steffan-Dewenter 2014). Two main categories of human land use, urban development and agricultural cultivation, are comparably profound but divergent departures from a natural or semi-natural condition, and it is important to understand their respective effects on honey bee health. Studies directly comparing urban and agricultural landscapes with respect to honey bee health are equivocal, perhaps reflecting the enormous diversity of landscape composition subsumed by the terms 'urban' and 'agricultural'. In the UK, Garbuzov et al. (2014) found that honey bees located within the city of Brighton foraged almost exclusively within the urban environment rather than extending their flights into the agricultural countryside, and Donkersley et al. (2014) found that the protein content of beebread was correlated positively with urban land use and negatively with agricultural land use. In Denmark, hives located in predominantly urban landscapes were shown to have a higher average weight than those in mixed or predominantly agricultural landscapes (Lecocq et al. 2015). In the Midwestern USA, however, Sponsler and Johnson (2015) found that agricultural landscapes tended to favor honey bee productivity compared with urban or seminatural (forest) habitat. Similar results were reported from Luxembourg, where Clermont et al. (2015) found frequent positive correlations between various forms of urban land use and honey bee overwintering colony loss, while certain forms of agriculture and rural land use tended to be negatively correlated with colony loss.
Other studies of honey bees in either urban or crop-dominated landscapes (not in direct comparison) have often suggested negative effects of both compared with more diversified landscapes (Steffan-Dewenter and Kuhn 2003; Couvillon et al. 2014a; Odoux et al. 2014; Requier et al. 2015; Danner et al. 2016; Dolezal et al. 2016; Smart et al. 2016; Youngsteadt et al. 2015).
Here, we directly compare the foraging quality of urban and agricultural landscapes by a field-scale choice test using honey bee colonies located at a site along the interface of city and farmland. To determine the relative allocation of foraging activity between these two landscapes, we use a combination of dance language analysis and pollen identification to infer both spatial and taxonomic patterns of foraging.
Study site and timeframe
An apiary consisting of five honey bee colonies (two in standard Langstroth hives and three in three-frame observation hives) was established on the grounds of a historic cemetery located on the western edge of the metropolitan area of Columbus, OH (Fig. 1). Using QGIS 2.1 software (QGIS Development Team 2016), we digitized the landscape within a 5-km radius of the apiary and classified it using the binary categories of 'urban' (predominantly residential and commercial) and 'agricultural' (predominantly field crop). Digitization was performed by tracing the boundaries between residential/commercial development ('urban') and neighboring farmland ('agricultural') visible in 2013 aerial imagery from the Ohio Statewide Imagery Program, corroborated by reference to the 2011 National Land Cover Database land use layer (Homer et al. 2015). While the categories of 'urban' and 'agricultural' represent internally heterogeneous landscapes, parsing these categories into more specific landscape classifications was beyond the scope of this study; our central question was how the general land use syndromes represented by the terms urban and agricultural affect honey bee foraging. Thus, roadways and small residential areas occurring in predominantly agricultural surroundings were classified as part of the larger pattern of agricultural land use; similarly, forest patches and fields occurring in predominantly built-up surroundings were classified as urban.
Our study was conducted in the summer and fall of 2014, beginning in late July and continuing to late September. This time frame encompasses the phenological transition between summer and fall flora along with the critical period of pre-winter food storage.
Dance recording and decoding
From 7 August to 26 September 2014, dance behavior was recorded one day per week from the three observation hives, representing a total of seven days of foraging activity (no dances were recorded on August 21 due to poor weather conditions). On each recording day, a morning (0930-1100 h) and an afternoon (1300-1600 h) session were recorded, each lasting ∼45 min. During each recording session, all three colonies were recorded simultaneously using three separate cameras. See Supplementary Material S1 for a description of the camera models used.
Video from each recording session was first split into 1-min segments, and then every fifth segment was subsampled for analysis. Each 1-min analysis segment was imported separately into the FIJI distribution (Schindelin et al. 2012) of the image analysis software ImageJ (Schneider et al. 2012), and dances were decoded using the MTrackJ plugin (Meijering et al. 2012). Following Couvillon et al. (2012), four waggle runs from each dance were decoded, including two right turns and two left turns. See Supplementary Material (S1) for details on the application of FIJI to dance decoding.
Decoded dances from all three colonies were mapped together using the Bayesian probabilistic method developed by Schürch et al. (2013), in which decoded locations are plotted not as discrete points, but as probability clouds derived by sampling 1000 point location estimates from the posterior probability distribution of each dance. This method acknowledges the intrinsic uncertainty in the dance language and allows for the computation of credible intervals to test for habitat-based foraging biases (Garbuzov et al. 2014, 2015).
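The probability-cloud mapping can be sketched as follows. The calibration constants converting waggle-run duration to distance, and the noise scales, are illustrative placeholders rather than the posterior parameters fitted by Schürch et al. (2013).

```python
import numpy as np

rng = np.random.default_rng(0)

def dance_probability_cloud(duration_s, bearing_rad, n_samples=1000,
                            slope=1.3, intercept=0.2, sigma_dist=0.25,
                            sigma_angle=np.radians(15)):
    """Sample point estimates of a dance's foraging location.

    Distance (km) is taken as a noisy linear function of waggle-run
    duration; the bearing gets angular scatter.  All calibration numbers
    here are placeholders, not the fitted Schurch et al. (2013) values.
    Returns (x, y) offsets from the hive in km (east, north).
    """
    dist = slope * duration_s + intercept + rng.normal(0, sigma_dist, n_samples)
    dist = np.clip(dist, 0, None)              # distances cannot be negative
    ang = bearing_rad + rng.normal(0, sigma_angle, n_samples)
    return dist * np.sin(ang), dist * np.cos(ang)

def bin_cloud(x, y, extent_km=5.0, cell_km=0.025):
    """Accumulate samples into a grid of 25 m bins and normalize, giving
    the relative probability that each patch was the dance's target."""
    edges = np.arange(-extent_km, extent_km + cell_km, cell_km)
    hist, _, _ = np.histogram2d(x, y, bins=[edges, edges])
    return hist / hist.sum()
```

Overlaying the binned clouds from many dances on the digitized land-use layer, and counting how many samples fall on each side of the urban-agricultural border, gives the apportionment used in the statistical analysis.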
Pollen collection and identification
The two Langstroth hives were fitted with bottom-mounted pollen traps (Sundance I, Ross Rounds, Inc), and pollen was collected in one-week intervals on the same schedule as the dance recordings plus one sample on 31 July prior to the first date of dance recording. To minimize nutritional stress, pollen was alternatively trapped from only one of the two colonies each week while the other was allowed to forage freely. Thus, each pollen sample represents seven consecutive days of pollen trapping for one of the two colonies. Upon return to the laboratory, pollen samples were stored in an airtight container at −20 °C.
A pollen reference collection was constructed by collecting floral specimens from the vicinity of the research apiary and from other locations in the region. Approximately 360 voucher specimens were collected, and an additional 74 pollen samples were collected directly from the anthers of plants in curated botanical gardens. All specimens were identified to the lowest possible taxonomic level. For a complete list of voucher specimens and prepared reference slides, see Supplementary Material S3.
Trapped pollen was first weighed (wet weight) and subsampled. From samples with a total mass of >100 g, a 10 g subsample was taken. All other samples were subsampled at 10% of their total mass. Pollen pellets from each subsample were then sorted by color, texture, and other visual characteristics into preliminary taxonomic groups, with mixed pollen pellets [i.e. rarely occurring single pellets consisting of visible bands of contrasting pollens, such as described by Percival (1947)] and singletons (groups represented by only one corbicular pollen pellet) being omitted from further analysis. These preliminary taxonomic groups were weighed and then scanned using a flatbed scanner (Canon LiDE 210) to record the visual characteristics of the pollen prior to the destructive process of microscopic preparation.
From each of the preliminary groups, 10 pollen pellets (or all pellets for groups having fewer than 10 total) were mixed with several drops of water in a microcentrifuge tube to form a homogenous suspension. Then, small aliquots of suspended pollen were mounted on a microscope slide using glycerin jelly stained with basic fuchsin (Kearns and Inouye 1993).
Mounted pollen specimens were examined at ×400-1000 magnification and identified by comparison with similarly prepared specimens from the reference collection and by the corbicular characteristics recorded in the scanned images. For slides containing >1 pollen type (due to imperfect sorting or mixed foraging), the relative abundance of each pollen type was estimated by identifying and counting all grains within the microscope field of view and shifting the field of view until a total of ∼500 grains were counted. Because the grains of different pollens vary widely in size, it is more informative to express the relative abundance of different pollens in terms of volume rather than grain count (O'Rourke and Buchmann 1991). Following O'Rourke and Buchmann (1991), we modeled each pollen type as either a sphere or ellipsoid and estimated its volume by measuring its mean polar and equatorial axis length (based on five randomly selected grains) and applying the corresponding formula. The proportional volume of each pollen type was then multiplied by the total mass of the sorted group represented by the microscope slide to estimate the proportion of the total mass contributed by each pollen type.
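The volume-weighting procedure above amounts to a few lines of arithmetic. The sketch below follows the sphere/ellipsoid model of O'Rourke and Buchmann (1991), treating each grain as an ellipsoid of revolution defined by its mean polar and equatorial diameters; the function and variable names are our own.

```python
import math

def grain_volume(polar_um, equatorial_um):
    """Volume of a single grain (cubic micrometres), modelled as an
    ellipsoid of revolution with polar diameter P and equatorial
    diameter E: V = (pi/6) * P * E^2.  P == E reduces to a sphere,
    V = (pi/6) * d^3."""
    return (math.pi / 6.0) * polar_um * equatorial_um ** 2

def apportion_mass(group_mass_g, counts, dims):
    """Split a sorted group's mass among pollen types by volume.

    counts: {taxon: grain count in the ~500-grain tally}
    dims:   {taxon: (mean polar diameter, mean equatorial diameter) in um}
    Returns {taxon: estimated mass in grams}.
    """
    vols = {t: counts[t] * grain_volume(*dims[t]) for t in counts}
    total = sum(vols.values())
    return {t: group_mass_g * v / total for t, v in vols.items()}
```

For example, two taxa counted in equal numbers but with one grain twice the diameter of the other would receive mass in a 1:8 ratio, since volume scales with the cube of the linear dimension.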
Statistical analysis
Following Garbuzov et al. (2014, 2015), we computed Agresti-Coull 95% credible intervals for the proportion of urban foraging activity, treating the apportionment of point locations between the urban and agricultural landscape classes as a binomial distribution. Pseudoreplication was avoided by dividing the number of urban points (p) and the total number of points (n) by the number of simulations of each dance (1000) (Garbuzov et al. 2014, 2015). Credible intervals not including 0.5 (i.e. an equal apportionment of urban and agricultural foraging) were interpreted as indicating a statistically significant positive or negative bias. Credible intervals were computed individually for each day of dance recording and then also for the pooled data set of all days. All analyses were performed in R (R Core Team 2015) using the 'prevalence' package (Devleesschauwer et al. 2014).
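The interval computation, including the division by 1,000 that removes pseudoreplication, amounts to only a few lines. The sketch below uses the plain Agresti-Coull formula at z = 1.96; the paper used the R 'prevalence' package, so its exact values may differ slightly.

```python
import math

def agresti_coull(urban_points, total_points, n_sims=1000, z=1.96):
    """95% Agresti-Coull interval for the urban foraging proportion.

    urban_points / total_points are counts over all simulated dance
    locations; dividing both by the number of simulations per dance
    removes the pseudoreplication before the interval is formed.
    """
    x = urban_points / n_sims
    n = total_points / n_sims
    n_tilde = n + z ** 2
    p_tilde = (x + z ** 2 / 2.0) / n_tilde
    half = z * math.sqrt(p_tilde * (1.0 - p_tilde) / n_tilde)
    return p_tilde - half, p_tilde + half

def urban_bias(urban_points, total_points):
    """'agricultural' if the interval lies wholly below 0.5, 'urban' if
    wholly above, otherwise 'none' (no significant bias)."""
    lo, hi = agresti_coull(urban_points, total_points)
    return 'agricultural' if hi < 0.5 else 'urban' if lo > 0.5 else 'none'
```

With 16 of 100 decoded dances (16,000 of 100,000 simulated points) in the urban class, as on 7 August, the interval lies entirely below 0.5 and the bias is classified as agricultural.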
Spatial foraging patterns
For all dates, the majority of foraging activity occurred in the agricultural landscape, and this bias was significant in each case (credible interval for proportion of urban foraging < 0.5) (Fig. 2). The proportion of urban foraging rose each sampling date from 7 August (0.16) to 4 September (0.38), began to decline on 12 September (0.20), and then fell sharply on 19 September (0.04) and remained low on 26 September (0.10).
Foraging activity was most concentrated near the apiary, as expected from previous studies of honey bee foraging distance (e.g. Couvillon et al. 2014b). When foraging activity ranged >1 km from the apiary, it was consistently concentrated in the agricultural landscape to the south and west, though occasional foraging occurred along the urban-agricultural interface to the north (7 August, 4 September) and in the urban landscape to the east (12 September) (Fig. 3). The most distant foraging occurred 4 September, when a small amount of activity occurred in the agricultural landscape ∼4 km southwest of the apiary.

Figure 2. Spatial foraging data (top) aligned with corresponding pollen data (bottom). On all dates, a significant majority of foraging activity inferred from dance language analysis occurred in the agricultural landscape (error bars depict 95% Agresti-Coull credible intervals for the proportion of urban foraging activity, all of which are <0.50). Color-coded area plot shows the relative abundance of major (≥2.5% of at least one sample) pollen taxa; minor taxa are shown in gray. *Chamaecrista types 1 and 2 differed notably in corbicular color but were both matched microscopically to a C. fasciculata reference specimen. This is most likely due to intraspecific variation in the pollen or variation in the honey added to it by the bees, but it is possible that our samples included the closely related C. nictitans, which was not represented in our reference collection but is found in Ohio.
Taxonomic foraging patterns
Between 31 July and 12 September, pollen samples consisted mainly of legumes (Trifolium and Chamaecrista) and wild carrot (Daucus carota) (Fig. 2). These gradually gave way to Canada goldenrod (Solidago canadensis), which became the predominant pollen source in the last two weeks of the study period.
Besides these major taxa, many minor pollen types occurred in low abundance. A total of 42 pollen types were identified in our samples, representing at least 11 plant families (Supplementary Material S2), and this number almost certainly underestimates the true taxonomic richness due to our omission of singleton pollen pellets. In addition to these, one sample contained several pellets of fungal spores, as has been occasionally documented by other observers (Wingfield et al. 1989).
Discussion
Honey bees in our study exhibited a consistent, and often dramatic, foraging bias in favor of the agricultural over the urban landscape. This pattern, observed directly in our spatial foraging data, was corroborated taxonomically by the overwhelming prevalence in our pollen samples of flora common in Midwest agricultural landscapes (D. carota, Trifolium spp., Chamaecrista fasciculata, S. canadensis). It should be noted that spatial patterns inferred from dance analysis reflect both pollen and nectar foraging, whereas pollen identification reveals only the former. Nevertheless, most of the flora that dominated our pollen samples, particularly the Trifolium spp. and S. canadensis, are also known to be major nectar sources for honey bees (Pellett 1920; Goltz 1975; Ayers and Harman 1992), so it is likely that the spatial foraging patterns we observed largely represent patches of the principal floral taxa found in our pollen samples.
The degree of bias toward agricultural habitat varied across sampling dates, and this variation corresponded to taxonomic shifts in pollen collection (Fig. 2). The greatest proportion of urban foraging occurred on 4 September, when pollen samples were dominated by clovers (Trifolium spp.). The sharp decline of urban foraging activity starting 12 September coincided with the taxonomic shift from clovers to goldenrod, the latter evidently being concentrated in the agricultural landscape to the south and west of the apiary (Fig. 3). White clover, due to its prostrate growth habit, can tolerate frequent mowing and is common in urban open areas including residential lawns (Frank and Hathaway 2015) as well as in the field margins and roadsides of the agricultural landscape. In contrast, goldenrod, with its tall growth habit, is restricted to unmowed open areas; in our study region, these include mainly uncultivated fields and conservation strips, consistent with the growth patterns and habitat associations described by Pavek (2011). By mid-September in our study region, the summer-blooming clovers are in decline and goldenrod emerges as the last major pollen and nectar source of the year. In the absence of late-blooming urban flora, like the ivy (Hedera helix) common in the UK landscape studied by Garbuzov and Ratnieks (2013), strictly urban honey bees without access to other foraging habitat might suffer an early end to their foraging season, as predicted by Burgett et al. (1978) and inferred by Sponsler and Johnson (2015). Urban beekeepers in our study region should, therefore, consider providing supplemental feeding as soon as the major summer flora begin to decline.

Figure 3. Spatial foraging patterns inferred from dance analysis for each sample date. The complete black ring depicts a 3 km radius around the study apiary, and the incomplete black ring in the lower left of each panel represents the southwestern extremity of the 5 km radius to which landscape classification was constrained. The urban-agricultural border is demarcated by a black line, and the urban area (east of the border line) is shaded darker. Foraging activity is represented for each sample date by a probability density cloud (red color ramp) depicting the relative probability that each patch (25 × 25 m bin) was visited by bees whose dances were decoded. The number of decoded dances for each sample date is shown in the bottom right of each panel.
The preference of honey bees for agricultural over urban landscapes observed in this study must be interpreted cautiously. Urban landscapes can differ markedly from one another, both within and between cities. In Ohio, for example, the densely developed landscape of Columbus differs strongly from the landscape of nearby Cleveland, which contains extensive ruderal land colonized by adventitious plants. Similarly, agricultural landscapes, even those with the same major crops, can differ significantly in the composition and prevalence of noncrop vegetation, which may provide the bulk of honey bee foraging resources (Requier et al. 2015; Long and Krupke 2016). Nevertheless, the adventitious plants of the agricultural landscape that supported honey bee foraging in our study, particularly clovers and goldenrod, are widely recognized as major pollen and nectar sources throughout much of the U.S. and Canada (Pellett 1920; Goltz 1975; Severson and Parry 1981; Ayers and Harman 1992; Stimec et al. 1997; Long and Krupke 2016), and it is likely that wherever these plants are of prime importance, agricultural landscapes will surpass their urban counterparts in the provision of honey bee foraging resources in the summer and fall. This pattern could potentially be offset by changes in urban land management that would allow flowering plants to grow in areas conventionally maintained as turfgrass, such as residential yards and public greenspaces.
Data availability statement
Complete pollen data are available in supplemental material S2. Decoded dance data and GIS layers are available from authors upon request.
Conflict of interest statement. None declared.
A robust optimal control by grey wolf optimizer for underwater vehicle-manipulator system
The underwater vehicle-manipulator system (UVMS) is a commonly used piece of underwater operating equipment. Its control scheme has been a focus of control researchers, as it operates in the presence of lumped disturbances, including modelling uncertainties and water disturbances. To address the nonlinear control problem of the UVMS, we propose a robust optimal control approach optimized using the grey wolf optimizer (GWO). In this scheme, the nonlinear dynamic model of the UVMS is reduced to a linear state-space model in the presence of the lumped disturbances. Then, the GWO algorithm is used to optimize the Riccati equation parameters of the H∞ controller in order to achieve the H∞ performance criteria, such as stability and disturbance rejection. The optimization is performed by evaluating the performance of the closed-loop UVMS in real time, in comparison with popular artificial intelligence algorithms such as the ant colony optimization (ACO), genetic algorithm (GA), and particle swarm optimization (PSO), using feedback control from the physical hardware-in-the-loop UVMS platform. This scheme can improve H∞ control system performance and ensure that the UVMS is strongly robust to these lumped disturbances. Finally, the validity of the proposed scheme is established, and its performance in overcoming modeling uncertainties and external disturbances is observed and analyzed through hardware-in-the-loop experiments.
Introduction
Nowadays, underwater vehicle-manipulator systems (UVMS) have become a crucial and functional tool for humans to perform complex tasks in water [1]. In order to effectively control the UVMS, it is necessary to first build its dynamic model, because the dynamic model provides a mathematical representation of the system's behavior, which can then be used to design effective control strategies and algorithms. However, as is widely recognized, creating an accurate dynamic model for a UVMS remains a challenging task that requires precise determination and calculation of its parameters. Despite advancements in technology and methodology, developing a dynamic model that truly reflects the behavior of the UVMS continues to be a formidable undertaking, requiring a high degree of skill, expertise, and resources. This difficulty arises from the complexities inherent in the system and the dynamic environment in which it operates, making the process of obtaining accurate parameters a labor-intensive and time-consuming effort. Therefore, in some real-world applications of UVMS, approximations are made in the modeling process to simplify implementation. In addition to the difficulties in modeling, the presence of external disturbances in the underwater environment must also be acknowledged and taken into consideration. These disturbances, such as undercurrents, waves, and motion damping, are a ubiquitous aspect of the aquatic environment and can significantly impact the behavior and functionality of the UVMS. To address these modeling uncertainties and external disturbances, it is imperative that the UVMS control technique be designed with robustness and adaptability, so as to maintain stability and ensure safe and effective operation in the face of these challenges. The need for robust and adaptive control techniques for the UVMS is therefore urgent.
A variety of control methods [2,3] have been proposed for UVMS, including proportional-integral-derivative (PID) control [4,5], expert systems [6,7], fuzzy control theory [8], active disturbance rejection control (ADRC) [9], model predictive control (MPC) [10-12], neural network control [13], sliding mode control (SMC) [14,15], etc. However, many of these control approaches suffer from limitations, such as ignoring disturbances, losing optimal control performance, relying on accurate modeling, or incurring high computational complexity. This highlights the need for continued development of more effective and robust control techniques for the UVMS that can handle the challenges of the underwater environment.
The H∞ controller is a type of optimal control approach that was intensively studied from the late 1970s to the early 1980s [16]. It is specifically designed to handle disturbances and uncertainties in control systems, making it an effective solution for dealing with the complexities of many real-world systems. The H∞ controller uses mathematical min-max optimization techniques to find the optimal control inputs that minimize the impact of disturbances and uncertainties on the performance of the controlled system. By doing so, it can ensure that the control system operates as close as possible to its desired behavior, even in the presence of unpredictable or unmodeled disturbances and uncertainties. Currently, there are numerous research efforts focused on robust H∞ control of robots. For instance, [17] proposed an H∞ control approach for a 6-degree-of-freedom (DOF) manipulator whose control law is effective for optimizing settling time, overshoot, and steady-state error for each joint; [18] proposed a MATLAB-based structured H∞ control approach, studied in Simulink, for the position control of a 4-DOF serial arm; [19] proposed an H∞ control approach for a multi-DOF arm with flexible and stable joints; [20] proposed an asynchronous H∞ continuous control approach for a mode-dependent switched mobile robot system; [21] proposed H∞ control of a flexible and stable cable-driven parallel robot; and [22] proposed an H∞ feedback control approach for underwater vehicle systems with various communication topologies and external disturbances. However, it is noted that H∞ controllers are typically designed and solved offline, meaning that the control design process occurs before the actual deployment of the system [23]. The design process involves finding an approximate solution of the Riccati equation, a mathematical equation central to the H∞ control method. The approximate solution is used to calculate the feedback gain matrix that
represents the controller's performance and robustness trade-off. Despite the widespread use of H∞ controllers, the solution of the Riccati equation can be computationally challenging and may not always converge for large-scale systems like the multi-input multi-output UVMS. As a result, alternative methods for designing H∞ controllers for UVMS are being actively researched and developed. To the best of our knowledge, the prior research in [24] is the only known discussion of H∞ control for UVMS; it proposed a robust H∞ controller for UVMS using an extended Kalman filter (EKF). However, the calculation efficiency of this scheme has been found to be poor due to its reliance on the Kleinman [23] approximation solution of the Riccati equation. The Kleinman approximation is known to have limitations in terms of calculation efficiency, especially for large-scale systems like UVMS. As a result, the scheme proposed in [24] may not be suitable for real-time control of UVMS, particularly for systems that require fast convergence and efficient calculation. These limitations highlight the need for further research to develop more effective and efficient H∞ control methods for UVMS.
Recently, artificial intelligence algorithms have been introduced as alternatives to traditional control methods for various applications. Some of these algorithms include the ant colony optimization (ACO) algorithm, grey wolf optimizer (GWO), particle swarm optimization (PSO), and genetic algorithm (GA) [25]. These algorithms are based on bio-inspired optimization techniques and search for optimal control solutions by imitating the behavior of social animals or natural systems. They are becoming increasingly popular due to their ability to effectively handle complex and highly nonlinear control problems, and they have the potential to be used for developing more effective and efficient solutions for the H∞ optimal control of UVMS. Among them, the GWO algorithm, first developed by Seyedali Mirjalili et al. in 2014, has become a well-known optimization method in the field. The mechanism of GWO involves simulating the natural behavior of grey wolves in their leadership hierarchy and hunting process [26,27]. In particular, the algorithm employs four types of grey wolves, alpha, beta, delta, and omega, to simulate the strict social dominance hierarchy observed in wolf packs. Additionally, the three main steps of hunting, namely tracking and pursuing prey, encircling and harassing prey until it stops moving, and finally attacking, are implemented to perform optimization [26]. This unique approach has been shown to be both efficient and effective in solving a variety of optimization problems, making it a reliable and trustworthy optimization technique. The research in [28] first proposed the use of the GWO to optimize an H∞ controller for controlling UVMS in the presence of underwater disturbances. That method is based on solving linear matrix inequalities (LMI) by the GWO, which can be a complex operation. To address this challenge, we propose transforming the LMI into a Riccati equation, providing a simpler and more intuitive solution. The proposed technique is
considered a valuable motivation for future research. While the Riccati equation in H∞ controllers has been solved using various artificial intelligence algorithms [29,30], this is the first time that the GWO has been used for this purpose.
Inspired by the above literature, the following contributions are made in this paper.
1. First, the proposed robust optimal H∞ controller in this paper utilizes the well-known GWO to optimize its parameters. The control scheme is specifically designed for the UVMS, which operates in the presence of lumped disturbances such as unmodeled uncertainties and external disturbances. The proposed control scheme takes advantage of the optimization capability of the GWO algorithm to determine the optimal parameters of the H∞ controller, which are used to achieve the desired performance criteria of stability and disturbance rejection.
2. Second, the optimization is based on a linear state-space model of the nonlinear UVMS, which enables the evaluation of the closed-loop system in real time. Furthermore, the proposed control scheme is compared with other popular artificial intelligence algorithms, such as PSO, ACO, and GA, using a hardware-in-the-loop UVMS platform, demonstrating its improved performance and robustness against lumped disturbances. The validity of the proposed scheme is established through hardware-in-the-loop experiments, which provide insight into its ability to overcome modeling uncertainties and external disturbances for UVMS.
3. Third, the aim of designing the robust optimal H∞ controller optimized by GWO for the UVMS is to address the nonlinear control problem of UVMS, which is commonly used in underwater operations and is subject to lumped disturbances such as unmodeled uncertainties and external disturbances. The proposed scheme improves the control system performance of the UVMS and enhances its robustness to these disturbances, thereby enabling efficient and robust underwater operations.
The remainder of this paper is organized as follows. First, we introduce the dynamic modeling of the UVMS, including the linearized UVMS with and without lumped disturbances. Second, the design of the GWO-robust H∞ controller for UVMS is presented with detailed theoretical statements. Third, experiments are performed to verify the proposed control scheme's performance for the UVMS. Finally, conclusions are drawn.
Linearized dynamic system analysis of UVMS
The basic structure of the UVMS is depicted in Fig 1. Assuming no external disturbances, the dynamic equation of the UVMS is given as follows [1]:

M(q)a + C(q, v)v + D(q, v)v + G(q) = τ, (1)

In this equation, q, v, and a represent the position, velocity, and acceleration of the UVMS, respectively. The vector q = [q_v, q_m]^T is composed of the underwater vehicle state q_v = [q_1, q_2, ..., q_6]^T and the underwater manipulator state q_m = [q_{6+1}, q_{6+2}, ..., q_{6+n}]^T. The inertia matrix, which incorporates the added mass terms, is represented by M(q). The Coriolis and centripetal terms are denoted by C(q, v). The hydrodynamic and motion damping matrix is represented by D(q, v). The effects of gravity and buoyancy are represented by G(q). The vector τ represents the forces, moments, and joint torques.
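As a concrete illustration, the forward dynamics implied by Eq (1) can be evaluated numerically. The sketch below solves for the acceleration of a toy 2-DOF system; the matrices and vectors are illustrative values, not FIFISH V6 parameters, and the combined term H(q, v) = C(q, v)v + D(q, v)v + G(q) is assumed to be already evaluated at the current state.

```python
import numpy as np

def forward_dynamics(M, H, tau):
    """Acceleration from M(q)a + H(q, v) = tau, i.e. a = M(q)^{-1}(tau - H(q, v)),
    where H(q, v) = C(q, v)v + D(q, v)v + G(q) collects the non-inertial terms."""
    return np.linalg.solve(M, tau - H)

# Toy 2-DOF numbers (illustrative only):
M = np.array([[2.0, 0.1],
              [0.1, 1.5]])          # inertia matrix including added mass
H = np.array([0.3, -0.2])           # Coriolis + damping + gravity/buoyancy terms
tau = np.array([1.0, 0.5])          # generalized forces

a = forward_dynamics(M, H, tau)     # resulting acceleration
```

Solving the linear system with `np.linalg.solve` avoids explicitly inverting M(q), which is the usual numerically preferred choice.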
In an actual dynamic control system, it is challenging to accurately determine the dynamic model Eq (1), as the internal uncertainties and external disturbances are often unknown.These uncertainties and disturbances can be broadly categorized into two types [24]: (a) Uncertainties in modeling parameters, such as mass, length, inertia, and centroid, among others.
(b) External water disturbances, including undercurrents, waves, and motion damping, among others.
By defining C(q, v)v + D(q, v)v + G(q) = H(q, v) and unifying all internal and external disturbances into a single bounded unknown lumped disturbance τ_d, where τ_d ∈ Ω and Ω = {τ_d : ‖τ_d‖ ≤ ω_max} with a maximum bound ω_max, the dynamic equation of the UVMS can be expressed as:

M(q)a + H(q, v) = τ + τ_d. (2)

With the state vector x = [q, v]^T, the system can be given in state-space form (Eq (3)). Next, the proposed control law τ = H(q, v) + M(q)u is applied, where the auxiliary input u is built from C_1 and C_2, matrices representing the velocity and position error gains, respectively. Substituting this control law into Eq (3) yields a = u + M(q)^{-1}τ_d, which can be expressed in state-space model form. Letting d = M(q)^{-1}τ_d represent the lumped disturbances (including the internal uncertainties and external disturbances), we obtain the following linear state-space model under the condition of disturbances:

ẋ = Ax + Bu + Bd, with A = [0 I; 0 0] and B = [0; I]. (7)

Thus, the nonlinear dynamic equations of the UVMS have been reduced to a linear state-space model, making it easier to design the control law u from an H∞ controller.
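A minimal sketch of the resulting linear model can make the structure concrete. Here a double-integrator form with A = [0 I; 0 0] and B = [0; I] is assumed (the standard form after the feedback-linearizing control law removes H(q, v)); n = 2 is chosen only for illustration, and the controllability of the pair (A, B) is checked because it is a prerequisite for any stabilizing H∞ gain to exist.

```python
import numpy as np

n = 2  # DOF in this toy example; the paper's UVMS has 6 vehicle DOF plus n joints

# Double-integrator dynamics x_dot = A x + B u + B d:
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.zeros((n, n)), np.zeros((n, n))]])
B = np.vstack([np.zeros((n, n)), np.eye(n)])

# Controllability matrix [B, AB, A^2 B, ...]; full rank 2n means (A, B) is controllable
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(2 * n)])
rank = np.linalg.matrix_rank(ctrb)
```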
The design of the GWO-robust H∞ controller for UVMS
The renowned GWO algorithm [26] is employed to address this optimization problem. It has been demonstrated to be an effective optimization technique, similar to algorithms such as PSO, ACO, and GA. The GWO is a meta-heuristic algorithm inspired by the hunting behavior of wolf packs. The wolves in the pack are classified into four classes: α (the leader), β and δ (the followers), and ω (the rest). In the context of engineering problems, the optimal solution is considered to be the leader (corresponding to class α), sub-optimal solutions correspond to β and δ, and alternative solutions belong to ω [27].
The hunting process of wolves is comprised of three steps: 1. Tracking, pursuing, and approaching the target.
2. Circling and harassing the target until it stops moving.
3. Launching an attack.
The hunting actions of wolves can be described as follows:

D = |C · X_p(t) − X(t)|, X(t + 1) = X_p(t) − A · D,

where t is the number of iterations, and X_p and X represent the prey position vector and the wolf position vector, respectively. The coefficient vectors A and C are calculated from a, r_1, and r_2 as:

A = 2a · r_1 − a, C = 2r_2.

During the iteration process, each component of a is decreased linearly from 2 to 0, while r_1 and r_2 are random vectors within the interval [0, 1].
In the GWO algorithm, the wolves in class ω follow the wolves in classes α, β, and δ to hunt and approach the target, as these classes are considered to be more capable of catching prey. Thus, the first three optimal solutions are saved during the search, while the other wolves update their current state based on these current optimal solutions. This can be expressed by the following Eq (10):

D_α = |C_1 · X_α − X|, D_β = |C_2 · X_β − X|, D_δ = |C_3 · X_δ − X|,
X_1 = X_α − A_1 · D_α, X_2 = X_β − A_2 · D_β, X_3 = X_δ − A_3 · D_δ,
X(t + 1) = (X_1 + X_2 + X_3) / 3. (10)

Through Eq (10), the wolves update their positions in the n-dimensional space according to α, β, and δ, as shown in Fig 2. The final position remains within the range defined by α, β, and δ, while the other wolves in ω continue to update their positions randomly around the target according to α, β, and δ, continuously estimating the target position.
The process of the GWO algorithm is described below, with a flowchart of the process shown in Fig 2: 1. Generate a random wolf pack (candidate solutions).
2. Update the position of each wolf in ω according to the target position estimated by α, β, and δ.
3. Decrease the value of a linearly from 2 to 0 during the iterative process, which shifts the search from exploration toward the hunt or attack. When |A| > 1, a wolf moves away from the target (exploration), and when |A| < 1, it moves closer to the target (exploitation), thereby avoiding local stagnation.
4. Terminate the GWO algorithm when the stopping criteria are met.
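The steps above can be sketched as a minimal GWO implementation. The sphere test function, population size, and iteration budget below are illustrative choices for demonstration, not the paper's settings.

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal grey wolf optimizer sketch (after Mirjalili et al., 2014)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))   # step 1: random wolf pack
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        order = np.argsort(fitness)
        alpha = X[order[0]].copy()                  # best three solutions are saved
        beta = X[order[1]].copy()
        delta = X[order[2]].copy()
        a = 2.0 - 2.0 * t / iters                   # step 3: a decreases linearly 2 -> 0
        for i in range(n_wolves):                   # step 2: omega wolves follow the leaders
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A_vec = 2.0 * a * r1 - a            # |A| > 1 explores, |A| < 1 exploits
                C_vec = 2.0 * r2
                D_vec = np.abs(C_vec * leader - X[i])
                new += leader - A_vec * D_vec
            X[i] = np.clip(new / 3.0, lb, ub)       # Eq (10): average of X1, X2, X3
    fitness = np.array([f(x) for x in X])
    return X[np.argmin(fitness)]                    # step 4: best wolf at termination

best = gwo(lambda x: float(np.sum(x ** 2)), dim=3)  # sphere test function
```

With these settings the pack contracts toward the sphere minimum at the origin as a shrinks toward zero.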
The robust H∞ control theory was proposed by the Canadian scientists Zames et al. [31]. The standard control cost function for this theory is given by:

J(t) = ∫ (x^T Q x + r‖u(t)‖² − ρ²‖d(t)‖²) dt,

The significance of this cost function lies in the fact that it represents a zero-sum game problem, where the disturbance variable d(t) tries to maximize the objective function J(t) while the control signal u(t) tries to minimize it. To solve this problem, a Riccati equation can be used to obtain an approximation. The general method for solving the Riccati equation is the Kleinman method [23]. However, in this paper, we introduce a novel GWO method for solving the Riccati equation in the robust H∞ control of UVMS.
The conventional Riccati equation arising from robust H∞ control is generally stated as [24]:

PA + A^T P − P(BB^T/r − BB^T/ρ²)P + Q = 0,

where, given the positive definite matrix Q and the coefficients r and ρ, a positive definite matrix P is to be solved for, with B the input matrix of the linear state-space model. The robust H∞ controller used to control the UVMS system is then obtained by u = −(1/r)B^T Px. The utilization of the proposed method is depicted in Fig 3 for better illustration. The objective of incorporating the GWO algorithm to optimize the conventional Riccati equation is to determine the optimal solution for the positive definite matrix P. The method of optimization is outlined in Algorithm 1, together with a comprehensive visualization of the control system design for the UVMS.
Algorithm 1
Step 1: Re-arrange the conventional Riccati equation into a min-function type:

min_P ‖PA + A^T P − P(BB^T/r − BB^T/ρ²)P + Q‖.

Step 2: The variable to be optimized using the GWO is the positive definite matrix P. For ease of implementation, set Q to be the identity matrix, r = 1, and ρ = 1.
Step 3: Initialize the wolves' positions randomly in proper quantities, and let a group of wolves represent a matrix P = diag([P_1, P_2, ..., P_n]).
Step 4: Select the optimal group of wolves as a candidate solution of the minimized function in Step 1.
Step 5: Use the optimal group of wolves as the center of the entire pack and update the positions of the other wolves by moving them one step closer to the center.
Step 6: Update the position of the new lead wolf according to the "winner takes all" rule and update the entire wolf system according to the "strong survive" mechanism.
Step 7: Check whether the optimization goal of the min-function in Step 1 has been reached. If it has, output the lead wolf's position as the optimal solution of the problem; if not, return to Step 5.
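Algorithm 1 can be sketched by treating the norm of the Riccati residual as the objective over a diagonal P. In the toy sketch below, a 1-DOF double integrator is used, the disturbance is assumed to enter through the input channel (E = B), ρ = 2 is chosen so the toy problem is well-posed, and a simple random search stands in for the GWO inner loop for brevity; these are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def riccati_residual(P, A, B, E, Q, r=1.0, rho=2.0):
    """Frobenius norm of the H-infinity Riccati residual
    A^T P + P A - P (B B^T / r - E E^T / rho^2) P + Q."""
    R = A.T @ P + P @ A - P @ (B @ B.T / r - E @ E.T / rho ** 2) @ P + Q
    return float(np.linalg.norm(R))

# Toy 1-DOF double integrator; E = B and rho = 2 are illustrative assumptions.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
E = B.copy()
Q = np.eye(2)

rng = np.random.default_rng(1)
best_p = np.ones(2)                      # P = diag([P1, P2]) as in Algorithm 1
best_val = riccati_residual(np.diag(best_p), A, B, E, Q)
for _ in range(5000):                    # random search standing in for the GWO loop
    cand = np.abs(best_p + 0.2 * rng.standard_normal(2))   # keep entries positive
    val = riccati_residual(np.diag(cand), A, B, E, Q)
    if val < best_val:                   # "winner takes all": keep only improvements
        best_p, best_val = cand, val
```

As in the paper's result, the minimum of the residual norm over a purely diagonal P need not be zero; the search simply drives it as low as the diagonal parameterization allows.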
Hardware-in-the-loop experiments
The hardware-in-the-loop experiment for the control of the UVMS (QYSEA FIFISH V6, with physical parameters listed in Table 1) is conducted using the advanced simulation software Simurv 4.0 [1]. The whole control structure is shown in the corresponding figure. The host computer used in the experiment features an AMD Ryzen R7-5800H CPU, a 512 GB SSD hard disk with an upgradeable capacity of 1 TB, 32 GB of memory, and a standalone RTX 3060 graphics card. The control of the UVMS is carried out through the Simurv 4.0 simulation environment to the on-board computer, a Jetson TX2 embedded system, with the following parameters: CPU, a dual-core NVIDIA Denver 2 64-bit CPU combined with a quad-core Arm Cortex-A57 MPCore processor; memory, 4 GB 128-bit LPDDR4 at 51.2 GB/s; storage, 16 GB flash memory with an M.2 M-key NVMe solid-state interface. The communication bus is CAN bus, and a digital signal processor (DSP) of the type TMS320C5X is used for sending and analyzing the driver data. The vision feedback detector used in the system is a 4K mono-camera. The chosen sample time is 100 ms.
A composite control scheme, combining a disturbance observer (DOB), an H∞ controller, and a GWO online solver, is applied to enhance the control performance of the UVMS in the face of dynamic uncertainties and unknown disturbances. The UVMS is initialized at a position of (0, 0, 0) and a manipulator configuration of q_m = [0, −45, −45, 0, 0, 0] × π/180 rad, while the target position for the manipulator is set at (−0.6, 1.2, 4). The optimal matrix P = diag([P_1, P_2, ..., P_n]) for the H∞ controller of the UVMS is found using Algorithm 1, with randomly initialized artificial wolves and the GWO parameters specified in Table 2. The inverse kinematics (IK) of the control goal task is solved to generate the joint-space reference. In order to determine the efficiency of the GWO algorithm in optimizing the performance of the H∞ controller, we compared it with other popular artificial intelligence algorithms such as PSO, ACO, and GA, recording the results of 200 trials; the data from all the trials is summarized in Table 3. The comparison was made to assess the performance of these algorithms in optimizing the Riccati equation of the H∞ controller. The best result among all the trials, over 100 iterations, is presented in Fig 7. The results of the comparison reveal that the GWO algorithm displays the fastest convergence rate in optimizing the Riccati equation, making it the best choice among the algorithms tested. The other algorithms, although popular, were found not to perform as well in this particular task. After utilizing the GWO algorithm to optimize the H∞ controller, the best tuning results were achieved: the corresponding minimum value of the min-function was calculated to be 1.314.
The optimized value of the matrix P was determined to be diag([1.32, 1.01, 1.01, 0.91, 1.36, 1.67, 0.87, 4.62, 9.01, 0.61, 0.71, 1.26]). With these parameters, the robust H∞ controller is given by u = −(1/r)B^T Px. To further assess the performance of our proposed method in controlling the UVMS, we compare it with the control results obtained from other well-established techniques, namely the H∞ controller of [24], the H∞ controller based on GA, the H∞ controller based on ACO, the H∞ controller based on PSO, SMC, and PID. The comparison results demonstrate that our proposed method surpasses the other algorithms in terms of rapid response, tracking accuracy, system stability, and interference suppression. Furthermore, our method exhibits an even more pronounced improvement over the traditional non-optimized algorithms. This is attributed to the inherent chattering issue of SMC, which, despite its ability to overcome certain bounded interference, yields poor results as it lacks optimization capability.
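For illustration, evaluating the control law u = −(1/r)B^T Px on a single 1-DOF slice of the error state might look like the following; the two entries of P are reused from the optimized diagonal above, while the error vector is hypothetical.

```python
import numpy as np

r = 1.0
P = np.diag([1.32, 1.01])          # first two optimized entries, reused for a 1-DOF slice
B = np.array([[0.0], [1.0]])       # input matrix of the double-integrator error model
x = np.array([0.1, -0.05])         # hypothetical [position error, velocity error]

u = -(1.0 / r) * (B.T @ P @ x)     # u = -(1/r) B^T P x
```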
The classic PID method struggles to identify suitable parameters and fails to achieve superior control performance in the presence of real-time water flow changes. This is evident from the convergence output depicted in Fig 8, where our proposed method effectively tracks the target and converges to the reference, showcasing a significant improvement over the other algorithms. This substantiates its efficacy in fulfilling the design requirements of the UVMS.
Throughout the experiment, the data was carefully recorded and analyzed to assess the performance of the control system. Utilizing the robust H∞ controller optimized by the GWO algorithm, the UVMS was able to successfully track and grasp the target. The entire control process can be observed in Figs 9-12, which demonstrate the UVMS's ability to effectively maneuver and accurately track the target. Additionally, the detailed motion trajectory of the UVMS, including its position tracking, velocity variations, acceleration variations, and generalized forces, is shown in Figs 12-16. These figures clearly illustrate the effectiveness of the proposed control method, demonstrating its ability to accurately track the target and accomplish the task at hand. It can be concluded that the use of the GWO algorithm to optimize the H∞ controller results in a highly effective and efficient control system for the UVMS.
Conclusion
In conclusion, the hardware-in-the-loop experiment of the GWO-optimized H∞ controller for UVMS demonstrates the effectiveness of our approach. By transforming the Lagrange dynamic equation into a state-space linear model, the nonlinear UVMS control problem with lumped disturbances was effectively addressed. The GWO algorithm was chosen for its fast convergence in optimizing the traditional robust H∞ controller, which was successfully applied to the control of the experimental nonlinear UVMS. The experimental results showed that the convergence speed of the control system met engineering application standards, and the velocity, acceleration, and generalized forces across the degrees of freedom of the UVMS were all stable and smooth with minimal overshoot. This intelligent controller provides a promising solution for controlling nonlinear UVMS in the future. Further tests will be conducted to fully assess its performance.
Fig 5. The computer configuration used.
Development of a Whole-Virus ELISA for Serological Evaluation of Domestic Livestock as Possible Hosts of Human Coronavirus NL63
Known human coronaviruses are believed to have originated in animals and made use of intermediate hosts for transmission to humans. The intermediate hosts of most of the human coronaviruses are known, but not for HCoV-NL63. This study aims to assess the possible role of some major domestic livestock species as intermediate hosts of HCoV-NL63. We developed a testing algorithm for high throughput screening of livestock sera with ELISA and confirmation with recombinant immunofluorescence assay testing for antibodies against HCoV-NL63 in livestock. Optimization of the ELISA showed a capability of the assay to significantly distinguish HCoV-NL63 from HCoV-229E (U = 27.50, p < 0.001) and HCoV-OC43 (U = 55.50, p < 0.001) in coronavirus-characterized sera. Evaluation of the assay with collected human samples showed no significant difference in mean optical density values of immunofluorescence-classified HCoV-NL63-positive and HCoV-NL63-negative samples (F (1, 215) = 0.437, p = 0.509). All the top 5% (n = 8) most reactive human samples tested by ELISA were HCoV-NL63 positive by immunofluorescence testing. In comparison, only a proportion (84%, n = 42) of the top 25% were positive by immunofluorescence testing, indicating an increased probability of the highly ELISA reactive samples testing positive by the immunofluorescence assay. None of the top 5% most ELISA reactive livestock samples were positive for HCoV-NL63-related viruses by immunofluorescence confirmation. Ghanaian domestic livestock are not likely intermediate hosts of HCoV-NL63-related coronaviruses.
Introduction
The importance of coronaviruses as emerging zoonotic viruses became evident after the international public health threat caused by severe acute respiratory syndrome coronavirus (SARS-CoV) in 2002/2003 [1]. Thereafter, there have been several studies that looked for novel coronaviruses aimed at assessing their zoonotic potential [2][3][4][5]. Coronaviruses are members of the order Nidovirales and family Coronaviridae, which are made up of single-stranded positive-sense RNA genomes and infect both mammalian and avian hosts. They are divided into four genera, namely Alphacoronavirus, Betacoronavirus, Gammacoronavirus, and Deltacoronavirus [6,7]. In 2003, a coronavirus belonging to the Alphacoronavirus genus was discovered in an infant in the Netherlands and was designated human coronavirus NL63 (HCoV-NL63) [8]. This, among other coronaviruses, namely human coronavirus 229E (HCoV-229E), human coronavirus OC43 (HCoV-OC43), human coronavirus HKU1 (HCoV-HKU1), Middle East respiratory syndrome coronavirus (MERS-CoV), and SARS-CoV, predominantly causes respiratory disease [1,9,10]. Human coronavirus NL63 has a worldwide distribution and is known to be associated with both upper and lower respiratory tract infections in both adults and children, with seroconversion occurring at a very early age [11,12]. Most of the known human coronaviruses are believed to have originated from mammalian reservoirs such as bats and used other mammalian hosts as intermediate hosts before ending up in the human population. Some of these, like HCoV-229E and MERS-CoV, used camelid species [13][14][15][16], while SARS-CoV went through Himalayan palm civets as intermediate hosts [17,18]. Further, HCoV-OC43 is reported to have originated directly from cattle [19]. Unlike these groups of coronaviruses, HCoV-NL63 and HCoV-HKU1 have no known intermediate mammalian hosts.
Human coronavirus NL63 is known to use the same receptor as SARS-CoV [20], and may therefore, like some SARS-CoV-related viruses, be capable of infecting swine [21]. This assertion is, however, yet to be explored through surveillance data. Different serological studies have mainly employed enzyme-linked immunosorbent assay (ELISA) and immunofluorescence assay (IFA) approaches for investigating HCoV-NL63 [22][23][24][25]. Most of these assays are designed for specific purposes ranging from seroprevalence studies to studies of the HCoV-NL63 genome [12,26], and would therefore vary in parameters like sensitivity and specificity. There is no single assay that is widely accepted as the standard for serological detection of HCoV-NL63, and this presents a challenge in the general validation of new assays. Coronaviruses have the potential to recombine to produce new viruses [27], and as such, knowledge of potential hosts other than humans that can be infected by two human coronaviruses is important to provide information on potential sources of novel human coronaviruses that may later spillover into human populations and cause disease. Knowledge of potential intermediate hosts of human coronaviruses will also provide information on the evolution of coronaviruses in general and interspecies transmission events that lead to emergence. The purpose of this study was therefore to assess the potential of domestic livestock species as intermediate hosts for HCoV-NL63.
Study Sites
Commercial and household livestock farms across Ghana were targeted and a purposive sampling strategy was adopted. Target farms were shortlisted and visited to engage and sensitize the farm owners, family, and workers as well as the entire community. During the sensitization visits, the objectives of the study, the study design, and information on use of data was provided to potential participants. Participants were allowed to ask questions and were further encouraged to seek clarification on issues they were not convinced about.
Characteristics of Study Participants
The respondents in the study were sampled from both commercial and household farms. The median age of participants in the study was 34 years (range 13 to 77 years) and majority of participants were below the age of 40 years (n = 153, 61.7%). No ages were recorded for 5 people who either did not know or were not willing to indicate their ages. The ratio of male to female participants in the study was 194 (78.2%) males to 51 (20.6%) females, and the sexes of 3 people (1.2%) were not recorded (Table 1).
Collection of Serum Samples
Serum samples were obtained from livestock farmers and their family members to be used as reference samples for assay evaluation, after consent was obtained from the participants. From livestock, 10 mL of whole blood was collected, and from humans, 5 mL, by trained veterinary technicians and clinical phlebotomists, respectively. Blood samples were then transported to the laboratory, where they were centrifuged to obtain serum, which was immediately frozen in liquid nitrogen.
Algorithm for Determination of Seropositivity and Considerations for Testing
We developed a whole-virus enzyme-linked immunosorbent assay (ELISA) to test for HCoV-NL63 in livestock as part of a two-stage testing algorithm also involving a recombinant immunofluorescence assay. For a sample to be considered positive for HCoV-NL63, it had to be among the top 5% most reactive samples as determined with the whole-virus ELISA and also positive in a confirmatory test with a more specific recombinant immunofluorescence assay (rIFA) [28]. This was the procedure adopted for swine, sheep, and goat sera. This ELISA relied on bovine products in the form of fetal calf serum in cell culture and milk powder for blocking and dilution of sera; as such, cattle sera were tested directly with the recombinant immunofluorescence assay, which had been optimized with fewer bovine products in the testing process to minimize background signals. Few donkey samples were obtained, and these were also tested directly with the recombinant immunofluorescence assay. A selection of coronavirus-characterized serum samples was used for assay optimization and the study samples for evaluation. All serum samples were heat-inactivated at 56 °C for 30 min before testing. High-titer virus stocks of HCoV-NL63 were produced by growing the wild-type virus (kindly provided by Lina Gottula from the lab of Prof. Christian Drosten) on rhesus monkey kidney epithelial cells (LLC-MK2). This was done by first growing the cells to about 80% confluence in 162 cm² cell culture flasks. A 1:17 dilution (v/v) of high-titer virus stock in 10 mL serum-free medium (Gibco™, Thermo Fisher, Waltham, MA, USA) was prepared and used to infect the cells for 1 h at 37 °C. Fresh Dulbecco's Modified Eagle Medium (Gibco™, Thermo Fisher, US) supplemented with 10% fetal calf serum (FCS) was added after infection. The flasks were incubated at 37 °C in 5% CO2 and harvested on day seven.
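The top-5% screening rule of the testing algorithm can be sketched as follows; the optical-density values below are hypothetical and only illustrate how samples would be flagged for rIFA confirmation.

```python
import numpy as np

def flag_for_confirmation(od_values, top_fraction=0.05):
    """Boolean mask of samples in the top `top_fraction` of ELISA optical
    densities; flagged samples proceed to rIFA confirmatory testing."""
    cutoff = np.quantile(od_values, 1.0 - top_fraction)
    return od_values >= cutoff

# Hypothetical background-corrected ELISA ODs for 20 serum samples:
od = np.array([0.12, 0.45, 0.08, 1.30, 0.22, 0.95, 0.31, 0.10,
               0.27, 0.44, 0.19, 0.88, 0.15, 0.33, 2.10, 0.29,
               0.41, 0.26, 0.37, 0.50])
mask = flag_for_confirmation(od)
```

With 20 samples, the top-5% rule flags only the single most reactive serum.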
Virus Concentration by Ultracentrifugation
High-titer virus stocks were concentrated by ultracentrifugation using a 20% sucrose cushion. Centrifugation was done in an SW 32 Ti rotor (Beckman Coulter, Brea, CA, USA) at 32,000 rpm for 4 h under vacuum at 4 °C. The virus pellet was resuspended in 1 mL of 1× phosphate-buffered saline (PBS) and kept at 4 °C for 24 h to allow the pellet to dissolve fully.
Virus Inactivation
The ultracentrifuge-purified virus was inactivated in a 6-well tissue culture plate with 0.1% β-propiolactone (ACROS Organics™, Thermo Fisher, US). This was done overnight at 4 °C, followed by incubation at 37 °C in a cell culture incubator to hydrolyze the β-propiolactone. The virus was then cultured on LLC-MK2 cells and checked by quantitative real-time PCR for virus growth.
Viral Protein Quantification
The amount of protein in the virus stock was quantified using the Bradford assay as previously described [29]. Briefly, the viral protein and a two-fold serial dilution of a protein standard (bovine serum albumin, Carl Roth, Karlsruhe, Germany) in sodium carbonate (Na2CO3) buffer were mixed with Bradford solution (Coomassie Plus™, Thermo Fisher, US) and incubated at room temperature for 10 min. Protein quantity was subsequently measured on a spectrophotometer (Eppendorf, Hamburg, Germany) at 595 nm.
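The Bradford quantification step amounts to fitting a standard curve to the BSA dilution series and interpolating unknown samples; the A595 readings below are hypothetical values used only to illustrate the calculation.

```python
import numpy as np

# Hypothetical A595 readings for a two-fold BSA dilution series (mg/mL):
bsa_conc = np.array([2.0, 1.0, 0.5, 0.25, 0.125])
a595 = np.array([1.10, 0.58, 0.31, 0.17, 0.10])

# Linear fit of absorbance vs concentration (Bradford linear range):
slope, intercept = np.polyfit(bsa_conc, a595, 1)

def protein_conc(absorbance):
    """Interpolate an unknown sample's concentration from the standard curve."""
    return (absorbance - intercept) / slope

sample = protein_conc(0.45)   # concentration of a sample reading A595 = 0.45
```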
Western Blot Analysis
For Western blot analysis, 21 µL of the ultracentrifuge-purified viral protein was treated with 7 µL of NuPAGE® Laemmli sample buffer (4×) (Thermo Fisher, US) and heated on a heating block at 99 °C for 5 min with rocking at 400 revolutions per minute. This was then used for sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) along with a recombinant HCoV-NL63 virus control derived from transfected LLC-MK2 whole cell lysate. The separated proteins were then electroblotted onto a polyvinylidene difluoride (PVDF) membrane of 0.2 µm pore size (Thermo Fisher, US) and blocked with 5% milk powder dissolved in 0.1% PBS-Tween® 20 (PBS-T). Following incubation with rabbit anti-nucleocapsid and rabbit anti-membrane primary antibodies (kindly provided by Lia van der Hoek, Department of Medical Microbiology, University of Amsterdam), the membrane was washed with PBS-T and incubated with horseradish peroxidase (HRP)-conjugated goat anti-rabbit immunoglobulin G (Cell Signaling Technology, Danvers, MA, USA) for 1 h at room temperature. The membrane was subsequently analyzed with a chemiluminescent substrate, the SuperSignal West Femto Maximum Sensitivity Substrate kit (Thermo Fisher, US), according to the manufacturer's instructions.

Conditions for sample testing were determined by testing several combinations of conditions using the protocol of an in-house recombinant MERS-CoV ELISA as reference (Appendix A) and adopting the most optimal conditions. Dilution series of antigen ranging from 12 µg/mL to 0.1875 µg/mL and of conjugate ranging from 1:500 to 1:4000 were tested to determine the optimal coating concentration and optimal conjugate dilution, respectively. The determined optimal conjugate dilution for human testing was used as a starting concentration for determination of the optimal conjugate dilution for livestock testing.
Coronavirus-characterized sera previously tested for HCoV-NL63 by a recombinant immunofluorescence assay were used as positive and negative test sera and were tested in replicates at a starting dilution of 1:100, then compared to 1:200, for assay optimization. These sera were also, in combination, either positive or negative for HCoV-229E and HCoV-OC43 (Table 2) and were used to assess potential cross-reactivity with other coronaviruses. These serum samples were obtained from the lab of Prof. Christian Drosten. Different substrate exposure times were also tested to determine the most optimal duration. In brief, viral protein was coated onto 96-well Nunc MicroWell™ plates (Thermo Fisher, US) by diluting the stock to 0.75 µg/mL in Na2CO3 buffer (0.1 M, pH 9.6) and coating with 50 µL per well with overnight incubation at 4 °C. The plates were washed 5 times with 0.1% PBS-T, blocked with 5% milk powder (Carl Roth, Germany) in PBS-T for one hour at room temperature, and the wash repeated. Sera to be tested were diluted 1:200 in 1% milk powder in PBS-T. After a one-hour incubation at room temperature, plates were washed 5 times with PBS-T and goat anti-human antibody labelled with horseradish peroxidase (Dianova GmbH, Hamburg, Germany) was added at a dilution of 1:2000. For livestock testing, HRP-coupled donkey anti-sheep, goat anti-swine, and donkey anti-goat antibodies (Dianova GmbH, Hamburg, Germany) were used for sheep, swine, and goat testing, respectively, also at a 1:2000 dilution. The conjugate was incubated at room temperature for 30 min, after which plates were washed 5 times with PBS-T and an enzyme substrate, 3,3′,5,5′-tetramethylbenzidine (TMB) (Mikrogen Diagnostik, Neuried, Germany), was added. This was kept in the dark for 15 min, the reaction was stopped with 2 M sulphuric acid (H2SO4), and the absorbances were read at 450 nm and 630 nm on a BioTek Synergy 2 (BioTek Instruments Inc., Winooski, VT, USA) multi-detection plate reader.
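The plate setup above involves straightforward dilution arithmetic (0.75 µg/mL coating, 1:200 sera, 1:2000 conjugate). The sketch below works through two such calculations; the antigen stock concentration (120 µg/mL) and the ~10% dead-volume allowance are assumptions chosen purely for illustration.

```python
def dilution_volumes(final_vol_ul, fold):
    """Volumes of stock and diluent (uL) for a 1:fold dilution."""
    stock = final_vol_ul / fold
    return stock, final_vol_ul - stock

# Serum at 1:200 in 5 mL of 1% milk/PBS-T:
serum_ul, diluent_ul = dilution_volumes(5000, 200)
print(serum_ul, diluent_ul)  # 25.0 4975.0

# Antigen coating: dilute a hypothetical 120 ug/mL stock to 0.75 ug/mL,
# 50 uL per well for a full 96-well plate plus ~10% dead volume.
coat_vol_ul = 96 * 50 * 1.1
stock_ul = coat_vol_ul * 0.75 / 120.0
print(round(coat_vol_ul), round(stock_ul, 1))  # 5280 33.0
```

The same helper covers the 1:2000 conjugate step by changing the `fold` argument.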
Recombinant Spike Immunofluorescence Testing
Screening by recombinant immunofluorescence assay was conducted as previously described [30]. Briefly, Vero B4 cells were transfected with pCG1 eukaryotic expression vectors bearing the complete HCoV-NL63 spike sequence. Transfected cells were incubated overnight, after which cells were harvested and spotted onto multi-test slides (12 spots, 5 mm diameter, Dunn Labortechnik GmbH, Asbach, Germany), fixed with ice-cold acetone/methanol, and stored dry at 4 °C until use. Serum samples were tested at a 1:40 dilution for 1 h at 37 °C, which was optimal for reducing nonspecific reactions while maintaining sensitivity. Secondary detection was performed with Alexa Fluor 488-conjugated goat anti-human antibody (Dianova GmbH, Hamburg, Germany) for human testing. For livestock testing, Alexa Fluor 488-conjugated goat anti-bovine, goat anti-horse, goat anti-swine, donkey anti-sheep, and donkey anti-goat antibodies (Dianova GmbH, Hamburg, Germany) were used to test cattle, donkey, swine, sheep, and goat sera, respectively. These secondary antibodies have previously been confirmed to work on the tested species [28,31]. Each sample was spotted onto transfected and non-transfected cells to help distinguish autofluorescence from fluorescence due to immune reactions.
Ethical Issues
Ethical approval for the study was obtained from the Committee on Human Research, Publications and Ethics of the School of Medical Sciences, Kwame Nkrumah University of Science and Technology (Protocol number CHRPE49/09). Permission for livestock sampling was also obtained from the Wildlife Division of the Ghana Forestry Commission (Approval Number: AO4957).
Data Analysis
Descriptive graphs were generated using Microsoft Excel and IBM Statistical Package for Social Sciences (SPSS) version 20. After subtraction of plate background, differences in rIFA-categorized mean optical densities were assessed by one-way analysis of variance (ANOVA) and percentiles were used to determine the most ELISA reactive samples for both human and livestock samples using SPSS. A Mann-Whitney U test was used to compare the mean ranks of optical density values of HCoV-NL63-rIFA-characterized sera for assay optimization. The number of samples with optical density values above the 75th, 80th, 85th, 90th, and 95th percentile optical density values constituted the top 25, 20, 15, 10, and 5 percent most reactive samples respectively. Map data was plotted with Tableau public 10.5 and a livestock distribution beeswarm plot was generated using R statistical package version 3.4.3.
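The percentile-based cut points described above can be reproduced with a few lines of NumPy. The OD values below are simulated stand-ins (normal noise around the reported mean OD of roughly 0.45), not the study's data; only the cut-point logic mirrors the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
od = rng.normal(0.45, 0.10, 217)  # simulated ODs for 217 samples

# Samples with ODs above the 95th percentile constitute the top 5%
# most reactive; repeat for the other cut points used in the study.
for pct in (75, 80, 85, 90, 95):
    cut = np.percentile(od, pct)
    n_top = int((od > cut).sum())
    print(pct, round(cut, 3), n_top)
```

With 217 samples, the 95th percentile cut point leaves roughly 11 samples in the top 5%, matching the intent of the two-stage algorithm's first stage.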
Distribution of Samples Collected
A total of 248 people were sampled from five different regions of Ghana. The majority of people sampled were in the Ashanti region (n = 83, 33.5%) and the fewest in the Brong Ahafo region (n = 16, 6.5%). A total of 1321 serum samples from 397 pigs, 422 sheep, 320 goats, 163 cattle, and 19 donkeys were collected in the study. The majority of swine samples (n = 182, 45.8%) and sheep samples (n = 159, 37.7%) were collected from the Ashanti region. The Northern region was the source of the majority of the goat samples (n = 117, 36.6%) as well as all 19 donkey samples, and the majority of the cattle samples were obtained from the Volta region (n = 76, 46.6%) (Figure 1).
Analysis of Virus Protein
Western Blot Analysis

Western blot analysis was performed to confirm the identity and immunogenicity of two major structural proteins of HCoV-NL63 in the concentrated virus antigen. The viral antigen obtained after ultracentrifugation and inactivation showed the nucleocapsid (N) protein around the 40 kilodalton mark and the membrane (M) protein around the 25 kilodalton mark (Figure 2). The nucleocapsid band for the transfected LLC-MK2 whole cell lysate-derived control was at a similar position (~40 kilodaltons) but was more prominent than that of the test virus protein. The membrane protein band of the control was, however, less prominent than that of the test virus protein (Figure 2), indicating a likely higher proportion of whole virion particles in the concentrated virus antigen. The approximate molecular masses of the two structural proteins and their detection by the respective primary antibodies confirmed their identity and immunogenicity.
Determination of ELISA Testing Conditions
A range of plate coating concentrations was assessed to determine the saturation point in order to inform selection of an appropriate coating concentration. There was a two-fold reduction in average optical density (OD), measured using the HCoV-NL63-rIFA-positive test serum, over the coating concentration range 0.75 μg/mL to 0.1875 μg/mL (Figure 3A), depicting the range of consistent detectable variation. There was no consistent variation in the average OD of the HCoV-NL63-rIFA-positive serum sample tested with coating concentrations exceeding 0.75 μg/mL (Figure 3A), and as such, this value was chosen as the coating concentration for the assay.
For serum 1, the mean OD (combining the 5- and 15-min substrate incubation times) increased from the 1:4000 to the 1:500 conjugate dilution. For serum 4, there was no increase in mean OD from 1:4000 to 1:2000, but there was an increase from 1:2000 to 1:500 (Figure 3B). The 1:2000 dilution was selected for testing human samples due to the low reactivity of the negative serum 4, despite the increase in reactivity of the positive serum 1.
The 15-min substrate incubation time produced higher reactivity for serum 1 with a median (interquartile range) OD of 0.338 (0.236 to 0.344) compared to the 5-min incubation time with 0.187 (0.163 to 0.191). The reactivity was similarly higher for the 15-min incubation time with a median (interquartile range) OD of 0.06 (0.025 to 0.099) than the 5-min incubation time with 0.022 (0.016 to 0.024) for serum 4 ( Figure 4). The 15-min incubation time was selected for higher sensitivity.
The 1:2000 conjugate dilution was also found to be optimal for livestock sera testing in terms of signal-to-noise, showing a higher difference in mean OD between livestock sera and wells tested with conjugate only than the other dilutions (Figure 5A).
A 1:200 serum dilution was chosen for serum testing as this gave a better discrimination between the positive and negative serum samples compared to a 1:100 dilution ( Figure 5B). The further dilution of the serum provided better detection of the target protein in the positive serum and lower reactivity in the negative serum as observed in Figure 5B.
Potential Cross Reactivity with Other Coronaviruses
The ability of the assay to specifically detect and significantly differentiate HCoV-NL63 in samples co-infected with other coronaviruses was assessed. Likelihood of false detections in samples that were HCoV-NL63 negative but positive for other coronaviruses was also assessed. Different degrees of cross-reactivity with HCoV-229E and HCoV-OC43 were observed with a higher level seen with HCoV-229E than with HCoV-OC43 ( Figure 6). There was a statistically significant difference (U = 27.50, p < 0.001) as determined by the Mann-Whitney U test in mean ranks of optical density between serum 1 (mean OD = 0.33) which was positive by rIFA for both HCoV-NL63 and HCoV-229E and serum 3 (mean OD = 0.27) which was negative for HCoV-NL63 but positive for HCoV-229E ( Figure 6A). The difference in optical density values of serum 1 which was positive for HCoV-NL63 and HCoV-OC43 and serum 2 (mean OD = 0.19) which was negative for HCoV-NL63 but positive for HCoV-OC43 ( Figure 6B) was also statistically significant (U = 55.50, p < 0.001). This indicates the assay's capability of distinguishing HCoV-NL63 from these coronaviruses.
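A Mann-Whitney U comparison of OD values, as reported above, can be sketched with SciPy. The replicate values below are invented to mimic the reported group means (0.33 for serum 1 vs 0.27 for serum 3); they are not the study's raw data, so the U and p values will not match those reported.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative replicate ODs for two rIFA-characterized sera
# (invented values approximating the reported means).
serum1 = np.array([0.34, 0.33, 0.32, 0.35, 0.31, 0.33, 0.34, 0.32])
serum3 = np.array([0.27, 0.26, 0.28, 0.27, 0.25, 0.28, 0.26, 0.27])

# Two-sided test comparing the rank distributions of the two groups.
u, p = mannwhitneyu(serum1, serum3, alternative="two-sided")
print(u, p)
```

Because every serum 1 replicate here exceeds every serum 3 replicate, U equals its maximum (n1 × n2 = 64) and the p-value is small, mirroring the clear separation reported in the study.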
Evaluation of ELISA with HCoV-NL63-rIFA Test of Human Samples
After testing the 248 human samples using the immunofluorescence assay, 217 samples were determined to be unequivocal positives and negatives by two independent assessors. These were samples that did not produce significant background noise and autofluorescence to hinder result determination and were used in analysis. Out of this number, 183 (84.3%) were positive and 34 (15.7%) were negative. Optical density values for the human sample testing ranged from 0.27 to 0.73 for the HCoV-NL63-rIFA-positive samples and 0.30 to 0.63 for HCoV-NL63-rIFA-negative samples (Figure 7). A comparison of the mean OD values of rIFA positive (0.47 ± 0.10) and negative (0.46 ± 0.09) groups showed no statistically significant difference between the groups as determined by one-way ANOVA (F (1, 215) = 0.437, p = 0.509). Assay validation parameters like sensitivity and specificity could not be reliably estimated as a result of the lack of an available gold standard assay for HCoV-NL63 serology. A gradual increase in the ratio of HCoV-NL63-rIFA positives to negatives with an increase in cut point OD by percentiles was observed. All the top 5% most ELISA reactive samples were HCoV-NL63-rIFA-positive as compared to the top 25% most reactive samples of which only 84% were HCoV-NL63-rIFA-positive (Table 3) indicating a higher probability of the most ELISA reactive samples testing positive by immunofluorescence testing.
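The trend described above, where the fraction of rIFA positives rises as the ELISA cut point moves up through the percentiles, can be sketched as follows. The ODs and labels below are simulated stand-ins that only reuse the reported counts (183 rIFA-positive, 34 rIFA-negative); the distributional assumptions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 217
# Simulated stand-ins: rIFA labels (183 positive, 34 negative) and ODs,
# with positives drawn from a slightly higher distribution (assumption).
rifa_pos = np.array([True] * 183 + [False] * 34)
od = np.where(rifa_pos,
              rng.normal(0.48, 0.10, n),
              rng.normal(0.44, 0.09, n))

# Fraction of rIFA positives among the top-k% most ELISA-reactive samples.
for pct in (75, 80, 85, 90, 95):
    cut = np.percentile(od, pct)
    frac_pos = rifa_pos[od > cut].mean()
    print(pct, round(float(frac_pos), 2))
```

On the study's real data this table is what yields 84% rIFA positivity in the top 25% and 100% in the top 5% (Table 3).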
HCoV-NL63 in Livestock Samples
In order to determine the livestock ELISA reactivity patterns and the most reactive sheep, goat, and swine samples, these livestock species were subjected to screening with the developed ELISA. The optical density values for the livestock testing ranged from 0.0 to 0.32 for sheep, 0.02 to 0.68 for goats, and 0.04 to 0.74 for swine ( Figure 8). None of the most reactive swine, sheep, and goat sera as determined by the 95th percentile OD cut point tested positive by rIFA. Donkeys (n = 19) and cattle (n = 163) that were tested for HCoV-NL63 by rIFA were all negative as well (Table 4). Given the relatively large number of samples tested across species and the lack of positivity, these livestock species sampled in Ghana are not likely to be intermediate hosts for HCoV-NL63.
Discussion
For the purpose of sero-surveillance in an effort to detect antibodies from Ghanaian cattle, sheep, goats, swine, and donkeys against HCoV-NL63-related viruses, an indirect, whole-virus ELISA was developed in this study as part of a two-stage testing algorithm. This was done in order to assess the possibility of any of the previously mentioned species being an intermediate host for this virus. The recombinant immunofluorescence assay described in this study is a robust, sensitive, and specific assay [32]. The assay is, however, time consuming and requires an experienced person to interpret results, and as such, is not suitable for use on a large scale. Purification of HCoV-NL63 antigen through the sucrose medium is a method that has previously been used for HCoV-NL63 [24] and other coronaviruses [33,34] and has been found to be an effective method of antigen purification and concentration as was seen in this study as well. Although the sucrose cushion is not as effective as the density gradient for the separation of complete from incompletely assembled virion particles [35], the sucrose cushion used in this study appeared to be sufficiently effective for this purpose.
The signal comparison for the positive and negative test samples was adequate for discrimination despite the negative sample being positive for HCoV-229E, which belongs to the same serologic group as HCoV-NL63, and for HCoV-OC43, which belongs to the other of the two serologic groups of human coronaviruses. Lack of cross-reactivity between HCoV-NL63 and the more closely related HCoV-229E, as well as with HCoV-OC43, has been reported by other studies that employed recombinant ELISAs targeting the nucleocapsid protein [12,36]. Although discrimination was possible in the present study using the whole-virus antigen, some degree of cross-reactivity was also observed.
Apart from cross-reactivity with the other coronaviruses, antibodies may cross-react with other unrelated proteins in the serum. The sera used in optimizing the ELISA and the tested samples had different demographic characteristics and as such the level of reactivity may differ in the tested samples compared to the samples used for optimization. This is however useful given the assay was to be eventually used for testing sera from different species to the one used for optimization and evaluation. The highly reactive samples, however, are more likely to be positive for the target of interest as was observed in this study and other studies as well [32,37].
The number of known positive and negative samples used in evaluation affects the likely diagnostic sensitivity and specificity of a candidate assay [38]. In the present study, fewer negative than positive samples were obtained for the evaluation of the assay, as a result of the samples being taken from a cross-section of the same population from which the eventual test subjects, albeit of a different species, were also obtained. The fewer negatives obtained in the cross-section and used in the evaluation are likely to result in a less accurate assessment of diagnostic specificity. The purpose of the present assay did not, however, require a highly accurate measure of specificity; interest was geared towards sensitivity. These parameters were, however, not estimated due to the lack of a gold standard assay.
Being the main immunogenic structural proteins of coronaviruses, the nucleocapsid, spike, and membrane proteins are important in assay development [39][40][41]. The nucleocapsid protein is produced abundantly during infection and is employed in assay development because it is a potent immunogen [40,42]. One study on SARS-CoV showed the nucleocapsid induced the production of antibody levels comparable to the whole virus and slightly higher than the spike protein [43]. The reactivity pattern observed in the present study with the full virus antigen will comprise a collective effect of specific and non-specific interactions of serum antibodies with the three main immunogenic structural proteins and other protein moieties. Although the nucleocapsid protein is the most abundantly produced during infection with HCoV-NL63 [44,45], the membrane protein is more abundant in the complete virion particle than nucleocapsid protein [46,47]. This was seen in the Western blot analysis after ultracentrifugation with the sucrose cushion which evidently concentrated more complete virion particles. The immune responses observed are likely to be mainly due to the membrane protein because of its abundance in the whole virus antigen.
For simple in-house preparations, the indirect ELISA is a good choice and also provides high sensitivity and flexibility [48,49]. The limitations with this process include possibility of high background signal due to the binding of all proteins to the wells of ELISA plates and non-specific binding of the secondary antibody [50]. The competitive ELISA technique has an added advantage of no requirement for sample clean-up and a high sensitivity to differences in composition of complex mixtures of different antigens even in the presence of relatively small quantities of the specific detection antibody [51,52]. Whole virus antigen preparations like the one used in this study have generally been found to be more sensitive than recombinant antigen targets [53,54] but tends to be less specific as a result of higher likelihood of non-specific binding of co-purified cellular proteins and non-target viral proteins [55,56].
Although several bat species have been found to harbor alpha- and betacoronaviruses believed to be the ancestors of endemic human coronaviruses, including HCoV-NL63 [13,23], bats may not have been a direct source of virus transmission to humans, given that CoVs such as SARS-CoV and MERS-CoV both make use of terrestrial mammals, which are more likely to have contact with humans than bats, as transmission hosts. Again, HCoV-229E is more closely related to its relatives in camels than to those in bats, indicating a probability of camels being intermediate hosts between bats and humans [14,57]. Human coronavirus NL63 uses the angiotensin-converting enzyme (ACE) 2 receptor for infection of target cells, similar to SARS-CoV [20], and has been found to be able to replicate in swine cells in vitro [58]. No antibodies to HCoV-NL63 were found in any of the pigs tested in the present study, evidence that the ability to replicate in swine cells does not imply the capability to infect an actual animal, since several other barriers need to be surmounted for this to happen. Based on the results of the present study, it may be concluded that cattle, sheep, goats, donkeys, and swine may not be intermediate hosts for HCoV-NL63. However, there have not been any reports of HCoV-NL63-related viruses circulating in Ghanaian bats, and as such, a spillover opportunity may not be present, and hence no likely infection. Surveillance of local livestock populations could also be performed for antibodies in areas where such HCoV-NL63-related viruses have been detected, as in Kenya, to confirm this [57].
Coronaviruses have been shown to have the potential to mutate and genetically recombine when two viruses infect the same cell [59], as seen, for instance, with recombination events between canine coronavirus and transmissible gastroenteritis virus, and between canine coronavirus and feline coronaviruses, that have brought about new coronaviruses [60,61]. These new viruses may have a different host range, particularly if changes occur in the spike region, or a different pathology in the same host; as such, knowing the possible intermediate hosts of coronaviruses that infect humans is important, as these provide information on the evolution of the virus as well as possible mixing vessels for these viruses.
"year": 2019,
"sha1": "697cf5b78c388d6e79a7a87ac9b82e4911de135a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/11/1/43/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "697cf5b78c388d6e79a7a87ac9b82e4911de135a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Cloning and characterization of a cDNA for murine macrophage inflammatory protein (MIP), a novel monokine with inflammatory and chemokinetic properties [published erratum appears in J Exp Med 1989 Dec 1;170(6):2189]
In the course of studies on cachectin/TNF being conducted in our laboratory, a novel macrophage product has been detected and characterized. Termed macrophage inflammatory protein or MIP, this protein appears to be an endogenous mediator of the inflammatory events induced by endotoxin. A cloned cDNA probe for this protein has been isolated from a lambda gt10 phage library prepared from poly(A)+ RNA obtained from endotoxin-induced RAW264.7 cells. The sequence codes for a 92 amino acid-long polypeptide, of which 69 amino acids correspond to the mature product. The sequence predicts a molecular weight of 7,889, and structural analysis of the protein indicates a characteristic signal sequence alpha-helix and a hydrophobic core. Sequence data also confirm no sequence similarity to any other protein listed in the Dayhoff data base.
Figure 1. 512-fold degeneracy oligonucleotide probe pools for MIP, designed from the NH2-terminal amino acid sequence (amino acids 22 to 30). The asterisk below the base indicates a constant base change between the two probe pools.

The probes were labeled with [32P]ATP (New England Nuclear, Boston, MA). After the hybridization, the lifts were washed using the method of Wood et al. (10). After several rounds of screening, 18 recombinant phage clones were isolated and grown in bulk for DNA isolation.
DNA Sequence Analysis.
The cDNA inserts to be analyzed were subcloned into the M13 phage vectors and DNA sequencing was performed by the dideoxy-chain termination method of Sanger et al. (11).
Blot Hybridization Analysis.

Northern blot hybridization was performed according to the method of Lehrach et al. (12). Total RNA of LPS-stimulated and nonstimulated RAW264.7 cells was electrophoresed through 1.2% agarose gels and transferred to nitrocellulose filters.

Primer Extension.
The synthetic oligonucleotide primer was end labeled using γ-[32P]ATP (3,000 Ci/mmol, Amersham Corp., Arlington Heights, IL) and T4 polynucleotide kinase. The primer extension method was a modification of that described by Walker et al. (13).
Results and Discussion
To elucidate the molecular structure of murine MIP (MuMIP) a cDNA clone was isolated containing the sequence coding for MuMIP As a first step, the mouse macrophage cell line RAW264 .7 was stimulated with LPS . Since RAW264 .7 cells have been shown to be a source of MIP protein after LPS stimulation, the MIP mRNA was expected to be highly reiterated in these cells 2 h after LPS induction of these cells . Poly(A)+ RNA was prepared from total RNA by two cycles of oligo-dT-chromatography and a cDNA library was constructed in Xgt10 . The cloning efficiency was 106 clones/gg of poly(A)+ . The library was amplified and shown to contain >1,000 by inserts in -60% of recombination plaques . Nitrocellulose filter lifts of a low-density plating of the library were screened using two synthetic oligonucleotide pools that were based on a partial NH2-terminal amino acid sequence of purified protein (4). Each pool consisted of a 512-fold degeneracy pool of 26 nucleotides in length (Fig. 1). After the initial library screening, positive plaques were streaked onto fresh bacterial lawns and a secondary screening was performed by differential plaque hybridization . Two replicate lifts ofthe secondary streaks were hybridized to either 32P-labeled pool 1 or pool 2 . Since the melting temperature (T,n) The complete nucleotide sequence of the cDNA clone for MIP is shown . The underlined sequence indicates the complementary sequence of oligonucleotide used in the primer extension experiments . The predicted translated molecular weight is 10,346 . The mature protein sequence, starting at position one, is 69 amino acids in length and has a predicted molecular weight of 7,889 .
of DNA/DNA hybrids can be approximated by the empirical formula Tm = 16.6 log[Na+] + 0.41(%G+C) + 81.5 - 500/n, where n is the number of bp in homology, we could effectively eliminate one of the probe pools through the differential melting temperatures of the hybrids based on a 26-bp homology. By using the tetramethylammonium chloride washing technique of Wood et al. (10), which abolishes the preferential melting of A-T vs. G-C base pairs, the Tm becomes dependent simply on the length of the hybrid.
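The empirical formula above can be sanity checked with a minimal sketch (the function name, parameter choices, and example conditions are ours, purely illustrative, not from the paper):

```python
import math

def tm_dna(na_molar: float, gc_percent: float, length_bp: int) -> float:
    """Empirical melting temperature (deg C) of a DNA/DNA hybrid:
    Tm = 16.6*log10([Na+]) + 0.41*(%G+C) + 81.5 - 500/length."""
    return 16.6 * math.log10(na_molar) + 0.41 * gc_percent + 81.5 - 500 / length_bp

# Hypothetical conditions for a 26-bp probe: 1 M Na+, 50% G+C.
# Note the -500/length term: short hybrids melt at markedly lower Tm,
# which is what makes a 26-bp homology discriminable by stringent washing.
print(round(tm_dna(1.0, 50.0, 26), 1))  # -> 82.8
```

Under the tetramethylammonium chloride wash described next, the base-composition term effectively drops out and discrimination depends only on hybrid length.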
After several rounds of differential hybridization, probe pool 1 yielded 18 recombinant phage clones out of 10^4 screened that hybridized under maximally stringent conditions for MIP. All of the plaques were purified and DNA was prepared from each. The recombinant phage clone 52 appeared to contain the largest cDNA Eco RI insert, of ~750 bp, and was chosen for further characterization. The complete nucleotide sequence of cDNA clone 52, as well as 262 bp of 5' sequence of another, partially overlapping clone, 32, has been determined and is shown in Fig. 2. The latter clone, which was isolated in a later screening, had a smaller Eco RI insert than clone 52 but a larger 5'-end fragment, and therefore presumably less poly(A)+ tail. The MIP nucleotide sequence of 763 bp predicts a single open reading frame starting at nucleotide 2. The mature protein sequence, starting at position one, is 69 amino acids in length and encodes the major sequence previously defined by NH2-terminal analysis of the purified protein (4).
The first methionine present in the sequence is found at position -23. We postulate this to be the initiating methionine for the MIP precursor based on the following observations. Structural analysis of the putative presequence (-23 to -1) indicates that it has features characteristic of signal sequences (i.e., an α-helix and a hydrophobic core [14]). The predicted initiating ATG has a purine at position -3, which has been shown by Kozak (15) to have a dominant effect on translation initiation efficiency. Furthermore, in a survey of the frequency of A, C, G, and T around the translation start site of 699 vertebrate mRNAs, 97% had a purine at position -3, 61% having an A at that position (16).
We have also performed a primer extension analysis to determine the amount of 5' sequence lacking from the cDNA clone. A labeled oligonucleotide primer (Fig. 2) was hybridized to LPS-stimulated RAW264.7 poly(A)+ RNA and elongated with reverse transcriptase. After hybridization, an extended primer of 98 ± 2 nucleotides was obtained (data not shown). After subtracting out the primer length of 25 nucleotides and the sequence 5' to the primer we had previously determined (61 nucleotides), we can conclude that our known sequence is 10-14 nucleotides short of a full-length cDNA. While it is possible that an in-frame AUG is present in this unknown region, it seems highly unlikely given that only 14 of 346 sequenced vertebrate mRNAs have 5' noncoding sequences of <19 nucleotides in length (16). We would thus estimate the 5' untranslated sequence to be ~82 nucleotides, well within the 20-100 nucleotide length of most vertebrate 5' noncoding sequences reported to date.
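The length accounting in this paragraph reduces to simple arithmetic; a sketch (variable names are ours):

```python
# Primer extension accounting for the 5' end of the MIP cDNA (all in nt).
extended_product = 98    # observed extended primer, measured as 98 +/- 2
primer_len = 25          # labeled oligonucleotide primer
known_upstream = 61      # sequence 5' to the primer already determined

# Nucleotides of 5' sequence absent from the known cDNA sequence:
missing = extended_product - primer_len - known_upstream
print(missing)  # -> 12, i.e. 10-14 once the +/-2 measurement error is included
```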
The proposed pre-MIP is 92 amino acids in length. There are no Asn-X-Ser/Thr sites for N-linked glycosylation evident in the molecule. There are 7 cysteines, 3 in the presequence and 4 in the mature sequence. The codon usage of the putative pre-MIP agrees well with that determined for 66 other sequenced murine genes (17). The protein has no significant sequence similarity to any protein defined to date by a FASTP program homology search (18) of the Dayhoff protein data base. The DNA sequence was also compared against the GenBank genetic sequence data base with a similar result.
In the 3'-untranslated region there is a single consensus polyadenylation site at bp 711-716. There are also 4 sequences that have only one mismatch to the cytokine consensus 3'-untranslated sequence defined by Caput et al. (19). The 3'-untranslated consensus cytokine sequence (TATT)n defined by Reeves et al. (20) is also present; when n = 2 and one mismatch is allowed, four of these sequences are found. Three of these overlap between the sequence defined by Caput et al. (19) and that defined by Reeves et al. (20).
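Motif searching with an allowed mismatch, as applied above to the Caput and Reeves consensus sequences, can be sketched as a Hamming-distance scan (the sequence shown is illustrative, not the actual MIP 3'-UTR):

```python
def motif_hits(seq: str, motif: str, max_mismatch: int = 1) -> list:
    """Start positions where `motif` matches `seq` with at most
    `max_mismatch` mismatched bases (simple Hamming-distance scan)."""
    m = len(motif)
    return [i for i in range(len(seq) - m + 1)
            if sum(a != b for a, b in zip(seq[i:i + m], motif)) <= max_mismatch]

# Illustrative scan for the TATT cytokine consensus, one mismatch allowed.
demo = "AGTATTGCCTTTTACGTATA"
print(motif_hits(demo, "TATT"))  # -> [2, 9, 16]
```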
Since MIP is an inducible polypeptide, we have also studied the expression of murine MIP mRNA by Northern blot hybridization in RAW264.7 cells. As shown in Fig. 3, total RNA from LPS-induced cells exhibited a positive hybridization band with an estimated size of 800 bp, while total RNA from uninduced cells showed very little positive signal at that or any other size. In a time-course study on the induction of MuMIP mRNA with endotoxin in these same cells (Fig. 4), MuMIP mRNA exhibits detectable levels within 1 h after LPS stimulation and peaks between 8 and 16 h after stimulation. This time course is quite different from that of either MuTNF-α/cachectin or MuIL-1α mRNA when their respective plasmid probes were hybridized to the same blot. Further analysis is currently underway to examine how MIP is regulated at the molecular level and how it relates to other known inflammatory mediators.
Summary
In the course of studies on cachectin/TNF being conducted in our laboratory, a novel macrophage product has been detected and characterized. Termed macrophage inflammatory protein, or MIP, this protein appears to be an endogenous mediator of the inflammatory events induced by endotoxin. A cDNA probe for this protein has been isolated from a λgt10 phage library prepared from poly(A)+ RNA obtained from endotoxin-induced RAW264.7 cells. The sequence codes for a 92 amino acid-long polypeptide, of which 69 amino acids correspond to the mature product. The sequence predicts a molecular weight of 7,889, and structural analysis of the protein indicates a characteristic signal sequence with an α-helix and a hydrophobic core. Sequence data also confirm no sequence similarity to any other protein listed in the Dayhoff data base.
"year": 1988,
"sha1": "14fdcbcf6f85baade9d6538d6c80d4d198199ad5",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/167/6/1939.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "97ffb05b643e90d112564cbd58a2209268345dcb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Effectiveness of telemonitoring on self-care behaviors among community-dwelling adults with heart failure: a quantitative systematic review
ABSTRACT Objective: This review examined the effectiveness of telemonitoring versus usual care on self-care behaviors among community-dwelling adults with heart failure. Introduction: Heart failure is a global health crisis. There is a body of high-level evidence demonstrating that telemonitoring is an appropriate and effective therapy for many chronic conditions, including heart failure. The focus has been on traditional measures such as rehospitalizations, length of stay, cost analyses, patient satisfaction, quality of life, and death rates. What has not been systematically evaluated is the effectiveness of telemonitoring on self-care behaviors. Involving patients in self-care is an important heart failure management strategy. Inclusion criteria: This review included studies on adult participants (18 years and older), diagnosed with heart failure (New York Heart Association Class I – IV), who used telemonitoring in the ambulatory setting. Studies among pediatric patients with heart failure, adult patients with heart failure in acute care settings, or those residing in a care facility were excluded. Methods: Eight databases, including CINAHL, Cochrane Central Register of Controlled Trials, Embase, MEDLINE, Epistemonikos, ProQuest Dissertations and Theses, PsycINFO, and Web of Science were systematically searched for English-language studies between 1997 and 2019. Studies selected for retrieval were assessed by two independent reviewers for methodological quality using critical appraisal checklists appropriate to the study design. Those meeting a priori quality standards of medium or high quality were included in the review. Results: Twelve publications were included in this review (N = 1923). Nine of the 12 studies were randomized controlled trials and three were quasi-experimental studies. Based on appropriate JBI critical appraisal tools, the quality of included studies was deemed moderate to high. 
In a majority of the studies, a potential source of bias was related to lack of blinding of treatment assignment. Telemonitoring programs ranged from telephone-based support, interactive websites, and mobile apps to remote monitoring systems and devices. Self-care outcomes were measured with the European Heart Failure Self-care Behaviour Scale in nine studies and with the Self-care of Heart Failure Index in three studies. Telemonitoring improved self-care behaviors across 10 of these studies, achieving statistical significance. Clinical significance was also observed in nine of the 12 studies. All studies utilized one of two validated instruments that specifically measure self-care behaviors among patients with heart failure. However, in some studies, variation in interpretation and reporting was observed in the use of one instrument. Conclusions: Overall, telemonitoring had a positive effect on self-care behavior among adult, community-dwelling patients with heart failure; however, there is insufficient and conflicting evidence to determine how long the effectiveness lasts. Longitudinal studies are needed to determine the sustained effect of telemonitoring on self-care behaviors. In addition, the limitations of the current studies (eg, inadequate sample size, study design, incomplete statistical reporting, self-report bias) should be taken into account when designing future studies. This review provides evidence for the use of telemonitoring, which is poised for dramatic expansion given the current clinical environment encouraging reduced face-to-face visits. Systematic review registration number: PROSPERO CRD42019131852
Summary of Findings
Effectiveness of telemonitoring on self-care behaviors among community-dwelling adults with heart failure. Bibliography: Nick JM, Roberts LR, Petersen AB. Effectiveness of telemonitoring on self-care behaviors among community-dwelling adults with heart failure: a quantitative systematic review. JBI Evid Synth 2021;19(10):2659-2694.
Introduction
Heart failure (HF) represents a global public health burden that has increased dramatically and is associated with decreased quality of life due to excess morbidity and a high mortality rate. 1,2 In high-resource countries, HF is the leading cause of hospitalization among adults and older persons. 2 With this chronic, progressive disease, patients experience numerous symptoms including dyspnea, fatigue, activity intolerance, and fluid retention. Evidence-based recommendations for treatment of HF include interventions that aim to enhance self-care adherence, promote a partnership between patient and provider, and individualize treatment plans that take into consideration disease progression. 3 Evidence on involving patients in self-care shows that timing is critical to success; there is a window of opportunity when patients are symptomatic and are motivated and able to work with their provider, but once the patient becomes more debilitated, motivation decreases and so does self-care. 4 Telemonitoring interventions have been associated with improvements in HF outcomes, including reduced mortality and hospitalization, increased health-related quality of life, and increased HF knowledge. 5,6 A number of qualitative and quantitative studies have examined the effect of telemonitoring on self-care behaviors among adult HF populations using validated scales specifically developed to measure self-care among patients living with HF. [7][8][9][10][11][12][13][14][15][16][17] Since the late 1990s, original research reports have been documenting the impact telemedicine has on a wide range of physical, social, and mental disorders. For example, since 2016, over 350 systematic reviews have been published summarizing the effectiveness of telemonitoring (TM). This intervention has been shown to improve disease management, reduce costs, enhance the patient experience, and increase satisfaction, indicating feasibility, acceptability, and effectiveness.
[18][19][20][21][22][23][24][25][26][27][28][29] At the same time, it is important to note that the evidence on TM also showed under-representation of African Americans, [30][31][32] underscoring the need for greater representation. Under-representation in research may contribute to identified health disparities; therefore, sensitivity to this social issue and responding with a commitment to increase ethnic/racial representation in future research is essential. This caveat notwithstanding, there were sufficient studies to inform this review.
Effect of telemonitoring programs on managing heart failure
In the literature, telemonitoring, mHealth, telecare, and telehealth are often used interchangeably. For the purpose of this review, the definition of TM is the transmission of information related to patient health status (eg, physiological data and symptom scores) from home to a respective health care setting via automated invasive or non-invasive electronic devices or by web-based or telephone-based data entry. [33][34][35] Telemonitoring modalities differ in terms of whether they send the information in real time or at specified intervals. 34 Telemonitoring programs have historically utilized information and communication technologies to extend access to health care for HF patients, with the intent of decreasing the frequency of travel to facilities, patient burden, and associated costs, while simultaneously supporting self-care. 10,36 These TM systems fall on a spectrum of patient interactivity, ranging from completely passive to highly interactive. For example, telephone support used to monitor and manage HF symptoms may be as simple as telephone calls that provide educational support and symptom management. [37][38][39] Smartphone apps constitute TM when used by patients to enter physiological data and symptoms that are reviewed by the provider. 11 More advanced technology has also been used successfully, such as an interactive voice response system, in which patients respond to questions or input physiological data, which prompts tailored self-management advice and coaching. [12][13][14][15][16] Some technology systems depend on patient self-reporting. The information may be transmitted wirelessly or over a landline connection to the device, allowing for access in both rural and urban locations. 13,37 Computer support for HF patients in the community has also included automated emails that provide information on a patient's health status and resources to support disease management, 14 as well as more sophisticated virtual visits.
Using a laptop to connect, patients can confer with their health care provider via video conferencing, a system that has gained popularity in recent months. With the assistance of the home health nurse, a peripheral stethoscope or portable imaging equipment can be connected to computers and used to monitor and document data. 36,37 Other devices used for TM that support HF patients include wireless remote monitoring systems, which are software platforms linked via Wi-Fi to other devices. Commonly linked devices include stationary electronic devices external to the patient, such as an electronic blood pressure monitor, weighing scale, pulse oximeter, heart rate monitor, and electrocardiogram (ECG) recorder. The wireless sensor collects and uploads data to the remote monitoring system, which transmits the information to the health care system. The health care provider views the data using a web-based application. 10,17,37 Invasive monitoring, such as implanted devices, can be used with a remote monitoring system for patients in their homes. Implanted devices transmit physiological measurements such as intrathoracic impedance, right-sided cardiac pressures, left atrial pressure, and pulmonary artery pressure. 37

Effect of telemonitoring programs on self-care behavior

Systematic reviews of studies conducted among the adult population show that TM improves self-care for common chronic disorders. The synthesized research on TM programs for mental health conditions has indicated varying degrees of effectiveness. [23][24][25][26] There is, however, strong evidence demonstrating the effectiveness of TM programs for improving self-care for social issues such as substance abuse, 27,28 workplace stress reduction, 29 and social isolation. 40 However, to date, systematic reviews have not documented the effect of TM on self-care among the adult HF population.
[41][42][43][44][45] Telemonitoring improves self-care in other conditions and improves other HF outcomes; therefore, it is important to determine the impact of TM on self-care among adults with HF.
Self-care for adults with heart failure
In general, self-care in the HF population is defined as a ''decision making process involving the choice of behaviors that maintain physiologic stability (maintenance) and the response to symptoms when they occur (management).'' 46(p.486) It is a process whereby patients recognize a change, evaluate the change, decide to take action, implement a treatment strategy, and evaluate the response to the treatment. According to the definition, adherence or compliance to prescribed treatment is a component of selfcare but is not synonymous. The literature fairly consistently includes HF outcomes related to the following treatment recommendations: low-salt intake, healthy diet, daily weight monitoring, diuretic and other medication adherence, monitoring blood pressure, patient education to identify early warning signs, stress reduction, physical exercise, and consistent use of TM equipment. [9][10][11][14][15][16][17]41 While TM has improved HF outcomes, synthesized evidence regarding the effects of TM on self-care is lacking. Fortunately, there are theoretical constructs and validated tools to measure self-care behaviors among adults with HF.
To illustrate, the well-established Situation-Specific Theory of Heart Failure Self-Care provides an explanation of the impact and role of self-care behaviors in the context of HF and, as such, lends itself to the review question. This theory was first published by Riegel et al. 46 in 2009 and recently revised in 2016. 47 There are three constructs included in self-care in HF patients. A ''naturalistic process'' creates a situational awareness and influences self-care maintenance, symptom perception, and self-management of symptoms; each construct includes autonomous and consultative elements of providers and caregivers. In terms of patient behavior, TM could potentially influence all three self-care processes. For example, perhaps self-care maintenance may be enhanced by engagement with the TM therapy resulting in an autonomous decision-making process to follow the prescribed treatment. At the same time, it is possible that TM helps the patient recognize and evaluate changes (symptom perception) and respond to symptoms, thus affecting selfcare. It is also plausible that TM may improve outcomes by enhancing self-care indirectly through the consultative contribution of the provider who is changing the treatment regime, based on the data received via the TM. Validated tools that specifically measure self-care behaviors in the HF population have been developed. One, the Self-care of Heart Failure Index (SCHFI), first published in 2001, is a self-report scale that measures self-care maintenance, self-care management, and self-care confidence. It has undergone six revisions and has been translated and psychometrically tested in additional languages. 46,47 Another major tool, The European Heart Failure Self-Care Behaviour Scale (EHFScBS) has been in existence since 2003 and has also been translated and psychometrically validated in multiple languages. 
[48][49][50][51][52][53][54][55][56][57][58][59][60] This tool provides a measure of health maintenance behaviors that mature over time. In 2009, the instrument underwent revision and reduction, which resulted in a 9-item tool that carries the same validity as the original 12-item instrument. 61 In an integrative review of the psychometric properties, both versions of the EHFScBS were shown to be psychometrically sound in multiple languages. 62 As stated, both of these validated tools have been in use for almost two decades in primary research, and these studies were a rich source of information when searching literature for this review.
A search for systematic reviews and umbrella reviews using TM devices on this population, with the primary outcome of self-care behaviors, was performed. Two systematic reviews published in 2012 using the same population, intervention, and outcome as the current review were found; however, both the Ciere et al. 63 and Radhakrishnan et al. 9 reviews only included randomized controlled trials (RCTs) and are not current. Additional systematic reviews by Maric et al., 64 Inglis et al., 5 and Son 65 explored multiple HF outcomes and included self-care as a secondary outcome with limited evidence provided. Two umbrella reviews were identified; one evaluated the effects of TM on chronic conditions, 7 while the other specifically synthesized the effects of TM on HF patients. 66 These umbrella reviews captured the systematic reviews above; however, neither had self-care as the primary outcome of interest. Therefore, we see a need to synthesize current primary evidence, including various study designs in addition to RCTs, to determine the effectiveness of TM among adults with HF, with self-care behaviors as the primary outcome.
Review question
What is the effectiveness of TM versus usual care on self-care behaviors among community-dwelling adults with HF?
Inclusion criteria
The inclusion criteria were developed according to JBI guidance. Five considerations were used in the search strings to define the inclusion criteria for studies: i) type of participants, ii) type of intervention, iii) possible comparators, iv) the outcome of interest, and v) the type of primary studies.
Participants
This review considered studies that included adult participants (male and female; 18 years and older) with a diagnosis of HF. Studies needed to provide interventions in the community setting. Studies using pediatric patients with HF, adult patients with HF in acute care settings, or those residing in a care facility were excluded.
Interventions
This review considered studies that evaluated various word iterations of TM, such as telemedicine, mHealth, telehealth, and e-Health systems, that used technologies that remotely monitor and manage patients with HF in the community setting. Telemonitoring can monitor the condition, inform and educate the patient, and communicate physiological parameters and symptoms to the health care provider. There are four main approaches to TM in HF: i) structured telephone support, ii) stand-alone TM devices, iii) implantable/invasive remote monitoring systems, and iv) wearables. 67 Studies reporting use of any of these TM systems were reviewed. Examples of stand-alone TM devices include smart apps, interactive voice-response systems, or web-based programs, and may or may not include physiological measurement tools (eg, blood pressure, heart rate, oxygen saturation, ECG). In addition to monitoring physiological data, implantable/invasive monitoring devices can monitor intrathoracic impedance, right-sided cardiac pressures, left atrial pressure, and pulmonary artery pressure. Wearables such as patches, watches, and textiles can monitor physiological parameters. 67
Comparators
This review considered studies using comparators of usual or standard care, alternative treatments, or no intervention.
Outcomes
This review considered studies that measured selfcare behaviors as the primary outcome with validated tools such as the EHFScBS or SCHFI. In addition, studies that measured specific self-care behaviors (eg, monitoring weight and blood pressure, modifying diet and self-managing diuretics, identifying early warning signs and reporting symptoms, engaging in stress reduction and physical activity) as a proxy for self-care were considered. To clarify, in this review the question is not whether a specific physiological parameter changed, but rather whether self-care behaviors changed as a result of TM; for example, did the patient measure their blood pressure more regularly rather than did their blood pressure improve. Therefore, studies that focused on changes in physiological parameters but did not use them as a proxy for self-care outcomes were not included in this review.
Types of studies
This review considered all data derived from both experimental and quasi-experimental study designs including blinded RCTs, RCTs, and non-randomized controlled studies (controlled clinical trials) that employed TM as the intervention and measured the primary outcome of self-care in the HF population. Observational studies, including prospective and retrospective cohort studies, case-control studies, and analytical cross-sectional studies, were also considered.
Methods
This quantitative systematic review was conducted in accordance with JBI methodology for systematic reviews of effectiveness. 68 This review followed an a priori protocol. 69 The title of this review is registered in PROSPERO (CRD42019131852).
Search strategy
The search strategy for the review aimed to locate published and unpublished studies. Words contained in titles and abstracts were analyzed to find alternate terms for the different elements on the topic. The searches used both MeSH and title/abstract for each element, and coupled expanded terms with Boolean operators, force phrasing, and truncation. The wildcard replacement strategy to find American and British spellings for self-care behavior was unnecessary as truncation yielded the same results. Whilst attempting to maintain consistency in search terms between databases, the strategy was tailored for individual databases during the review using the advanced search feature in each database.
The databases searched included: CINAHL (EBSCO), Cochrane Central Register of Controlled Trials, Embase (Elsevier), Epistemonikos, ProQuest Dissertations & Theses A&I: Health and Medicine, PsycINFO (EBSCO), MEDLINE (PubMed), and Web of Science. The reference lists of all reports and articles selected for critical appraisal were examined for additional salient studies and any reports included were added to ''other data sources.'' The search for gray literature included: conference proceedings, World Health Organization, Clinicaltrials.gov, and National Institute for Health and Care Excellence. The complete search strategy is displayed in Appendix I.
The date range for studies was from 1997 to 2019, as TM studies first emerged in the literature in 1997. The current review considered studies only in English due to lack of additional language skills among researchers who conducted this review.
Study selection
Studies obtained from the eight databases were imported into EndNote X9.0 (Clarivate Analytics, PA, USA), grouped by their database, and duplicates were removed. The Clinicaltrials.gov website and the National Institute for Health and Care Excellence did not provide results. The World Health Organization International Clinical Trials Registry Platform resulted in one applicable study, but the study was ongoing and had not published any results from patient data. The reports underwent three stages of screening that resulted in the final inclusion for the review.
In stage 1 screening, two reviewers examined titles and abstracts independently against the inclusion and exclusion criteria and made separate recommendations to retain or discard. Titles were divided into three groups so that two reviewers worked on a group: JN & LR; LR & ABP; ABP & JN. The two reviewers then discussed recommendations and made a final decision to retain or discard articles. Any disagreements that arose between the reviewers were resolved through discussion with the third reviewer.
Stage 2 screening involved importing the retained studies obtained from stage 1 into the JBI System for the Unified Management, Assessment, and Review of Information (JBI SUMARI; JBI, Adelaide, Australia). Two reviewers (pairs of either JN & LR; LR & ABP; or ABP & JN) independently assessed the full-text citations against the a priori inclusion and exclusion criteria. The two reviewers discussed recommendations and made a final decision to retain or discard articles. Any disagreements that arose between the reviewers were resolved through discussion with the third reviewer. Reasons for exclusion of full-text studies were recorded in JBI SUMARI, and are reported in Appendix II.
Stage 3 screening involved assessment of methodological quality of each study. Results from this process of screening and selecting studies are reported in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram 70 in Figure 1.
Assessment of methodological quality
Independently, two reviewers (pairs of either JN & LR; LR & ABP; or ABP & JN) critically appraised 24 studies for methodological quality using JBI critical appraisal tools for RCTs and quasi-experimental studies, analytic cross sectional, and cohort designs. 68 The team sought clarification of study details on three studies; however, no replies were received from the authors. Therefore, the team evaluated the articles based on the information provided in the published reports and decisions were made to retain or discard. Any disagreements that arose between the two reviewers after critical appraisal of the articles were resolved by consulting the third reviewer, and consensus was reached through team discussion.
Once the team completed the critical appraisal of methodological quality on each study, a grading system was applied to determine the final inclusion or exclusion of individual studies. The three grades of study quality were: low quality (0% to 33% of criteria met), medium quality (34% to 66% of criteria met), or high quality (67% or more of criteria met). Studies reaching medium or high quality were included. The results of critical appraisals are reported in narrative form, and noted numerically in Tables 1 and 2.
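The grading rule described above reduces to a simple threshold function; a sketch (the function and example values are ours, the cut points are the review's):

```python
def study_grade(criteria_met: int, criteria_total: int) -> str:
    """Map the share of JBI checklist criteria met to the review's
    quality grades: <34% low, 34-66% medium, >=67% high."""
    pct = 100 * criteria_met / criteria_total
    if pct >= 67:
        return "high"
    if pct >= 34:
        return "medium"
    return "low"

# e.g. a hypothetical RCT meeting 9 of the 13 JBI RCT checklist items:
print(study_grade(9, 13))  # -> high (9/13 is about 69%)
```

Only studies grading "medium" or "high" under this rule were retained.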
Data extraction
After finalizing the list of studies included in the review, the team extracted the data using the standardized data extraction tool in JBI SUMARI. The team extracted information separately, then came together and standardized the specific details about the study population, design and methods, sample size, interventions, statistical significance, time frames for single and repeated measures, and outcomes significant to the review question and specific objectives. The extracted data were checked and refined throughout the writing process by all three team members. Detailed characteristics of included studies are presented in Appendix III.
Data synthesis
The team calculated statistical significance for each study to determine the effectiveness of the outcome of interest for self-care behaviors. Statistical pooling of quantitative data and meta-analysis was not possible due to heterogeneity of the severity of HF in the population, the type and duration of intervention, varied time points of data collection, and variability in reporting of outcomes. Since the included studies used a range of statistical measures, when possible, the team converted these measures into standard deviations (SD) for the EHFScBS or SCHFI scores to determine clinically significant changes in self-care behaviors. Clinical significance was defined in this review as a change of at least 0.5 SD in the score from baseline to follow-up. 71 New York Heart Association (NYHA) classification and age were also analyzed across studies. The findings are presented in narrative form and in tables and figures to aid in the data presentation where appropriate.
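The clinical-significance criterion can be stated as a one-line check; a sketch with hypothetical scores (the 0.5 SD threshold is the review's, per reference 71; the scores are invented for illustration):

```python
def clinically_significant(baseline: float, followup: float, sd: float) -> bool:
    """True if the absolute change from baseline to follow-up is at
    least half a standard deviation of the scale score."""
    return abs(followup - baseline) >= 0.5 * sd

# Hypothetical EHFScBS scores (on this scale, lower = better self-care):
# threshold is 0.5 * 8.0 = 4.0 points, and the observed change is 6 points.
print(clinically_significant(baseline=28.0, followup=22.0, sd=8.0))  # -> True
```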
Assessing certainty in the findings
The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach for determining the certainty of evidence was followed and a Summary of Findings (SoF) was created using GRADEPro GDT v.2015 (McMaster University, ON, Canada). The SoF provides a ranking of the quality of evidence based on absolute risks for the treatment and control, estimates of relative risk, study limitations, directness, consistency, heterogeneity, precision, and risk of publication bias. The outcome reported in the SoF includes self-care behaviors as measured by the EHFScBS and SCHFI instruments among community-dwelling adult patients with HF.
Study inclusion
A comprehensive search of the literature returned 84 identified records, and 12 additional records were identified through other sources, for a total of 96 records. Duplicates were removed, leaving 69 records, which were reviewed by title and abstract. After 30 records were excluded based on title and abstract, 39 remained. Fifteen of the 39 articles were excluded based on full-text review. Reasons for exclusion are detailed in Appendix II. Thus, a total of 24 articles required critical appraisal of methodological quality. Once critical appraisal was completed, 12 articles were excluded because, although they focused on the population of interest and used a TM intervention, the articles were either feasibility studies without sufficient data, did not report outcome data for self-care behaviors, or did not meet the minimum methodological rigor of ≥ 34% of quality criteria. The final analysis yielded 12 studies that were included in the systematic review: nine RCTs 10,11,38,72-77 and three quasi-experimental studies. 15,17,78 Figure 1 depicts the PRISMA flowchart search and review process for study selection and inclusion.
SYSTEMATIC REVIEW
Of note, two studies report on the same sample and intervention; findings, however, focused on different time points, specifically, three months 73 and six months. 74 Therefore, we included both publications to evaluate the ability of the intervention to sustain self-care behaviors.

Table 1: Critical appraisal of eligible randomized controlled trials (RCTs). JBI critical appraisal checklist for RCTs: Q1 = was true randomization used for assignment of participants to treatment groups? Q2 = was allocation to treatment groups concealed? Q3 = were treatment groups similar at baseline? Q4 = were participants blind to treatment assignment? Q5 = were those delivering treatment blind to treatment assignment? Q6 = were outcome assessors blind to treatment assignment? Q7 = were treatment groups treated identically other than the intervention of interest? Q8 = was follow-up complete, and if not, were strategies to address incomplete follow-up utilized? Q9 = were participants analyzed in the groups to which they were randomized? Q10 = were outcomes measured in the same way for treatment groups? Q11 = were outcomes measured in a reliable way? Q12 = was appropriate statistical analysis used? Q13 = was the trial design appropriate, and any deviations from the standard RCT design (individual randomization, parallel groups) accounted for in the conduct and analysis of the trial?

Table 2: Critical appraisal of eligible quasi-experimental studies (non-randomized experimental studies)
Methodological quality
The 12 articles included in this review ranged from moderate to high quality, with low to moderate risk of bias. Sample size varied greatly across studies and results were not weighted, which could bias results. Out of nine RCTs (Table 1), three achieved high quality (≥ 67%) on critical appraisal, 38,74,77 and six achieved between 54% and 62%, indicating moderate quality. 10,11,38,73,75,76 In the RCTs, true randomization was unclear in 44% (Q1), 11,[73][74][75] and only two studies clearly documented concealment of the allocation procedure (Q2). 38,77 In all the RCTs, blinding of participants (Q4), those delivering the treatment (Q5), or outcome assessors (Q6) was either not possible due to the nature of the intervention, or not documented. Four of the RCTs did not clearly indicate follow-up procedures (Q8). 10,72,73,75 All the quasi-experimental studies (Table 2) were of high quality (77% to 89%). In two studies it was not possible to determine follow-up strategies (Q6). 17,78 A strength of all included studies was the use of validated scales.
Characteristics of included studies
This review considered studies that investigated the effectiveness of TM on self-care behaviors among community-dwelling HF patients. The review aimed to compare the use of TM equipment and/or programs with usual care for the effect on self-care behaviors. Appendix III presents a detailed summary of salient characteristics of the research studies included in this review that used TM to influence self-care behaviors. Characteristics included setting, participants, description of the intervention, sample size, outcomes measured (specific instrument used), and main statistical results to indicate statistical significance or non-significance.
Country/setting
Of the 12 studies, six were conducted at a single site in the following countries: Canada, 77 Finland, 11 Korea, 78 Thailand, 38 the United Kingdom, 75 and the USA. 17 The other six studies were multi-center studies. One study 15 was conducted in three countries (the United Kingdom, Germany, and Spain), while the remaining five studies were conducted across multi-center sites within single countries in either the Netherlands 10,72,76 or Sweden. 73,74

Participants

The 12 studies considered in this review presented data obtained from 1923 participants. The total sample size for the nine studies using the EHFScBS was 1687 and for the three studies using the SCHFI tool was 236. The number of participants per study ranged from 36 to 450. Two studies had fewer than 50 participants, 17,78 three studies had between 50 and 100, 11,73,74 and seven studies had more than 100. 10,15,38,72,[75][76][77] One of the studies included a dyad interaction of a participant and a caregiver and measured the patient's perception of self-care as well as the caregiver's perception of the patient's self-care behaviors. 38 All studies reported participants' gender. Upon analysis, two studies included slightly more women (52% to 53%), 17,38 while the other 10 studies showed a predominance of male participants, ranging from 54% to 83% of the sample. Additionally, as HF is a chronic, progressive disease, severity of HF and age are important confounders in HF. Therefore, we considered the NYHA classification and age across studies to determine how variability in the participant population may have affected self-care outcomes.
The included studies reported the NYHA classification to describe their participants. Ten studies provided detail as to the percentage of participants in each classification, while two studies simply stated the inclusion criteria was Class II or III, 78 or Class III or IV 75 without specifying the distribution. Class II and III constituted the majority of participants. Additionally, two studies included Class I, 38,76 and five studies included Class IV. 10,11,72,76,77 Table 3 provides the breakdown of distributions in HF classification for each study.
All studies included adult participants (age 18 years or older). The mean age across studies ranged from 54 to 75 years (Table 4). When mean ages between studies were analyzed, there were some notable differences. For two studies the mean age was less than 60 years, six had a mean age between 60 and 70 years, and four studies had a mean age of 71 years or greater. Five studies provided the age range of participants: three studies had a narrow range (59 to 65), 38 and in the two related studies the age distribution was the same. 73,74 The variation in the participants' demographics was one factor that impacted the appropriateness of conducting a meta-analysis.
Interventions
The types of TM interventions included in this review fell into three broad categories: i) telephone or videoconference support, ii) interactive TM devices with physiological data collection, and iii) interactive TM devices without physiological data collection. The interactive TM devices that collected physiological data were the largest category. Several interventions combined modalities. Two interventions utilized telephone consultations as the primary modality: telephone consultation and educational material in the form of an HF manual and DVD 38 ; telephone consultation after an initial 30-minute face-to-face session. 78 Eight different interventions involved the introduction of interactive TM devices that also collected physiological data. In these studies, treatment groups were provided devices that connected assessment equipment and facilitated electronic transmission of data. 10,11,15,17,[72][73][74][75][76][77] All data collected by the devices were automatically collected and transferred to the main system, except for the mobile app utilized by Vuorinen et al., 11 which required manual input of physiological readings that were then uploaded and transferred.
These interactive TM systems included: a TM device that measured weight, blood pressure, and ECG 10 ; OPTILOGG health information system installed in the home via a tablet connected to weight scale, symptom monitor, medication guide, and lifestyle advice 73,74 ; Doc@Home unit measuring blood pressure, pulse, oxygen saturation, and weight 75 ; Motiva platform with interactive telehealth, access to personal health care channel, educational information, and measurement of blood pressure, pulse, oxygen saturation, and weight 15,75 ; a remote monitoring system that instructed participants to take weight, blood pressure, and heart rate daily, and provided feedback and alerts 17 ; access to Heartfailurematters.org website along with use of the e-Vita platform, which recorded weight, blood pressure, and heart rate 76 ; wireless uploading of weight, blood pressure, ECG recordings, and symptom questions on a mobile phone 77 ; a TM home-care package that included a weight scale, blood pressure monitor, a mobile phone with a pre-installed app, and self-care instructions, plus telephone follow-up from the HF nurse. 11 One TM intervention provided an interactive HealthBuddy TM device that utilized pre-set dialogues and prompted responses to symptom and knowledge questions, 72 but did not collect physiological data. In response to TM data, participants received feedback in a variety of formats, including autogenerated recommendations or a real-time telephone consultation with a health care provider. Most interventions that introduced a device utilized teleconferencing to allow provider response to data.
Other differences between intervention modalities included the level of access to a nurse or other provider, frequency of user interface with device (twice daily to weekly), and level of individual tailoring (pre-set dialogues vs tailored messaging) based on patient self-reported measurements and assessment of symptoms. The variation in the type of intervention and timing of follow-up prevented construction of a suitable meta-analysis.
Review findings
Outcomes and instruments

Self-care behavior was the outcome of interest in this review. Measures, outcomes, and data collection intervals of each study are presented in Appendix III. Two validated instruments measured self-care behaviors using participants' self-report. In studies using the European Heart Failure Self-care Behaviour Scale (EHFScBS), self-care behavior was denoted as a composite score, in which the 12-item version encompasses adherence to regimen, consulting behaviors, and adaptation of behaviors, 62 and the 9-item version represents only adherence to regimen and consulting behaviors. 79 Originally the EHFScBS (either 12- or 9-item version) was scored with a lower composite score indicating better self-care behaviors (possible score 12 to 60 for the 12-item version, and 9 to 45 for the 9-item version); however, some literature utilizes a standardized score of 0 to 100, with a higher composite score indicating better self-care behaviors. In this review, two of the included studies used the standardized scoring, as noted in Appendix III. 76,78 In the three studies that used the SCHFI, 17,38,77 self-care behavior was denoted in three components (self-care maintenance, self-care management, and self-care confidence), and results were reported for each subscale with higher scores indicating greater self-care behaviors. The variation in the scales and versions used, as well as scoring issues (eg, anchor reversal, standardized scores), precluded meta-analysis.
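The relationship between the raw and standardized EHFScBS scores can be sketched as a linear rescaling with anchor reversal. This is an illustrative assumption about the transformation, not the instrument's published scoring algorithm, which should be consulted directly:

```python
def ehfscbs_standardized(raw_score: float, n_items: int = 9) -> float:
    """
    Rescale a raw EHFScBS sum (items scored 1-5; lower raw score = better
    self-care) to a 0-100 scale on which higher = better self-care.
    Assumes a simple min-max rescaling with anchor reversal (illustrative only).
    """
    min_raw, max_raw = n_items * 1, n_items * 5   # 9-45 for 9 items, 12-60 for 12
    return (max_raw - raw_score) / (max_raw - min_raw) * 100

print(ehfscbs_standardized(9))    # best possible 9-item raw score -> 100.0
print(ehfscbs_standardized(45))   # worst possible 9-item raw score -> 0.0
```

Under this assumed transformation, the anchor reversal explains why the two studies using standardized scoring report improvement as an increase while the raw-score studies report it as a decrease.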
Measurement intervals
Eight studies conducted a one-time measurement of self-care behaviors at either five weeks, six weeks, three months, six months, or nine months. 10,11,17,[73][74][75]77,78 In the study by Stut et al., 15 intervention exposure ranged from two to 18 months. The remaining three studies conducted follow-up at multiple intervals (eg, three, six, and/or 12 months). 38,72,76

Self-care behavior outcomes

In general, despite varied time frames and differences in intervention characteristics, studies indicated short-term effectiveness of TM for improved self-care, but long-term effectiveness was unclear. Ten 15,17,38,[72][73][74][75][76][77][78] of the 12 studies reported statistical significance for improved self-care behavior using the EHFScBS and SCHFI instruments, as shown in Appendix III and detailed below.
Self-care behavior measured with EHFScBS
Six studies used the 9-item version of the EHFScBS. Three studies interpreted lower scores as improved self-care; all intervention group (IG) change scores improved, ranging from −2.4 to −9 points lower, with an average of −6.8. 10,73,74 Conversely, the control group (CG) change scores in these studies showed little improvement (−0.75, lower range) or deterioration (+2.5, upper range); however, inconsistencies in one of these studies 10 were observed between the stated interpretation and the reporting of results. Of the remaining three studies in this category, two used reverse anchoring and standardized the scoring, interpreting higher scores as improved self-care. 76,78 In these studies, IG scores improved from 4.2 to 7.56 points higher. For Moon et al., 78 CG participants had worse self-care (−0.75). The study by Wagenaar et al. 76 evaluated two IGs and self-care behavior improved similarly in both intervention groups. Results of the CG were not reported on follow-up. The final study using the EHFScBS-9 reported percentage change in five self-care behaviors rather than providing a score. 15 In this study, all areas showed statistically improved self-care except for medication intake; there was no CG.
Three studies used the 12-item version of the EHFScBS 11,72,75 and all three interpreted lower scores as improved self-care. The IG change scores ranged from −1.8 to −5.2 points lower, indicating improved self-care behaviors. The final CG change score for Boyne et al. 72 decreased by 0.1. The second study reported the change scores for the two IGs but did not report CG scores since it used a pre-post design. 75 When testing one TM intervention against another TM system, there were no significant differences in self-care improvement, as both interventions had a positive effect on self-care behaviors. In the third study, 11 the IG and CG change scores indicated similar improvement (−3.8 points lower).
When analyzing the studies using the EHFScBS by study design, there were seven RCTs and two quasi-experimental studies. Five RCTs were of moderate to high quality and showed significant improvement in self-care behaviors. [72][73][74][75][76] Two RCTs, which were of moderate quality, showed no statistical difference in self-care behaviors. 10,11 In actuality, in the study by Vuorinen et al., 11 both IG and CG groups showed improvement in self-care behaviors, diminishing statistical significance. In summary, of seven RCTs, only one study 10 showed no improvement in self-care. The two quasi-experimental studies using the EHFScBS-9 version indicating improved self-care behavior were of high quality. 15,78

Self-care behavior measured with SCHFI

Three studies 17,38,77 measured self-care behavior change using the SCHFI instrument, which has three subscales (self-care maintenance, management, and confidence), with higher change scores interpreted as improvement. One study measured self-care at three months, 17 one study measured at three and six months, 38 and one at six months only. 77 At three months all IG subscales showed improvement. Self-care maintenance change scores ranged from +11.5 to 12.2; self-care management change scores ranged from +6.4 to 8; and self-care confidence change scores ranged from +7 to 13.7. At six months, self-care maintenance showed an 8.2 to 9.3 point positive change score; self-care management showed a 10.5 to 12 point positive change score; and self-care confidence showed a positive change score of 0.3 to 15.5.
For two studies, 17,38 CG change scores at three months, namely self-care maintenance and management, changed negligibly; however, self-care confidence improved with a range of 2.6 to 5.9. At six months CG change scores improved for both self-care management (+2.2 to 11.4) and confidence (+0.4 to 7.4) in both studies, 38,77 whereas maintenance improved in one study (+6.6) 77 but decreased (−1.4) in the other. 38 Of the three studies using the SCHFI instrument, two were RCTs 38,77 and one was a quasi-experimental study, 17 all of high quality. All three studies reported significant improvement in self-care management and maintenance, and one study reported significant improvement in self-care confidence at three months. 17

Clinical significance

We analyzed change in SD for all included studies except for the study conducted by Stut et al., 15 which reported only percentage improvement; for this study, we used the end-point percentage change to evaluate clinical significance (see Table 5). If change in SD was not reported in the study, the team calculated SDs from the other statistical parameters provided, such as interquartile ranges, standard errors, box-and-whisker plots, and confidence intervals. Different equations were required depending on the original statistic reported in the study (equations used for each calculation of change in SD are included in Table 5). When we analyzed within-intervention-group changes, nine studies achieved clinical significance with change in SD 10,17,38,[72][73][74][75][76][77] and one study demonstrated clinical improvement with percentage change. 15 However, when comparing between the IG and the CG, clinical significance was demonstrated in seven studies. 11,17,38,72,74,77,78 Three studies did not report CG results post intervention, 15,75,76 precluding evaluation of clinical significance, and two studies 10,73 did not demonstrate clinical significance.
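Recovering an SD of a mean from other reported statistics follows standard formulas (eg, as given in the Cochrane Handbook): multiplying a standard error by the square root of the sample size, dividing a 95% confidence interval width by twice the critical value, or dividing an interquartile range by roughly 1.35 under approximate normality. A minimal sketch, with illustrative values only:

```python
import math

def sd_from_se(se: float, n: int) -> float:
    """SD of the sample from the standard error of the mean: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower: float, upper: float, n: int, z: float = 1.96) -> float:
    """SD from a 95% CI of a mean: SD = sqrt(n) * (upper - lower) / (2 * z)."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

def sd_from_iqr(iqr: float) -> float:
    """Rough SD from an interquartile range, assuming approximate normality."""
    return iqr / 1.35

print(sd_from_se(0.5, 100))                    # 5.0
print(round(sd_from_ci(22.0, 26.0, 100), 2))   # ~10.2
print(round(sd_from_iqr(2.7), 6))              # 2.0
```

Which formula applies depends on the statistic each study actually reported, as the review notes with reference to Table 5.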
Table 5 displays the change in SDs at study end points and indicates which studies achieved clinical significance.
Discussion
This review included 12 studies (nine RCTs and three quasi-experimental) of moderate to strong evidence. There is strong evidence showing both statistical and clinical significance with 11 different interventions tested. Overall, the findings of this review can be generalized to adult HF patients (age 18 years and above) and NYHA Class I-IV.
Study outcomes were measured with validated scales (two versions of the EHFScBS and the SCHFI instrument) to determine the statistical significance of self-care behavior change. Not only did TM interventions and the type of follow-up vary, but the length of study time frames also varied from five weeks to 12 months, which has important implications for interpreting statistical and clinical significance. Ten of the 12 studies showed statistically significant improvement in self-care behaviors as a result of TM interventions. 15,17,38,[72][73][74][75][76][77][78] Additionally, clinicians should consider clinical significance as well as statistical significance 71 in determining TM effectiveness. Seven of the 12 studies demonstrated TM effectiveness with clinical significance in improved self-care behaviors. 11,17,38,72,74,77,78 We identified three categories of interventions: telephone-based consultation, interactive TM with physiological measurements, and interactive TM without physiological measurement.
Two TM interventions were primarily telephone-based consultations. 38,78 The study by Moon et al. 78 had a small sample size but still achieved statistical significance. It was difficult to analyze clinical significance due to the reverse scoring of the EHFScBS-9, and based on the pre- and post-SDs presented, the reported change in SD was incorrect. Our calculation of change in SD, using the presented pre- and post-SDs, indicated no clinically significant change within the IG, but did show between-group clinical significance.
Srisuk et al. 38 found that both statistically and clinically significant improvement was achieved within the IG. At both three months and six months, self-care behaviors improved; at three months, two SCHFI subscales statistically improved, and at six months all three SCHFI subscales statistically improved with the intervention when comparing IG and CG. At six months, clinical significance was partially demonstrated; only self-care management and confidence improved between groups. The inclusion of dyads was a unique component of this study, which could have affected outcomes, as patients may have been prompted to complete self-care behaviors by their caregivers. This study was one of only two studies including patients from NYHA Class I in addition to Class II and III. With most patients classified as NYHA II or III at the start of the study, and given the progressive nature of HF, continuing their level of self-care maintenance six months out could be considered clinically important.
While using different validated tools, these two telephone-based TM interventions demonstrated clinical effectiveness in improving self-care behaviors. Effectiveness is further substantiated by statistical significance demonstrated to varying degrees. Given both clinical and statistical significance, it appears that telephone-based consultation is an effective TM to improve self-care behaviors.
Eight interventions utilized interactive TM devices that either automatically uploaded physiological measurements or required the patient to enter physiological values. 11,17,38,72,74,77,78 Despite the small group size and quasi-experimental design, the study by Evangelista et al. 17 found TM to be effective, showing both statistical and clinical significance. The engagement aspect of daily interaction with TM and additional interaction with a research nurse may have kept participants motivated and more attuned to their symptoms, which may explain why the self-care maintenance and self-care confidence subscales were impacted the most. Because of the non-randomization in the design, the participants in the two groups were slightly different in terms of NYHA classification. However, Class IV comprised only 4% of the CG (ie, one patient more with severe HF than in the IG), and most likely did not affect results. Even with the above design challenges, the combination remote monitoring system plus alerts and feedback must have created enough magnitude in the TM intervention to make it effective.
Two related studies, Hägglund et al. 73 and Melin et al., 74 used the same intervention and sample but reported outcomes at two different end points. Both studies found statistical and clinical significance with their intervention. While the TM device was similar to that used in other studies of this category of interventions, there was no additional human interaction. Having a sufficient and similar sample size between IG and CG, plus randomizing participants into the two groups, strengthened the validity of the results. The IG showed a 9-point within-group improvement in score at three months, 73 with improvement also reported at six months. 74

Lycholip et al. 10 found neither statistical nor clinical significance. Study design limitations may have precluded obtaining statistically significant improvement. Despite using an RCT design, there were significant between-group differences at baseline. Additionally, conflicting information was reported: in the abstract, the authors reported better self-care in the TM group at baseline, yet the results section reported that the control group had significantly better self-care behaviors at baseline. The authors report that over the course of the study, CG self-care behaviors improved, while the TM group did not significantly change. Thus, the authors report no difference between groups at the end of the study. However, the text of the abstract does not match the text of the results, and the pre-post bar graph in the results section does not match the results text but does agree with the abstract. The discrepancies suggest that there was confusion regarding the EHFScBS, since interpretation is not intuitive as lower scores indicate better self-care. These inconsistencies preclude sound evaluation of TM effectiveness in this study.
Seto et al. 77 conducted an RCT using the SCHFI to measure self-care behavior. At six months, both statistical and clinical significance were demonstrated, indicating TM effectiveness. More specifically, only the self-care maintenance subscale was statistically improved between groups. At the same time, clinical significance was demonstrated for all three subscales at six months. There are a number of features of this TM intervention that may have contributed to the observed statistical and clinical improvements in self-care behaviors. The intervention involved very close monitoring of the patients (eg, data were sent immediately and, when indicated, the cardiologist responded to alerts within minutes), which the authors postulate led to improved ability to optimize patients' medication regimen, which may have supported self-care maintenance. Furthermore, the real-time immediacy of the feedback capitalized on the "teachable moment" to help patients modify their lifestyle behaviors, or reinforce instructions they received in the clinic (eg, increase diuretics, decrease salt intake). In contrast, other interventions in this review provided delayed automated feedback at varied intervals. These findings further underscore the difficulty in distinguishing between the impact of changes in the patients' self-care behaviors versus changes in clinical (provider) management. In addition, it is noted that the authors engaged in an extensive user-centered design process to develop and beta test the app, which is likely to have contributed to the higher rates of adherence observed in this study and, in turn, resulted in higher levels of engagement in self-care.
The study reported by Stut et al. 15 was a quasi-experimental study using the Motiva TM device and measured self-care behavior using the EHFScBS-9. Despite using the EHFScBS to measure self-care behaviors, the authors only reported percentage changes on five domains of self-care, rather than EHFScBS scores. Statistically significant change was reported, and the study also appears to represent clinical significance by percent change. In this study, the percentage of adherence to self-care behaviors improved in four of the five areas. While there is no set threshold for percent change to indicate a minimal clinically important difference specifically mentioned in the literature, we would argue that these improvements point to clinically significant change in self-care behavior, which aligns with other findings in this review. Medication intake was the only behavior that did not statistically or clinically change significantly. Outcomes may have been affected by an inconsistent dose of the intervention, since participant enrollment occurred over a period of time but the study concluded on a set date; therefore, the duration of intervention exposure varied across participants.

Varon et al. 75 tested two different TM devices (Motiva and Doc@Home) with a pre-post design that allowed participants to serve as their own control, comparing TM to usual care. The authors reported combined results from both groups; therefore, we are unable to determine if one intervention was better than the other. When analyzing the combined findings, both statistical and clinical significance (within-group) were observed in self-care behavior improvements. Interestingly, with both interventions, the authors reported general improvement in self-awareness, self-management, and assistance-seeking, all of which contribute to increases in perceived importance of self-management and self-care.
The intensity of these interventions was greater than that of other studies included in this review, in that it required measurement of four (vs one, two, or three) physiological parameters twice per day (vs daily, weekly, or biweekly). However, the study period was limited to six weeks, which was among the shortest. A longer trial period may have allowed for stronger conclusions on the effects of these telehealth platforms on self-care behaviors and other patient outcomes. Both this study and the previous study by Stut et al. 15 using Motiva showed improvements in self-care behaviors.
Surprisingly, while the six-month study by Vuorinen et al. 11 demonstrated no between-group statistical significance, it did demonstrate clinical significance in self-care behavior change, which is more important to the clinician. These disparate results may be explained by the fact that this study had a young age distribution of patients. As the authors note, this younger, more stable population may have derived less benefit from TM than older patients with worsening HF. Another factor to consider is that participants in the CG also showed increased interest in their health, which has positive implications for the maintenance of self-care behaviors. It is possible that a longer exposure to the TM would have resulted in a statistical difference in self-care behaviors. Finally, achieving statistical significance may have been hampered by the Hawthorne effect, as the study nurse observed that the CG was more active in their care after study enrollment.
Wagenaar et al. 76 compared a combined education website and a TM platform to usual care and demonstrated statistical significance at three months and clinically significant within-group differences in self-care behaviors at 12 months. However, the lack of confidence intervals prevented calculation of between-group clinical significance. In this study, attenuation of the TM effect on self-care behaviors over time was of interest, as there was no statistically significant difference observed at 12 months. Factors potentially impacting the longitudinal results may include the lower level of HF severity. The majority of participants were classified as NYHA Class I and II; it is possible that the effect of the TM intervention on self-care behaviors would be larger within a population experiencing more severe HF-related symptoms. 4 At the same time, TM may hold the most promise among stable HF patients as it has the potential to decrease the need for, or replace, routine face-to-face clinic visits. This study, therefore, underscores the need to consider ways to sustain the effect of TM. 80,81

In summary, interactive TM devices that also collect physiological measurements were effective in improving self-care behaviors. Seven of the eight studies in this intervention category demonstrated statistical significance for improvement in self-care behaviors. Five studies showed between-group clinical significance, and a sixth, while only reporting percentages, also showed clinical improvement.
In a separate category, one TM, although interactive, did not collect physiological data. 72 Using a commonly known smart app system, the HealthBuddy, Boyne et al. 72 demonstrated both statistical and clinical significance at the three-, six-, and 12-month measurement times. Three factors may have impacted the results. First, accessibility to the TM device may have promoted self-care behaviors, since HealthBuddy is downloaded on to a mobile phone. Having ready access to the cell phone may have facilitated patient adherence. Secondly, using HealthBuddy for a long period of time (a year) also provided a large dose response. Finally, the impact of sample size must be considered, as it was quite large, and larger sample sizes can detect smaller effects. It appears that a TM intervention that simply increases awareness of HF symptoms and supports knowledge is also effective for improving self-care behaviors.
Overall, despite challenges such as unequal baseline groups, small sample sizes, and short study time frames, there is strong evidence showing both statistical and clinical significance with various forms of TM. The majority of patients in this review were NYHA Class II or III (ie, symptomatic but stable), which suggests that telemonitoring interventions may be more effective among patients within these classes than among those who are asymptomatic, unstable, or experiencing worsening HF disease. Like previous studies, we were unable to determine which TM is most effective. 82 Systematic reviews have indicated TM effectiveness across various populations, disease conditions, and outcomes, including reduced rehospitalization, length of stay, and costs, as well as improved patient satisfaction and health-related quality of life. [41][42][43][44][45] The current systematic review is unique in summarizing TM effectiveness on HF patients' self-care behavior outcomes. The main findings support TM as an effective intervention to improve self-care behaviors as measured by either the EHFScBS or SCHFI. This review adds to the body of knowledge regarding the effective use of TM for chronic health conditions, including HF. This is encouraging as TM systems can be expensive initially, but may reduce costs in the long run.
Limitations of the included studies
The main limitations of the studies included in the review relate to the heterogeneity of the TM interventions, the length of intervention, the subjective nature of self-reporting, variation in sample sizes, and other design flaws. Telemonitoring systems included a range from simple telephone visits to more complex TM devices. Furthermore, each study reported data collected at variable times, thus hindering evaluation of dose response. All studies relied on self-reporting of self-care behavior, potentially introducing subjective recall bias. Additionally, some of the study designs included in the review were of medium quality and introduced risk of bias, thereby creating the possibility of alternate explanations for the conclusions. 10,11,72,73,75,76 Two studies had a minimal sample size of just 18 to 21 participants per group, 17,78 limiting generalizability despite apparent effectiveness. All of the RCTs either lacked blinding or a clear description of a blinding procedure (for participants or those delivering treatment), 10,11,38,[72][73][74][75][76][77] which could introduce bias. Two quasi-experimental studies lacked sufficient information on follow-up. 17,78 Many studies did not report power analysis, effect size, or adequate information to determine appropriateness of sample size; therefore, questions remain regarding underpowered studies. Yet despite study design limitations and variability of treatment protocols, overall, there was a favorable effect of TM on self-care behaviors, thereby increasing generalizability. Moreover, the heterogeneity of the multinational study samples enhances clinical applicability to populations with varied characteristics.
Limitations of the review
A major limitation of this review is that meta-analysis was not possible, thus limiting the strength of conclusions. Factors that prohibited meta-analyses included inconsistent use of anchor scores resulting in disparate interpretations of high and low outcome score; outcomes reported with various statistics (mean score with SDs, interquartile ranges, standard errors, or confidence intervals, or percentage changes); different versions of the EHFScBS scale used; two different instruments measuring self-care behaviors; and different data collection periods. Without being able to determine overall effect size for TM, it is not possible to speak to the strength of the positive effect. As a result, this review cannot determine whether TM is superior to other interventions.
Heart failure is a progressive disease and as comorbidities increase with age, this variable could be a factor that impacts effectiveness of therapy. Comorbidities can negatively affect the ability to perform self-care behaviors. 4 This team was unable to determine the confounding impact of comorbidities across studies as there was inconsistency in reporting. Some did not report presence of comorbidities, 10,72,75,77 another only reported the presence of a comorbidity as a dichotomous variable (yes/no), 78 while others listed frequency of specific diseases or risk behaviors (eg, smoking). 11,15,17,73,74,76 However, the number and type of comorbidities reported varied considerably. One study reported comorbidities using an index score that reported either low, medium, or high. 38 Without homogeneity of any of the variables, new statistical results could not be achieved through meta-analyses. Studies with older age ranges could be expected to have more comorbidities; however, in this review, age did not appear to negatively impact self-care outcomes.
Review findings may not be generalizable to all populations due to under-representation of subgroups, other forms of TM not reviewed, and differences in health care systems. Additionally, variation in personal and community characteristics (eg, social support and living conditions), which were not reflected in these studies, could impact TM effectiveness. While self-care behaviors improved with TM, it does not ensure better HF outcomes and was not the focus of this review.
Search limitations must be noted because search terms may have prevented the capture of additional publications. Due to language constraints, important studies were potentially excluded that would have helped broaden the extant literature for inclusion. Since both the EHFScBS and SCHFI instruments have been translated into multiple languages and have undergone psychometric validation, there are likely studies using either of the two instruments that were published in other languages. While we used the broadest MeSH terms for TM and self-care, if published studies did not link their definition to the MeSH terms we used, this could also have impacted the results. For instance, authors may not have identified their telephone intervention as a TM. Finally, the searches were completed in November 2019, and there may be new primary research studies published that have not been included in this review.
Conclusions
The evidence from this review provides support for using TM as an effective therapy for increasing self-care management in adult community-dwelling HF populations. Effective TM interventions ranged from simple telephone-based support to sophisticated remote monitoring devices. Heart failure is a prevalent chronic condition globally; therefore, it was important to determine effectiveness of this specific intervention on increasing patients' involvement in their own care. The importance of improving self-care behaviors cannot be overstated, as involving patients in their own care improves disease outcomes.
Recommendations for practice
Given the moderate to strong quality of the studies, and the largely consistent findings of statistical and clinical significance, TM should be considered a valid intervention to improve self-care behavior among adult community-dwelling HF patients. Care providers can choose from a variety of TM options to enhance patients' engagement in self-care behaviors. However, due to the possibility of attenuated effect over time, health care providers must be alert to the possibility of declining self-care. Re-motivation strategies may be needed to sustain benefits gained during early periods of TM.
Originally TM was used to extend health care access to rural settings, with limited expansion to specific settings lacking specialists. 80 Insurance reimbursement was also a limiting factor in the use of TM (eg, Medicare limited to use in US rural settings, poor reimbursement rates). However, the public health emergency from the COVID-19 pandemic and resultant social sequestration has forced expansion of TM use and temporary full reimbursement across all areas of health care. 81 Local, state, and federal policy-makers can use the results of this systematic review to refine reimbursement policies and procedures to maintain the expanded use and level of remuneration for TM. Greater health care access and achievement of primary care milestones through use of TM may help mitigate the gap caused by social determinants of health. Professional organizations can also use this systematic review to increase support for this practice.
Recommendations for research
As the use of TM increases, there is great potential to reduce gaps in the science of TM. Research is needed to determine which components of TM are most critical and whether there is value-added with layered or multi-modal approaches. Since there were a variety of TM systems included in this review, replication studies are needed to compare multiple systems to determine specificity and sensitivity in the science of TM as therapy. In the current culture of cost containment, head-to-head comparisons and cost-effectiveness analyses are needed. Despite strict security and privacy regulations, the explosive use of telehealth was made possible through temporary relaxation of government restrictions and additional funding. 81 Given the current climate, measuring the effect of telehealth office visits in conjunction with TM versus face-to-face office visits will be a popular and useful topic to study.
In general, diverse populations are under-represented in research, and telehealth studies are no exception. A priority would be to test effectiveness of TM equipment among participants with demographics mirroring the HF patient population. A pivotal question remains: Is the effectiveness of TM due to increased provider interaction in response to alerts, due to patients' increased ability to adjust self-care behaviors, or both? There is still a need for greater understanding of the mechanism by which TM improves outcomes. 83 Existing theoretical models may provide a structure to test hypotheses and could explain the mechanism of self-care behavior in HF patients. 37,46,47 Additionally, since the two instruments used by the study authors in this review rely on self-report of behaviors, subjectivity is built in to the results. Future studies using the EHFScBS or SCHFI with parallel objective measures would provide new knowledge on the impact of self-care on disease outcome. Finally, we are unable to make a statement regarding sustained effectiveness of this therapy. All studies administered treatment and measured the effect at short intervals. We now know short-term TM is effective; however, the use of TM for 12 months had conflicting results. Studies of longer duration must be conducted to see if TM provides a sustained effect or whether other factors, such as novelty of equipment wearing off, increased self-care burden, or progressively worsening HF, attenuate effects. The possibility of long-term effectiveness for TM is an exciting prospect.

Patients were asked to measure 5 physiologic parameters and answer symptom questions daily, n = 123. Control: Using a cross-over design, participants were compared to patients with a similar level of control of symptoms and signs at end of Phase 1 (well- vs poorly-controlled) and usual care with diuretic minimization or optimization, respectively. 84 However, no CG findings were reported.
EHFScBS-9
Note: EHFScBS-9 used but combined score not reported. Reported change in individual SC behaviors including daily weighing, fluid restriction, low-salt diet, medication intake, and physical activity.
Percentage of patients that were adherent to individual self-care behaviors at baseline and study end, and relative increase (not absolute increase).

CG, control group; CI, confidence interval; ECG, electrocardiogram; EHFScBS-9, 9-item European Heart Failure Self-care Behaviour Scale; EHFScBS-12, 12-item European Heart Failure Self-care Behaviour Scale; HF, heart failure; ICT, information and communication technology; IG, intervention group; IQR, interquartile range; ns, non-significant; NYHA, New York Heart Association; RCT, randomized controlled trial; SC, self-care; SCHFI, Self-Care of Heart Failure Index; SD, standard deviation; SE, standard error; TM, telemonitoring.
For the stopped Weinstein sector associated with any fanifold recently introduced by Gammage--Shende, we construct a Weinstein sectorial cover which allows us to describe homological mirror symmetry over the fanifold as an isomorphism of cosheaves of categories. In a special case, our Weinstein sectorial cover gives a lift of the open cover for the global skeleton of a very affine hypersurface computed in their previous work.
Introduction
The Fukaya category of a symplectic manifold is a highly nontrivial invariant. It is an A∞-category whose homotopy category has Lagrangians as objects, and morphisms between two Lagrangians are the linear span of their intersection points up to suitable perturbation. The A∞-structure is controlled by counts of pseudoholomorphic disks with Lagrangian boundary conditions, which depend on the global geometry of the symplectic manifold. Usually, computing Fukaya categories is extremely hard because of this global nature. However, under suitable assumptions, Fukaya categories exhibit good local-to-global behaviors, which allow us to reduce complicated analysis of the global geometry of symplectic manifolds to more tractable local analysis.
For instance, via sectorial descent, recently established in [GPS2] by Ganatra-Pardon-Shende, the wrapped Fukaya category W(X) of a Weinstein sector X can be computed by gluing those of local pieces, given a Weinstein sectorial cover X = X_1 ∪ ... ∪ X_n. More precisely, the canonical functor

colim_{∅ ≠ I ⊂ {1,...,n}} W(∩_{i∈I} X_i) → W(X)

induced by the pushforward functors along inclusions of Weinstein sectors from [GPS1] is a pretriangulated equivalence. In other words, wrapped Fukaya categories are cosheaves with respect to Weinstein sectorial covers. The result is valid also for partially wrapped Fukaya categories of stopped Weinstein sectors with mostly Legendrian stops away from boundaries.
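For orientation, when the cover has only two sectors the colimit over nonempty subsets reduces to a homotopy pushout; the following LaTeX sketch records this special case (this restatement is ours, in the notation above, not a displayed formula from [GPS2]):

```latex
% Sectorial descent for a two-element Weinstein sectorial cover
% X = X_1 \cup X_2: the square of pushforward functors
%
%   \mathcal{W}(X_1 \cap X_2) --> \mathcal{W}(X_1)
%             |                         |
%             v                         v
%        \mathcal{W}(X_2)     -->   \mathcal{W}(X)
%
% is a homotopy pushout of pretriangulated A_\infty-categories:
\mathcal{W}(X) \simeq
  \mathcal{W}(X_1) \amalg_{\mathcal{W}(X_1 \cap X_2)} \mathcal{W}(X_2)
```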
Considering its computational power, given a Weinstein sector, it is natural to seek its Weinstein sectorial cover. Moreover, when working with homological mirror symmetry (HMS), one often needs such a cover to be compatible with that of its mirror. For instance, if the latter cover corresponds to an open cover of a relative skeleton, then a lift of it to a Weinstein sectorial cover will have the desired compatibility. So far, there are not so many interesting examples of Weinstein sectorial covers [CKL, GJ, GL22]. One reason for this might be easier access to other local-to-global behaviors [GS1, GS2, Lee16, MSZ, PS1, PS2, PS3], which require us to compute a skeleton at least locally. Although these might suffice to solve a given problem, the construction of Weinstein sectorial covers should still be a fundamental geometric question.
In this paper, we give a Weinstein sectorial cover of the stopped Weinstein sector associated with a fanifold introduced in [GS1] by Gammage-Shende to discuss HMS for more general spaces than toric stacks. A fanifold is a stratified space obtained by gluing rational polyhedral fans of cones, which provides the organizing topological and discrete data for HMS at large volume. To a fanifold Φ they associated an algebraic space T(Φ) obtained as the gluing of the toric varieties T_Σ associated with the fans Σ. Based on an idea from SYZ fibrations, they constructed its mirror W(Φ) by inductive Weinstein handle attachments. The guiding principle behind their construction of the stopped Weinstein sector W(Φ) was that the FLTZ Lagrangians L(Σ) and the projections L(Σ) → Σ should glue to yield its relative skeleton L(Φ) and a map π : L(Φ) → Φ.
As predicted by Gammage-Shende [GS1, Remark 4.5], the canonical lifts of the projections L(Σ) → Σ glue to yield a version of an A-side SYZ fibration W(Φ) → Φ, after some modifications.

Theorem 1.1 ([Mor, Theorem 1.2]). There is a filtered stratified fibration π : W(Φ) → Φ restricting to the projection of the relative skeleton, which is homotopic to a filtered stratified integrable system with noncompact fibers. If the fan Σ_S associated to any stratum S ⊂ Φ is proper, then the homotopy becomes trivial.
The lift π completes a global combinatorial duality for the mirror pair (W(Φ), T(Φ)) over Φ. Moreover, it enables us to lift any cover of Φ to W(Φ). Inspired by the idea in [GPS2, Example 1.34] and using this aspect of π, we construct a Weinstein sectorial cover of W(Φ) with the desired compatibility in Section 4.
Theorem 1.2. There exists a Weinstein sectorial cover of W(Φ). Moreover, it restricts to an open cover of the relative skeleton L(Φ) compatible with a cover of T(Φ).
When Φ is the fanifold from [GS1, Example 4.23], L(Φ) coincides with the skeleton Core(H) [GS2, Theorem 6.2.4] of a very affine hypersurface H. In Section 5, we show that our cover gives a lift of the open cover [GS2, Corollary 4.3.2] of Core(H), giving an affirmative answer to the conjecture raised in [GS2, Section 2.4] and making the discussion of [GS2, Section 2.1] completely rigorous.
Via sectorial descent we obtain a description of HMS for (W(Φ), T(Φ)) as an isomorphism of cosheaves of categories. See Section 4.3 for the precise definitions of the associated cosheaves. The isomorphism can be regarded as dual to [GS1, Theorem 5.3] and encodes the gluing of the coherent-constructible correspondence [Kuw20] for local pieces. Our construction of the cover is close to the more general strategy suggested in [BC23] by Bai-Côté. When X = T*M, they triangulate M and cover X by the cotangent bundles T* star(v_α) of the stars of the vertices v_α. When X is a polarizable Weinstein manifold, they deform X to another Weinstein manifold with arboreal skeleton, which by [BC23, Section 4.2] admits a canonical Whitney stratification, in particular, a triangulation due to Goresky. They then cover the deformation of X by arboreal thickenings of the stars of the vertices. Unfortunately, there are obstructions to making their strategy rigorous, which involve developing a good theory of arboreal sectors. We would like to try to resolve these obstructions in the future.
Another general strategy might come from simplicial decompositions introduced in [Asp23] by Asplund. By [Asp23, Theorem 1.4] a simplicial decomposition of a Weinstein manifold X bijectively corresponds to a sectorial cover of X which is good in the sense of [Asp23, Section 1]. Since the convex completion of W(Φ) is an ordinary Weinstein manifold, by [Asp23, Lemma 3.13] one could refine our Weinstein sectorial cover to a good sectorial cover of the convex completion. It would be an interesting problem to find a simplicial decomposition of a Weinstein manifold corresponding to a Weinstein sectorial cover and compatible with the cover of its given mirror.
Gluing of stopped Weinstein sectors
In this section, we review some basic properties of Liouville sectors introduced in [GPS1, Section 2] by Ganatra-Pardon-Shende and how to glue them together. Throughout the paper, we work only with Liouville manifolds of finite type, so that any Liouville manifold X will be the completion of some Liouville domain X_0. Then its skeleton Core(X) will always be compact. We denote by ∂_∞X_0 the actual boundary of X_0, which can be identified with the imaginary boundary ∂_∞X of X, to preserve the symbols ∂X_0, ∂X.
2.1. Stopped Weinstein sectors. All Liouville manifolds of our interest are Weinstein. Namely, the Liouville vector field Z of each X is gradient-like for a Morse-Bott function ϕ : X → R which is constant on the cylindrical end (R_{t≥0} × ∂_∞X, e^t(λ|_{∂_∞X})). Hence Core(X) will also be isotropic by [CE12, Lemma 11.13(a)].
Definition 2.1 ([GPS1, Definition 2.4]). A Liouville sector is a Liouville manifold-with-boundary satisfying the following equivalent conditions:
• For some α > 0 there exists an α-defining function I : ∂X → R with ZI = αI near infinity and dI|_C > 0, where C is the characteristic foliation of the hypersurface ∂X ⊂ X, oriented so that ω(C, N) > 0 for any inward pointing vector N.
• The boundary of the contact hypersurface ∂_∞X ⊂ X is convex, and there is a diffeomorphism ∂X ≅ F × R sending C to the foliation of F × R by leaves {pt} × R.
We call the symplectic reduction F, which is a Liouville manifold, the symplectic boundary of X. An inclusion i : X → X′ of Liouville sectors is a proper map which is a diffeomorphism onto its image, satisfying i*λ′ = λ + df for compactly supported f. A trivial inclusion of Liouville sectors is one for which i(X) may be deformed into X′ through Liouville sectors included in X.
Remark 2.2. The inequality dI|_C > 0 is equivalent to the Hamiltonian vector field X_I being outward pointing along ∂X. Small deformations of X within the class of Liouville manifolds-with-boundary are again Liouville sectors.
Definition 2.3 ([GPS1, Remark 2.8]). An open Liouville sector is a pair (X, ∂_∞X) of an exact symplectic manifold X and a contact manifold ∂_∞X, together with a germ near +∞ of a codimension 0 embedding of the symplectization S∂_∞X into X. Here, the embedding strictly respects Liouville forms, and the pair (X, ∂_∞X) is exhausted by Liouville sectors. Being exhausted by Liouville sectors means that any subset of X which, away from a compact subset of X, is the cone over a compact subset of ∂_∞X is contained in some Liouville sector.

Example 2.4. For any manifold M the cotangent bundle T*M is an open Liouville sector. An exhaustion is given by the family of Liouville subsectors T*M_0 for compact codimension 0 submanifolds-with-boundary M_0 ⊂ M. For any Liouville sector X its interior is an open Liouville sector.

For a Liouville sector X we denote by W(X) the wrapped Fukaya category [GPS1, Definition 3.36]. It is an A∞-category whose objects are exact Lagrangians inside X which are cylindrical at infinity, and whose morphisms are calculated by wrapping Lagrangians. According to [GPS1, Section 3.6], any inclusion X → X′ of Liouville sectors induces the pushforward functor W(X) → W(X′), which makes the assignment X ↦ W(X) covariantly functorial. We use the same symbols to denote the counterparts for open Liouville sectors.
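The model example T*M can be made explicit; the following is a standard sketch (our choice of canonical coordinates, consistent with the conventions Z, λ, ϕ above):

```latex
% Standard Weinstein structure on a cotangent bundle T^*M:
% in canonical coordinates (q, p),
\lambda = p\,dq, \qquad \omega = d\lambda = dp \wedge dq,
% the Liouville vector field generates fiberwise radial dilation
Z = p\,\partial_p, \qquad \iota_Z \omega = \lambda,
% and Z is gradient-like for the fiberwise quadratic function
\phi(q,p) = \tfrac{1}{2}\,|p|^2
% (for any choice of metric), whose critical locus is the zero
% section; hence \operatorname{Core}(T^*M) = 0_M \cong M.
```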
Wrapped Fukaya categories are invariant under deformations in the following sense.
When f = ∅, the above definition specializes to Definition 2.1. For a stopped Liouville sector (X, f) we denote by W(X, f) the partially wrapped Fukaya category [GPS2, Definition 2.31]. It is an A∞-category whose objects are exact Lagrangians inside X which are cylindrical at infinity and disjoint at infinity from f, and whose morphisms are calculated by wrapping Lagrangians in (∂_∞X)° \ f. Similar to wrapped Fukaya categories, any inclusion (X, f) → (X′, f′) of stopped Liouville sectors induces the pushforward functor W(X, f) → W(X′, f′), which makes the assignment (X, f) ↦ W(X, f) covariantly functorial.
Although the partially wrapped Fukaya category is defined for an arbitrary stopped Liouville sector (X, f), the results in [GPS2] are generally sharpest when X is Weinstein and f is mostly Legendrian.
Definition 2.7 ([GPS2, Definition 1.7]). A closed subset f of a contact manifold Y^{2n−1} is mostly Legendrian if it admits a decomposition f = f^{subcrit} ∪ f^{crit} for which f^{subcrit} is closed and contained in the smooth image of a second countable manifold of dimension < n − 1, and f^{crit} ⊂ Y \ f^{subcrit} is a Legendrian submanifold. The notion of a mostly Lagrangian closed subset of a symplectic manifold is defined analogously.
2.2. Weinstein pairs.
Lemma 2.8 ([GPS1, Lemma 2.13]). Let X̄_0 be a Liouville domain and A ⊂ ∂_∞X̄_0 a codimension 0 submanifold-with-boundary for which there exists a function I : A → R with R_λI > 0 such that the contact vector field V_I is outward pointing along ∂A. Then X = X̄ \ (A° × R_{>0}) is a Liouville sector, where X̄ denotes the conic completion of X̄_0.

Definition 2.9 ([GPS1, Definition 2.14]). A sutured Liouville domain (X̄_0, F_0) is a Liouville domain X̄_0 together with a codimension 1 submanifold-with-boundary F_0 ⊂ ∂_∞X̄_0 and a contact form λ defined over a neighborhood Nbd_{∂_∞X̄_0}(F_0) of F_0 in ∂_∞X̄_0 such that (F_0, λ|_{F_0}) is a Liouville domain. A sutured Liouville manifold is a Liouville manifold X̄ together with a codimension 1 submanifold-with-boundary F_0 ⊂ ∂_∞X̄ and a contact form λ defined over a neighborhood Nbd_{∂_∞X̄}(F_0) of F_0 in ∂_∞X̄ such that (F_0, λ|_{F_0}) is a Liouville domain.
Remark 2.12. The locus L ⊂ X is necessarily noncompact unless F_0 is empty. When both X, F_0 are Weinstein, L is a singular isotropic spine. In particular, X deforms down to a small regular neighborhood of L.
Given a sutured Liouville domain (X̄_0, F_0), the Reeb vector field R_λ is transverse to F_0 since dλ|_{F_0} is symplectic, and it determines a local coordinate chart F_0 × R_{|t|≤ϵ} → ∂_∞X̄_0 in which we have λ = λ|_{F_0} + dt. The contact vector field V_t associated with the function t is given by Z|_{F_0} + t∂_t, which is outward pointing along ∂(F_0 × R_{|t|≤ϵ}). Hence (X̄_0, F_0) determines a codimension 0 submanifold-with-boundary A = F_0 × R_{|t|≤ϵ} satisfying the hypotheses of Lemma 2.8. In particular, (X̄_0, F_0) gives rise to a Liouville sector.
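As a sanity check (our computation, using only that a Liouville form vanishes on its own Liouville vector field, λ(Z) = ι_Z ι_Z dλ = 0), one can verify directly that V_t is the contact vector field of the function t:

```latex
% On Nbd(F_0) \cong F_0 \times \mathbb{R}_{|t| \le \epsilon} with contact
% form \alpha = \lambda|_{F_0} + dt, set V_t = Z|_{F_0} + t\,\partial_t.
% Then
\alpha(V_t)
  = \lambda|_{F_0}\bigl(Z|_{F_0}\bigr) + dt\bigl(t\,\partial_t\bigr)
  = \iota_{Z|_{F_0}} \iota_{Z|_{F_0}}\, d\lambda|_{F_0} + t
  = t,
% which is the defining property \alpha(V_H) = H of the contact
% vector field of a function H.  Along \partial(F_0 \times
% \mathbb{R}_{|t| \le \epsilon}) both summands point outward:
% Z|_{F_0} along \partial F_0, and t\,\partial_t at t = \pm\epsilon.
```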
Conversely, any Liouville sector arises, up to deformation, from a sutured Liouville domain which is unique in the homotopical sense. The following lemma plays a role in the proof of Lemma 2.17 below.
valid over some neighborhoods of the respective boundaries, where I = y is the imaginary part of the coordinate on C, (F, λ_F) is a Liouville manifold, and f : F × C_{Re≥0} → R satisfies the following conditions:
• f is supported inside F_0 × C for some Liouville domain F_0 ⊂ F.
• f coincides with some f_{±∞} : F → R for sufficiently large |I|.
Definition 2.15 ([GPS1, Section 7]). Let X be a Liouville sector. The convex completion is the pair (X̄, λ_{X̄}) of an exact symplectic manifold X̄ containing X and a Liouville form λ_{X̄} extending λ. If X satisfies the two bulleted properties from Lemma 2.14, then the convex completion (X̄, λ_{X̄}) defines a Liouville manifold.
Lemma 2.17 ([GPS1, Lemma 2.32]). Any Liouville sector arises, up to deformation, from a sutured Liouville manifold which is unique in the homotopical sense. Moreover, the convex completion of the Liouville sector X associated with a sutured Liouville manifold (X̄, F_0) coincides with X̄, and the inclusion X → X̄ is the obvious one.
2.3. Gluing of Weinstein pairs. Given a Weinstein pair (X̄_0, F_0), by [Eli, Proposition 2.9] one can always modify the Liouville form λ for ω and the Morse-Bott function ϕ_{F_0} to be adjusted in the following sense.
The above hypersurface (F_0, ∂_∞F_0) is called the Weinstein soul for the splitting hypersurface P. Due to Lemma 2.21 below, P is contactomorphic to the contact surrounding U_ϵ(F_0) of its Weinstein soul.
Definition 2.20 ([Eli, Section 2]). Let F_0 be a closed hypersurface in a (2n − 1)-dimensional manifold and ξ a germ of a contact structure along F_0 which admits a transverse contact vector field V. The invariant extension of the germ ξ is the canonical extension ξ̃ on F_0 × R which is invariant with respect to translations along the second factor and whose germ along any slice coincides with ξ.

Lemma 2.21 ([Eli, Lemma 2.6]). Let F_0 be a closed (2n − 2)-dimensional manifold and ξ a contact structure on P = F_0 × [0, ∞) which admits a contact vector field V inward transverse to F_0 × {0} such that its trajectories intersecting F_0 × {0} fill the whole manifold P. Then (P, ξ) is contactomorphic to (F_0 × [0, ∞), ξ̃), where ξ̃ is the invariant extension of the germ of ξ along F_0 × {0}. Moreover, for any compact set C ⊂ P with F_0 × {0} ⊂ Int C there exists a contactomorphism (P, ξ) → (F_0 × [0, ∞), ξ̃), which equals the identity on F_0 × {0} and sends V|_C to the vector field ∂_t.
By definition we have
Large volume limit fibrations over fanifolds
In this section, after the construction of the stopped Weinstein sector W(Φ) associated with a fanifold Φ introduced in [GS1] by Gammage-Shende, we review the construction of the filtered stratified fibration π : W(Φ) → Φ from Theorem 1.1, which restricts to the fibration π : L(Φ) → Φ of the relative skeleton L(Φ). The stopped Weinstein sector W(Φ) is obtained by inductively attaching products of the cotangent bundles of real tori and strata of Φ. Each step requires us to modify Weinstein structures near gluing regions.
3.1. Fanifolds. Throughout the paper, we will work only with those stratified spaces Φ which satisfy the following conditions:
(i) Φ has finitely many strata.
(ii) Φ is conical in the complement of a compact subset.
(iii) Φ is given as a germ of a closed subset in an ambient manifold M.
(iv) The strata of Φ are smooth submanifolds of M.
(v) The strata of Φ are contractible.
We will express properties of Φ in terms of M as long as they only depend on the germ of Φ. Taking the normal cone C_SΦ ⊂ T_SM for each stratum S ⊂ Φ, one obtains a stratification on C_SΦ induced by that of a sufficiently small tubular neighborhood T_SM → M.

Definition 3.1. The stratified space Φ is smoothly normally conical if for each stratum S ⊂ Φ some choice of tubular neighborhood T_SM → M induces locally near S a stratified diffeomorphism C_SΦ → Φ, which in turn induces the identity C_SΦ → C_SΦ.

Definition 3.2. We write Fan↠ for the category whose objects are pairs (M, Σ) of a lattice M and a stratified space Σ given by finitely many rational polyhedral cones in M_R = M ⊗_Z R. For any (M, Σ), (M′, Σ′) ∈ Fan↠, a morphism (M, Σ) → (M′, Σ′) is given by the data of a cone σ ∈ Σ and an isomorphism M/⟨σ⟩ ≅ M′ such that Σ′ = Σ/σ = {τ/⟨σ⟩ ⊂ M_R/⟨σ⟩ | τ ∈ Σ, σ ⊂ τ}. We denote by Fan↠_{Σ/}, for (M, Σ) ∈ Fan↠, the full subcategory of objects (M′, Σ′) with Σ′ = Σ/σ for some σ ∈ Σ.
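To fix ideas, here is a toy illustration of the morphisms in Fan↠ (our example, not taken from [GS1]), using the complete fan of P¹:

```latex
% A morphism in Fan_{\twoheadrightarrow}: let M = \mathbb{Z} and let
% \Sigma be the complete fan of \mathbb{P}^1,
\Sigma = \{\, \mathbb{R}_{\le 0},\ \{0\},\ \mathbb{R}_{\ge 0} \,\}
  \subset M_{\mathbb{R}} = \mathbb{R}.
% Choosing \sigma = \mathbb{R}_{\ge 0} \in \Sigma gives
% \langle\sigma\rangle = \mathbb{R}, so M' = M/\langle\sigma\rangle \cong 0,
% and the only cone \tau \supset \sigma is \sigma itself, whence
\Sigma/\sigma
  = \{\, \tau/\langle\sigma\rangle \mid \tau \in \Sigma,\ \sigma \subset \tau \,\}
  = \{0\} \subset M'_{\mathbb{R}},
% the fan of a point.  The pair (\sigma,\ M/\langle\sigma\rangle \cong M')
% is the data of a morphism (M, \Sigma) \to (M', \Sigma/\sigma).
```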
Definition 3.3 ([GS1, Definition 2.4]). A fanifold is a smoothly normally conical stratified space Φ ⊂ M equipped with the following data:
• A functor Exit(Φ) → Fan↠ from the exit path category of Φ whose value on each stratum S is a pair (M_S, Σ_S) of a lattice M_S and a rational polyhedral fan Σ_S ⊂ M_{S,R}, called the associated normal fan.
• For each stratum S ⊂ Φ a trivialization ϕ_S : T_SM ≅ M_{S,R} of the normal bundle carrying the induced stratification on C_SΦ to the standard stratification induced by Σ_S.
These data must make the evident diagrams commute. In addition, the normal geometry to σ is the geometry of Σ/σ. Due to the condition (v), Exit_S(Φ) is equivalent to the poset Exit(Σ_S), the full subcategory of exit paths starting at S contained inside a sufficiently small neighborhood of S.

Lemma 3.5 ([GS1, Remark 2.12]). Any fanifold Φ admits a filtration Φ_0 ⊂ Φ_1 ⊂ ⋯ ⊂ Φ, where the Φ_k are subfanifolds defined as sufficiently small neighborhoods of the k-skeleta Sk_k(Φ), the closures of the subsets of k-strata.
Proof. Suppose that Φ_{k−1} has the desired filtration, where Φ_0 is the disjoint union of the Σ_P over all 0-strata P ⊂ Φ, equipped with the canonically induced fanifold structure. The normal geometry to a k-stratum S ⊂ Φ is the geometry of the normal fan Σ_S ⊂ M_{S,R} by definition. The ideal boundary ∂_∞S might have some subset ∂_{in}S which is in the direction of the interior of Φ. Perform the gluing Φ_{k−1} #_{(Σ_S × ∂_{in}S)} (Σ_S × S), which equals Σ_S × S when Φ_{k−1} is empty and equals Φ_{k−1} unless S is an interior k-stratum, i.e., ∂_{in}S = (∂S)°. Let Φ_k be the result of such gluings for all k-strata S ⊂ Φ. Then Φ_k ⊂ Φ is a subfanifold containing Sk_k(Φ) since the products Σ_S × S are canonically fanifolds. □

3.2. Weinstein handle attachments. Let W be a Weinstein domain with a smooth Legendrian L ⊂ ∂_∞W which extends to a neighborhood in the Weinstein domain W̄ where the Liouville flow on W̄ gets identified with the translation action on R_{≤0}. By [Eli, Proposition 2.9] one can modify the Weinstein structure near L so that the Liouville flow gets identified with the cotangent scaling on T*(L × R_{≤0}). We denote by W̄ its conic completion. Then η induces a neighborhood η : Nbd_{W̄}(L) → T*(L × R_{≤0}).

Remark 3.6. From [GPS1, Lemma 2.13] it follows that W̄ is a Weinstein sector.
Given Weinstein domains W, W′ with smooth Legendrian embeddings, one obtains a Weinstein sector whose skeleton glues the relative skeletons Core(W, L) and Core(W′, L′), where Core(W, L) is the relative skeleton of (W, λ) associated with L, i.e., the disjoint union Core(W) ⊔ RL ⊂ W of Core(W) and the saturation of L by the Liouville flow.

Definition 3.7 ([GS1, Definition 4.9, Lemma 4.10]). Let W be a Weinstein domain with a smooth Legendrian L ⊂ ∂_∞ W. Then a Lagrangian L ⊂ W is biconic along L if and only if it is conic and its image is conic for some ξ : Nbd_W(L) → T*(L × R_{≤0}), where the Liouville flow on W gets identified with the translation action on R_{≤0}.
Given biconic Lagrangians L ⊂ W and L′ ⊂ W′, let RL, RL′ denote their saturations under the Liouville flows. Since any biconic subsets in W, W′ remain conic in W #_L W′, the gluing L #_L L′ is a conic Lagrangian.
For a closed manifold M and a compact manifold-with-boundary S, consider the Weinstein manifold-with-boundary W′ = T*M × T*S with a smooth Legendrian L = M × ∂S taken to be a subset of the zero section. Here, we equip T*S with the tautological Liouville structure, so that the Liouville vector field is the generator of fiberwise radial dilation. Then the Liouville flow on W′ near L is the cotangent scaling. We write W̄′ for a Weinstein domain completing to W′. One can check that the above gluing procedure carries over, although L does not belong to ∂_∞ W′.

Remark 3.8. As explained in [GPS1, Example 2.3], when S is noncompact, T*S equipped with the tautological Liouville form is not even a Liouville manifold. However, if S is the interior of a compact manifold-with-boundary, then T*S becomes a Liouville manifold after a suitable modification of the Liouville form near the boundary making it convex. We use the same symbol T*S to denote the result. Since the modification yields additional zeros of the Liouville vector field on ∂_∞ T*S, we will regard it as a stopped Weinstein sector.

Definition 3.9 ([GS1, Definition 4.11]). Let W be a Weinstein domain with a smooth Legendrian embedding M × ∂S → ∂_∞ W.
(1) A handle attachment is the gluing W #_{M×∂S} W′ respecting the product structure.
(2) If L ⊂ W is a biconic subset along M × ∂S which locally factors as a product with some fixed conic Lagrangian L_S, then the extension of L through the handle is the gluing L #_{M×∂S} (L_S × S) respecting the product structure.
Definition 3.10 ([FLTZ12, Section 3.1]). Let Σ ⊂ M_R be a rational polyhedral fan and let the real n-torus be Hom(M, R/Z) = M∨_R/M∨. The FLTZ Lagrangian L(Σ) is the union with the following properties.
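The displayed union defining the FLTZ Lagrangian was elided in extraction. Recalling the standard definition from the FLTZ papers (sign conventions vary between references; the dual torus is written T here for clarity), it reads, up to signs:

```latex
% FLTZ Lagrangian attached to a rational polyhedral fan \Sigma \subset M_\mathbb{R};
% T = M^\vee_\mathbb{R}/M^\vee is the real n-torus, so T^*T \cong T \times M_\mathbb{R}.
\[
\mathbb{L}(\Sigma) \;=\; \bigcup_{\sigma \in \Sigma} \sigma^{\perp} \times (-\sigma)
\;\subset\; T \times M_{\mathbb{R}} \;\cong\; T^{*}T,
\qquad
\sigma^{\perp} \;=\; \{\, \theta \in T \;:\; \langle \theta, v \rangle = 0
\ \text{for all}\ v \in \operatorname{span}(\sigma) \,\}.
\]
```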
3.3. Construction. The stopped Weinstein sector W(Φ) is constructed inductively, together with L(Φ) and π, along the filtration (3.1) of Φ. When k = 0, let W(Φ_0) = ⊔_P T*M_P and L(Φ_0) = ⊔_P L(Σ_P), where each T*M_P is equipped with the canonical Liouville structure. We fix an identification T*M_P ≅ M_P × M_{P,R} to regard each L(Σ_P) as a conic Lagrangian submanifold of T*M_P. Let π_0 : L(Φ_0) → Φ_0 be the map induced by the projection to cotangent fibers. Then the triple (W(Φ_0), L(Φ_0), π_0) satisfies the conditions (1), …, (4). Define π̄_0 as the composition ret_0 ∘ π̃_0, where π̃_0 denotes the disjoint union of the projections to the cotangent fibers T*M_P → M_{P,R} and ret_0 : ⊔_P M_{P,R} → ⊔_P Σ_P denotes the disjoint union of maps induced by retractions, which are the canonical extensions of piecewise projections onto facets in ∂Φ from outwards along their normal directions.
Suppose that we have constructed the triple (W(Φ_{k−1}), L(Φ_{k−1}), π_{k−1}) for the subfanifold Φ_{k−1}. Let L_k be the disjoint union, over all interior l-strata S^(l) ⊂ Φ with k ≤ l, of the corresponding smooth Legendrians. There are smooth Legendrian embeddings. Due to the chart [GS1, Theorem 4.1(1)] for the previous step and Lemma 3.11(4), we may define W(Φ_k) as the handle attachment along L_k. There is a standard neighborhood, and we define L(Φ_k) as the extension through the disjoint union of the handles. Let π_k : L(Φ_k) → Φ_k be the map induced by π_{k−1} and the projections from T*M_{S^(k)} × T*S^(k)° to the cotangent fiber direction in T*M_{S^(k)} and the base direction in T*S^(k)°. The triple (W(Φ_k), L(Φ_k), π_k) satisfies the conditions (1), …, (4).
Here, we explain how to construct π̄_k when attaching the handle. Define π̄_k as the map canonically induced by π̄_{k−1}, π̄_{0,S^(k)} and the projection T*S^(k)° → S^(k)° to the base, where π̃_{0,S^(k)} : T*M_{S^(k)} → M_{S^(k),R} is the projection to the cotangent fibers. Here, we precompose the contraction of the cylindrical ends along the negative Liouville flow to the part of ∂_∞ W(Φ_{k−1}) intersecting ∂T*S^(k)°. Let ret_k : M_{S^(k),R} → Σ_{S^(k)} be the map induced by a retraction which is the canonical extension of piecewise projections onto facets in ∂Φ from outwards along their normal directions. Define π̄_k as the map canonically induced by π̄_{k−1}, π̄_{0,S^(k)} = ret_k ∘ π̃_{0,S^(k)} and the projection T*S^(k)° → S^(k)° to the base.
4. Weinstein sectorial covers
4.1. Sectorial covers. According to [GPS2], given a Liouville sector X admitting a Weinstein sectorial cover, the wrapped Fukaya category W(X) can be computed by gluing those of the pieces of the cover.
Definition 4.1 ([GPS2, Definition 12.2]). Let X be a Liouville manifold-with-boundary. A collection of cylindrical hypersurfaces H_1, …, H_n ⊂ X is sectorial if their characteristic foliations C_1, …, C_n are ω-orthogonal over their intersections and there exist functions I_i satisfying the identities (4.1). We also allow immersed hypersurfaces H → X, with I_i defined on Nbd^Z_X(H_i) regarded as an immersed codimension-zero submanifold of X, and the subscripts i in the identities (4.1) indexing the "local branches" of H.

Definition 4.2 ([GPS2, Definition 12.14]). A Liouville sector-with-sectorial-corners is a Liouville manifold-with-corners whose boundary, regarded as an immersed hypersurface, is sectorial.
Definition 4.3 ([GPS2, Definition 12.19, Remark 12.20]). Let X be a Liouville manifold-with-boundary. Suppose that X admits a cover X = ⋃_{i=1}^n X_i by Liouville sectors-with-sectorial-corners X_i ⊂ X, with precisely two faces ∂_1 X_i = X_i ∩ ∂X and the point-set topological boundary ∂_2 X_i of X_i, meeting along the corner locus ∂X ∩ ∂_2 X_i = ∂_1 X_i ∩ ∂_2 X_i. Such a cover is sectorial if the collection of boundaries ∂X, ∂_2 X_1, …, ∂_2 X_n is sectorial. We also allow X_i without corners, requiring ∂_2 X_i to be disjoint from ∂X and ∂X, ∂_2 X_1, …, ∂_2 X_n to be sectorial.
For any sectorial cover X = ⋃_{i=1}^n X_i, stratify X by strata X_{I,J,K} ranging over partitions I ⊔ J ⊔ K = {1, …, n}. The closure of each X_{I,J,K} is a manifold-with-corners, whose symplectic reduction is a Liouville sector-with-sectorial-corners.
Definition 4.4 ([GPS2, Definition 12.15]). A sectorial cover X = ⋃_{i=1}^n X_i is Weinstein if the convex completions of all of the symplectic reductions of the strata (4.2) are Weinstein up to deformation.
Theorem 4.5 ([GPS2, Theorem 1.35]). For any Weinstein sectorial cover X = ⋃_{i=1}^n X_i of a Weinstein sector X, the induced functor is a pretriangulated equivalence. Moreover, the same holds for any mostly Legendrian stop r ⊂ (∂_∞ X)° which is disjoint from each ∂X_i and from a neighborhood of ∂X.
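The induced functor in the statement was elided in extraction; [GPS2, Theorem 1.35] is usually stated as the canonical comparison functor from the homotopy colimit over nonempty intersections, which the following display records as a presumed reconstruction:

```latex
% Sectorial descent: wrapped Fukaya categories glue over a sectorial cover.
\[
\operatorname*{hocolim}_{\emptyset \neq I \subseteq \{1,\dots,n\}}
\mathcal{W}\Bigl(\bigcap_{i \in I} X_i\Bigr)
\;\longrightarrow\;
\mathcal{W}(X)
\]
```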
4.2. Construction. We construct a Weinstein sectorial cover of W(Φ) as the inverse image under π̄ of a certain cover of Φ. For expository reasons, we will assume that Φ is closed. Since each stratum is closed, one can take its barycenter. Connect by an edge inside Φ the barycenter of each top-dimensional stratum with those of the adjacent strata of one lower dimension. Repeat the process inductively until reaching the barycenters of 1-strata. There might be some lower-dimensional closed strata which are not adjacent to higher-dimensional strata. Beginning the process also from such strata, we obtain a partition of Φ defined by the additional edges connecting the barycenters, together with the barycenters of those 1-strata which are not adjacent to higher-dimensional strata. The partition divides Φ into some number of connected components.
For each 0-stratum P, there is a unique connected component containing P. Take a suitable slight enlargement of this component, rounding the corners of its closure. We denote by W(P) and ∂_2 W(P) the inverse images under π̄ of the slight enlargement and of its additional boundary, respectively. Perturbing the additional boundaries if necessary, we may assume that ∂_2 W(P), ∂_2 W(P′) are disjoint or intersect transversely for any pair P, P′ of 0-strata. Perturb each edge connecting the barycenter of a top-dimensional stratum with that of a stratum of one lower dimension so that it becomes locally perpendicular to the lower-dimensional strata in their neighborhoods. Perform the same perturbation to each edge starting from the barycenter of a lower-dimensional closed stratum which is not adjacent to higher-dimensional strata. We use the same symbols W(P), ∂_2 W(P) to denote the results.
Remark 4.6. Our construction works even when Φ is not closed. Then for each stratum we will take the barycenter of its closure, which makes sense due to the condition (iii). Moreover, for instance, we will have connected components not adjacent to vertices, which only causes irrelevant complication in labeling.
Proof. We adapt the idea in [GPS2, Example 1.33]. Recall that we are assuming Φ to be closed. For simplicity, we will further assume that there are no lower-dimensional strata which are not adjacent to higher-dimensional strata. Since by construction π̄(∂_2 W(P)) is away from P, we may assume that ∂_2 W(P) is disjoint from T*M_P ⊂ W(P) ⊂ W(Φ). Let I be a 1-stratum which intersects π̄(∂_2 W(P)). Due to the above perturbation, locally near I the image π̄(∂_2 W(P)) becomes perpendicular to I. Since π̄ kills the base direction of T*M_I and the fiber direction of T*I°, over a neighborhood of I the boundary ∂_2 W(P) is isomorphic to a product of T*M_I and a part of ∂T*I°. Let F be a 2-stratum adjacent to I which intersects π̄(∂_2 W(P)). Due to the above perturbation, locally near F the image π̄(∂_2 W(P)) becomes perpendicular to F. Since π̄ kills the base direction of T*M_F and the fiber direction of T*F°, over a neighborhood of F the boundary ∂_2 W(P) is isomorphic to a product of T*M_F and the part of ∂T*F° defined by a curve on F. By construction the handle attachment W(Φ_1) #_{L_2} (T*M_F × T*F°) glues the above product to the component of T*M_I × ∂T*I° in the previous paragraph.
Inductively, let S^(k) be a k-stratum adjacent to a (k−1)-stratum S^(k−1) which intersects π̄(∂_2 W(P)). Due to the above perturbation, locally near S^(k) the image π̄(∂_2 W(P)) becomes perpendicular to S^(k). Since π̄ kills the base direction of T*M_{S^(k)} and the fiber direction of T*S^(k)°, over a neighborhood of S^(k) the boundary ∂_2 W(P) is isomorphic to a product of T*M_{S^(k)} and a part of ∂T*S^(k)° defined by a hypersurface of S^(k). The handle attachment W(Φ_{k−1}) #_{L_k} (T*M_{S^(k)} × T*S^(k)°) glues the above product to the gluing of the products in the previous steps. Rewinding the process, one sees that ∂_2 W(P) is obtained by extending, via the handle attachments, the possibly disjoint union of the inverse images under π̄ of hypersurfaces in top-dimensional strata adjacent to P. Then ∂_2 W(P) is cylindrical, as so are the newly attached pieces. Consider the function on the possibly disjoint union given by the cotangent scaling. For each handle T*M_{S^(k)} × T*S^(k)°, it canonically descends to a function on the intersection given by the cotangent scaling of T*S^(k)°. Define a function on ∂_2 W(P) as the gluing of such functions. Clearly, the function is linear near infinity. Moreover, its Hamiltonian vector field is outward pointing along ∂_2 W(P), as the characteristic foliation of ∂T*S^(k)° is tangent to the cotangent scaling. Since W(Φ) is a Weinstein sector before being separated by ∂_2 W(P), there are functions on the other boundaries of W(P) with the same property. Hence the Weinstein manifold-with-boundary W(P) satisfies the condition (ii) in Definition 2.1 to be a Liouville sector. □

Remark 4.8. Suppose that there is a lower-dimensional closed stratum which is not adjacent to higher-dimensional strata. Let S^(k) be such a stratum. Then for each 0-stratum P adjacent to it, ∂_2 W(P) gets contributions from the extensions via the handle attachments of the inverse images under π̄ of hypersurfaces in S^(k).
Remark 4.9. Suppose that Φ is not closed and, for instance, there is a half-open 1-stratum I adjacent to P. Recall that in this case the handle attachment W(Φ_1) #_{L_1} (T*M_I × T*I°) is not precisely a Liouville manifold-with-boundary, as one needs to modify the canonical Liouville form of T*I° near ∂T*I°, making it convex. This modification, which would not be necessary if I were closed, yields a stop on the newly formed imaginary boundary of the handle attachment.
Proof. We adapt the idea in [GPS2, Example 1.34]. Again, assume that Φ is closed and that there are no lower-dimensional strata which are not adjacent to higher-dimensional strata. Finiteness of the cover W(Φ) = ⋃_P W(P) follows from the condition (i). By construction each W(P) has no corners and ∂_2 W(P) is disjoint from ∂W(Φ). Moreover, ∂_2 W(P), ∂_2 W(P′) intersect transversely for any pair P, P′ of 0-strata unless they are disjoint. Note that π̄(∂_2 W(P)), π̄(∂_2 W(P′)) intersect only away from lower-dimensional strata. Hence the intersection is given by that of the restrictions of the cotangent bundle of a top-dimensional stratum to π̄(∂_2 W(P)), π̄(∂_2 W(P′)). Then by Lemma 4.7 the cover is sectorial. It remains to show that the sectorial cover is Weinstein. Since π̄(∂_2 W(P)), π̄(∂_2 W(P′)) intersect only away from lower-dimensional strata, any stratum W(Φ)_{I,J,K} from (4.2) is the cotangent bundle of a manifold-with-corners away from the inverse images of lower-dimensional strata. Extending it via the handle attachments as in the proof of Lemma 4.7, one sees that the symplectic reductions of the strata of the sectorial cover W(Φ) = ⋃_P W(P) are simply the inverse images under π̄ of the strata of the cover Φ = ⋃_P π̄(W(P)). In particular, they become Weinstein after deformation.

Proof. Over a sufficiently small open neighborhood of each intersection of the π̄(W(P)), the inverse image under π̄, which is a stopped Weinstein sector, has the inverse image under π as its relative skeleton. Hence by [GPS3, Theorem 1.4] the sections over such open neighborhoods are canonically equivalent. Moreover, the corestriction functors between sections of the left-hand side and the pushforward functors between sections of the right-hand side intertwine these canonical equivalences.
□ On the B-side, the composition of Coh^! with T from [GS1, Section 3] gives a constructible cosheaf of categories over Φ. Recall that T(S^(k)) is the toric variety associated with the fan Σ_{S^(k)} ⊂ M_{S^(k),R}, where (Σ_{S^(k)}, M_{S^(k)}) is the image of S^(k) under the functor Exit(Φ)^op → (Fan↠)^op equipped with Φ. Recall also that the values of Coh^! ∘ T are the module categories over the dg categories of coherent sheaves on toric varieties and that the images of morphisms are the pushforwards along closed immersions.

Lemma 4.13 ([GS1, Proposition 3.10]). The colimit exists as an algebraic space.
Lemma 4.14 ([GS1, Proposition 3.14]). The canonical functor is an equivalence. Now, it is straightforward to describe HMS in terms of cosheaves of categories over Φ. For a hypersurface H ⊂ M∨_C ≅ T*T^{d+1} ≅ (C*)^{d+1} satisfying some assumptions, the toric boundary divisor ∂T_Σ gives a mirror [GS2]. Here, T_Σ is the associated toric stack with a smooth quasiprojective stacky fan Σ ⊂ M_R ≅ R^{d+1}. Recall that Σ is determined by the defining polynomial of H. Under the assumptions, its stacky primitives span a convex lattice polytope ∆∨ containing the origin, and Σ defines an adapted star-shaped triangulation T of ∆∨. Below, we review the proof of the following statement.
(5.1) Due to the assumptions, one can easily construct a global skeleton Core(H) of H. Recall that we restrict to H the canonical Weinstein structure on M∨_C ≅ T*T^{d+1}. In the sequel, for simplicity we will assume that Σ is an ordinary simplicial fan. When Σ is a smooth quasiprojective stacky fan, we will replace the A-model with its finite cover and the B-model with the associated toric stack. The skeleton Core(H) can be obtained by gluing skeleta of tailored pants P_d computed by Nadler [Nad], after they are transported to the ones in the pants decomposition of H. As explained in [Nad], we may identify the skeleton Core(P_d) with the imaginary boundary of the FLTZ skeleton associated with the toric variety A^{d+1}. Hence we may identify Core(H) with ∂_∞ L(Σ) for the FLTZ skeleton L(Σ) associated with T_Σ. Consider the intersection of Σ with the d-sphere. Then Φ_Σ = Σ ∩ S^d ⊂ S^d carries a canonical fanifold structure. For instance, any vertex P is the intersection of a ray ρ_P ∈ Σ(1) and S^d. To it one associates the pair (Σ_P, M_P) = (Σ/ρ_P, M/⟨ρ_P⟩). Clearly, Φ_Σ is closed and there are no lower-dimensional strata which are not adjacent to higher-dimensional strata.

Proof. By Lemma 4.13 we have T(Φ_Σ) = colim_{Exit(Φ_Σ)^op} T(S^(k)). As one associates the pair (Σ_P, M_P) = (Σ/ρ_P, M/⟨ρ_P⟩) to each vertex P ∈ Φ_Σ, the toric variety T(P) coincides with O(ρ_P). Let σ_{S^(k)} ∈ Σ_P be the cone corresponding to an exit path in Φ_Σ from P to S^(k). Then by definition of fanifolds T(S^(k)) is the toric variety associated with the fan Σ_P/σ_{S^(k)} ⊂ M_R/⟨σ_{S^(k)}⟩, which in turn is the toric orbit closure corresponding to the inverse image σ̃_{S^(k)} of σ_{S^(k)} under the quotient Σ → Σ_P. When S^(k) runs through the strata of Φ_Σ, the inverse images σ̃_{S^(k)}

Corollary 1.3 (Theorem 4.16). There is an isomorphism π̄_* Fuk^* ≅ Coh^! ∘ T of cosheaves of categories over Φ whose global section yields an equivalence Fuk(W(Φ)) ≃ Coh(T(Φ)).
Lemma 2.14 ([GPS1, Proposition 2.25]). Let X be a Liouville sector. Equip C with the standard symplectic form ω_C = dx ∧ dy. Let λ^α_C be the Liouville form associated with the Liouville vector field Z^α_C = (1 − α)x∂_x + αy∂_y. Then any α-defining function I : Nbd^Z_X(∂X) → R extends to a unique identification. The diagrams commute for any stratum S′ of the induced stratification on Nbd(S), where the left vertical arrow is the quotient by the span of S′. The right vertical arrow corresponds to the map M_S → M_{S′} on lattices.

Remark 3.4. One can identify the exit path category Exit(Σ) with Fan↠_{Σ/} as a poset via σ → [Σ → Σ/σ].
□

4.3. Sectorial descent and HMS over fanifolds. Our Weinstein sectorial cover is compatible with HMS established by Gammage-Shende in [GS1]. On the A-side, there are three isomorphic constructible cosheaves of categories over Φ. First, consider the functor fsh^* : (Fan↠)^op → ∗∗DG, Σ → Sh_{L(Σ)}(T_Σ), which is obtained from fsh : Fan↠ → ∗DG∗ in [GS1, Section 4.5] by taking adjoints of the images of the morphisms. Here, ∗∗DG denotes the category of cocomplete dg categories and functors which preserve colimits and compact objects, while ∗DG∗ denotes the category of cocomplete dg categories and functors which preserve limits and colimits. Note that we have ∗DG∗ = (∗∗DG)^op. We use the same symbol fsh^* to denote the composition of Exit(Φ)^op → (Fan↠)^op, equipped with Φ, with the functor to ∗∗DG. Second, consider the functor π_* µsh_{L(Φ)}^* : Exit(Φ)^op → ∗∗DG which is obtained from π_* µsh_{L(Φ)} : Exit(Φ) → ∗DG∗ in [GS1, Section 4.5] by taking adjoints of the images of the morphisms. By definition both fsh^* and π_* µsh_{L(Φ)}^* are constructible cosheaves of categories over Φ.
Lemma 4.11 (cf. [GS1, Proposition 4.34]). There is an isomorphism fsh^* ≅ π_* µsh_{L(Φ)}^*.

Proof. The claim immediately follows from [GS1, Proposition 4.34]. □

Third, let UI(Φ) be the category of unions of intersections of the W(P) in the Weinstein sectorial cover W(Φ) = ⋃_P W(P) from Lemma 4.10. Morphisms are given by inclusions of Weinstein sectors. Introduce on UI(Φ) a topology generated by the maximal proper Weinstein subsectors W(P) of W(Φ) in the cover. Consider the functor Fuk^* : UI(Φ) → ∗∗DG which sends each Weinstein sector to the category of modules over its wrapped Fukaya category. The images of morphisms are induced by the pushforward functors from [GPS1] along the inclusions of Weinstein sectors. Due to the covariant functoriality and sectorial descent of wrapped Fukaya categories, Fuk^* gives a cosheaf of categories over UI(Φ). Note that Fuk^* preserves compact objects, as it is defined before taking module categories. Since π restricts to π̄ and W(Φ) = ⋃_P W(P) is a lift of an open cover of L(Φ), the pushforward π̄_* Fuk^* : Exit(Φ)^op → ∗∗DG gives a constructible cosheaf of categories over Φ.
Lemma 5.5. There are open neighborhoods of Core(H) in H and of L(Φ_Σ) in W(Φ_Σ) which are isomorphic as Weinstein manifolds.

Proof. For each top-dimensional cone σ ∈ Σ^max, consider the subfanifold Φ_σ = σ ∩ S^d of Φ_Σ. Unwinding the proof of [Nad, Theorem 5.13], one sees that the associated Weinstein sector W(Φ_σ) can be identified with the open neighborhood U_d there of Core(P_d) in P_d, after we canonically transform P_d to the pants whose tropicalization is dual to σ. Hence, up to suitable transformation, we may regard W(Φ_Σ) as the union of the U_d and L(Φ_Σ) as the union of the Core(P_d). Then, near their skeleta, H and W(Φ_Σ) are symplectomorphic by the same argument as in [Nad, Theorem 5.13]. Since over Φ_Σ the Weinstein handle attachments can be carried out in the ambient cotangent bundle M∨_C ≅ T*T^{d+1}, the Weinstein structure on W(Φ_Σ) coincides with the restriction of the canonical one on M∨_C. □

Lemma 5.6. The algebraic space T(Φ_Σ) coincides with ∂T_Σ.
The present paper deals with a performance assessment of the ERA5 wave dataset in an ocean basin where local wind waves superimpose on swell waves. The evaluation framework relies on observed wave data collected during a coastal experimental campaign carried out offshore of the southern Oman coast in the Western Arabian Sea. The applied procedure requires a detailed investigation of the observed waves and aims at classifying wave regimes: observed wave spectra have been split using a 2D partition scheme and wave characteristics have been evaluated for each wave component. Once the wave climate was defined, a detailed wave model assessment was performed. The results revealed that during the analyzed time span the ERA5 wave model overestimates the swell wave heights, whereas the wind-wave height prediction is highly influenced by the wave development conditions. The collected field dataset is also useful for a discussion of spectral wave characteristics during the monsoon and post-monsoon seasons in the examined region; the recorded wave data do not yet suffice to adequately describe wave fields generated by the interaction of monsoon and local winds.
Introduction
Numerical models provide continuous and reliable meteo-marine datasets in space and time in open oceans [1,2], commonly used to assess wind and wave climatology and to perform long-term analyses of climate change's effects upon the wave climate. Meteo-marine models are an essential element of coastal zone management in the early identification of critical areas [3,4], the analysis of evolutive trends [5] and, at a higher level, the definition of effective medium- and long-term strategies for environmental/territorial planning [6]. In the last few decades, several global-scale wave atlases have been proposed using wave datasets from global wave models [7][8][9], highlighting the presence of stormy areas (i.e., the Northwest Pacific, the Northwest Atlantic, the Southern Ocean and the Mediterranean Sea) and swell-dominated regions, termed "swell pools", located in the eastern tropical areas of the Pacific, the Atlantic and the Indian Ocean.
On the other hand, the performance of wind and wave forecasting models depends on several variables, among them the wave development conditions. Past studies analyzed wave datasets from the European Centre for Medium-Range Weather Forecasts (ECMWF) and reported underestimation in areas with intense cyclone activity and fetch-limited conditions, and overestimation in swell-dominated regions [2]. In general, wave data reliability falls below acceptable levels in coastal areas and enclosed basins with local wind influence and with orographic and bathymetric effects [10]. Indeed, in a semi-enclosed basin such as the Mediterranean Sea, where fetch-limited conditions are predominant, wave heights are often underestimated [11,12]. In a different context, such as the North Indian Ocean where swells are predominant, the comparison between observed and modeled wave heights revealed a wave height overestimation in nearshore waters due to swell presence, except during the Indian monsoon wind season, when a large underestimation can be detected [13].
Hence, wave observing programs that at first glance could appear out of date, unnecessarily elaborate or expensive still need to be carried out to perform numerical wave model calibration and develop data-driven models [14][15][16], to describe global wave climate [17,18] and to study local wave characteristics [19][20][21][22]. Of course, all available wave data sources suffer from a variety of problems depending on the data acquisition system. Measured waves typically show a good accuracy, but are limited by the number of instruments and the duration of the measurement campaign (wave buoys), or can be very sparse in time and present reliability concerns in coastal zones (remote sensing techniques). Wave data from voluntary observing ships are also available, but these datasets are sparse, discontinuous and clustered along the busiest sea routes. Modeled and observed data are complementary, and their combined use ensures the best possible results in wave analysis [12].
The aim of the present work is to assess the performance of the ECMWF ERA5 Re-Analysis dataset in a coastal region where wind and swell waves are both present. ERA5 is the latest global atmospheric reanalysis, and in the next few years the dataset will cover the period from 1950 to near real time [23]. ERA5 wave data have been compared to an observed wave dataset collected within the frame of a coastal experimental campaign carried out offshore of the Oman coast in the western Arabian Sea, where the wave climate is swell-dominated (Figure 1). This paper is structured as follows: after the introduction of the study area, a detailed description of the examined dataset and the methodologies used is presented in Section 2. Section 3 presents the results of the wave spectral analysis and the ERA5 wave model comparison, and conclusions are given in Section 4.
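Model-observation comparisons of the kind announced here are typically summarized with a few scalar error statistics. The paper does not list its formulas at this point, so the sketch below uses standard definitions (bias, RMSE, scatter index, Pearson correlation); the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def wave_model_stats(obs, mod):
    """Standard error statistics for modeled vs. observed significant wave height.

    obs, mod : 1D arrays of collocated observed and modeled values (e.g. Hs in m).
    Returns bias, RMSE, scatter index (RMSE normalized by the observed mean),
    and the Pearson correlation coefficient.
    """
    obs = np.asarray(obs, dtype=float)
    mod = np.asarray(mod, dtype=float)
    diff = mod - obs
    bias = diff.mean()                 # positive bias -> model overestimates
    rmse = np.sqrt(np.mean(diff ** 2))
    si = rmse / obs.mean()             # scatter index
    r = np.corrcoef(obs, mod)[0, 1]
    return bias, rmse, si, r

# Toy example: a model that systematically overestimates by 0.2 m.
obs = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
mod = obs + 0.2
bias, rmse, si, r = wave_model_stats(obs, mod)
```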
Case Study and Data Sources
The Arabian Sea is connected with the Indian Ocean. It hosts some of the most important navigation routes in the world and is bounded in the west by Somalia and Arabia and in the east by India (Figure 2, left image). The wind field offshore the southern Oman coast is quite complex: high-resolution modeled wind fields also revealed the presence of the Oman coastal low-level jet, clearly distinguished from the large-scale South Asia monsoon wind and the Somali jet [24]. Although Oman is one of the most arid countries in the world, the southern Oman coast (Dhofar region) enjoys cooler temperatures and misty rain from June to September (the so-called Khareef season) due to the SW monsoon. The southern Oman climate is strongly influenced by the Indian monsoon, and three different anemometric seasons can be detected: pre-monsoon (from February to May), southwest (SW) monsoon (from June to September) and post-monsoon or northeast (NE) monsoon (from October to January) [25].
Additionally, the wave climate in the region is influenced by the SW monsoon, with wave heights gradually increasing from June, reaching a peak during July and gradually decreasing until the end of the Khareef [26]. Infra-gravity waves that cause a large shipping surge and reduce cargo shipments during the summer season have been widely reported [26,27]. Very few wave data are available in the western Arabian Sea, and general features of weather and wave conditions can be inferred from the wave analysis performed on wave data recorded offshore the west coast of India. To date, several buoys moored along the eastern Arabian Sea provide useful real-time information about weather and wave climate across that region. During the SW monsoon season, the sea state across the North Indian Ocean is dominated by swells coming from SW directions, and an increase in the significant height can be observed [28]. Wave spectra in shallow water, recorded at two different sites from June to October, are single-peaked, while in the rest of the year double-peaked spectra are the most frequent [29,30]. Although both sites are subject to open sea conditions, the percentage of single-peaked spectra was greater in the south compared to the northern site due to local wind. Throughout the year, at the southernmost site, the double-peaked spectra were dominated by swell waves. In all years, from June to August, the wave spectrum is narrow-banded and the energy density is concentrated predominantly between 0.07 and 0.12 Hz. During September, the wave energy density is predominantly between 0.09 and 0.14 Hz (11 s and 7 s).
During the winter season (post-monsoon and the beginning of the pre-monsoon period), Shamal events occur (wind coming from the north) [25,29]. During these events, there is a sharp increase in wave height, which may exceed 3.5 m in the northwestern Arabian Sea and 1.0-2.0 m along the west coast of India. Although south swells from the Indian Ocean are always present along the west coast of India, Shamal events are the main contributor to the sea states [31].
Wave analysis during the non-monsoon season revealed a daily variation of the wave parameters due to the coexistence of waves coming from different directions with a predominance of mature swell [31,32]. In the last few years, there has been rapid development along the southern coast of Oman. In particular, the traffic in the port of Salalah, a large container transshipment terminal, has rapidly increased, thanks to continuous improvement of its infrastructure. The absence of a meteomarine monitoring network and the lack of regular and long-term observations is still an important issue in coastal protection and harbor planning, especially in such a complex environment. In order to overcome the lack of observed wave data, and from the perspective of the development of the harbor, an experimental campaign was carried out in the second half of 2013 to collect detailed information about wave climate offshore the harbor facilities, water levels and sea currents. A Datawell Directional Waverider MKIII buoy was moored offshore Port Salalah ( Figure 2) from early August 2013 through late December 2013, partially covering the southwest monsoon season in 2013. The geographical coordinates of the buoy are 16 • 56'6.13" N and 54 • 2'36.21" E with a local depth of 30 m (Figure 2-right image). The sensor consists of a Datawell stabilized platform sensor, performing heave and direct pitch and roll measurements combined with a 3D fluxgate compass and X/Y accelerometers at 1.28 Hz. The buoy measures wave height for wave periods of 1.6 to 30 s with an accuracy equal to 0.5% of measured value.
The wave buoy installation and data management were carried out by a research team that included the unit of the Laboratory of Coastal Engineering (LIC) of the Polytechnic University of Bari. The collected dataset provides local information on the wave climate and allows a comparison with the wave climate of the eastern Arabian Sea, which also experiences the SW monsoon. The observed dataset covers a five-month period and therefore does not suffice to define the wave climate of the region. Nevertheless, the data are useful for a discussion of spectral wave characteristics during the monsoon and post-monsoon periods and for numerical wave data validation purposes (i.e., of data extracted from the ERA5 dataset). Indeed, although the predominant swells from the south Indian Ocean may show similarities with the west Indian coast, the wave field offshore the Dhofar region is influenced by local wind patterns.
Given the lack of previous studies reporting sea state conditions along the examined coast and in the wider region, the ERA5 Re-Analysis dataset from ECMWF has been used in order to characterize the wind and wave patterns offshore Salalah and across the Arabian Sea. The ERA5 wave dataset is produced using 4D-Var data assimilation in ECMWF's Integrated Forecast System (IFS) Cycle 41r2 [33,34]. Compared to ERA-Interim wave data [35], the ERA5 dataset presents several improvements, including much higher spatial and temporal resolution, information on variation in quality over space and time, a much improved troposphere and an improved representation of tropical cyclones [34]. The ERA5 dataset also provides uncertainty information obtained from data assimilation and an enhanced number of output parameters. The horizontal resolution of the wave model in ERA5 is 0.36 degrees; the wave dataset derived from the global re-analysis consists of instantaneous forecast values computed every hour with a horizontal resolution of approximately 0.5 degrees, whereas the ERA-I wave dataset provides 6-hourly data at a 1-degree spatial resolution. Significant wave height of combined wind waves and swell (SWH), mean wave period (MWP), mean wave direction (MWD), significant height of wind waves (SHWW) and significant wave height of total swell (SHTS) have been extracted at the nearest grid point offshore the area of interest (16.5 • N, 54 • E), about 50 km offshore the wave buoy. The wave parameters are listed in the Appendix A.
Spectral Analysis and Partition Schemes
Half-hourly raw elevation data (heave, pitch and roll) have been analyzed, and directional and frequency spectra have been obtained. Frequency spectra have been computed by applying a standard Fast Fourier Transform (FFT) to the heave time series. Significant wave height (H m0 ) and mean wave period (T m01 ) have been computed from the 1D spectral moments, and the peak period (T p ) as the inverse of the spectral peak frequency. Directional wave spectra have been computed through the Maximum Likelihood Method (MLM) [36] with a resolution of 0.005 Hz in frequency and 3.6 • in direction. Mean wave direction (MWD) and directional spread (σ), which can be seen as the mean and standard deviation of the directional distribution function, respectively, have been estimated by using the method proposed by [37].
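The frequency-domain estimates described above (H m0 , T m01 and T p from the moments of the one-sided heave spectrum) can be sketched in a few lines of Python. This is a minimal illustration, not the buoy manufacturer's processing chain; the function name and the synthetic test signal are ours, and the 1.28 Hz sampling rate matches the sensor rate quoted in the text.

```python
import numpy as np

def spectral_wave_parameters(eta, fs):
    """Estimate H_m0, T_m01 and T_p from a surface-elevation record.

    eta : heave time series [m]; fs : sampling frequency [Hz].
    Uses the one-sided variance density spectrum S(f) [m^2/Hz] and the
    spectral moments m_n = sum(f^n * S(f) * df).
    """
    eta = np.asarray(eta, dtype=float) - np.mean(eta)
    n = len(eta)
    spec = np.fft.rfft(eta)
    df = fs / n
    # one-sided spectrum: factor 2 accounts for the negative frequencies
    S = (np.abs(spec) ** 2) * 2.0 / (fs * n)
    S[0] = 0.0  # drop the mean (DC) component
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    m0 = np.sum(S * df)
    m1 = np.sum(f * S * df)
    Hm0 = 4.0 * np.sqrt(m0)       # significant wave height
    Tm01 = m0 / m1                # mean wave period
    Tp = 1.0 / f[np.argmax(S)]    # peak period
    return Hm0, Tm01, Tp
```

For a monochromatic wave of amplitude a, the variance is a²/2, so H m0 = 4·a/√2, which provides a simple consistency check of the normalization.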
In addition to the frequency domain analysis, a standard zero-up-crossing analysis has also been performed in the time domain, defining individual waves and the wave height and period distributions [38].
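A zero-up-crossing analysis of this kind can be sketched as follows: the de-meaned record is split at points where the elevation crosses zero upward, and each segment defines one wave. The function names and the H 1/3 helper are illustrative, assuming the standard definition of H 1/3 as the mean of the highest one-third of individual wave heights.

```python
import numpy as np

def zero_upcross_waves(eta, fs):
    """Split a de-meaned elevation record into individual waves at
    zero up-crossings; return per-wave heights [m] and periods [s]."""
    eta = np.asarray(eta, dtype=float) - np.mean(eta)
    # indices i where eta[i] < 0 <= eta[i+1]  (zero up-crossings)
    up = np.where((eta[:-1] < 0) & (eta[1:] >= 0))[0]
    heights, periods = [], []
    for i0, i1 in zip(up[:-1], up[1:]):
        segment = eta[i0:i1 + 1]
        heights.append(segment.max() - segment.min())
        periods.append((i1 - i0) / fs)
    return np.array(heights), np.array(periods)

def significant_wave_height(heights):
    """H_1/3: mean of the highest one-third of individual wave heights."""
    h = np.sort(heights)[::-1]
    top = max(1, len(h) // 3)
    return h[:top].mean()
```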
Several other spectral parameters have been evaluated to describe the shape and the type of recorded spectra. The spectral parameters are listed in the Appendix A. The spectral narrowness parameter ν [39] and the spectral width parameter ε [40] have been estimated in order to measure the width of the spectral band. The dimensionless parameters ν and ε can vary between 0 (narrow band) and 1 (broadband). The spectral peakedness parameter Q p was proposed by [41] to describe the peakedness of the spectral peak and the wave groupiness. According to [42], Q p has been recognized to be an appropriate parameter to describe the spectral distribution, since, unlike ν and ε, it does not depend on the cut-off frequency. The significant spectral steepness S p is defined as the ratio between H m0 and the deep-water wavelength corresponding to the T m02 wave period. According to [43], waves can be classified depending on significant steepness values as wind waves (0.08-0.025), young swells (0.025-0.01), mature swells (0.01-0.004) and old swells (<0.004). So far, various techniques have been developed for spectral partitioning. They can be classified into 1D and 2D methods, depending on whether or not they use directional spectral information [44]. Directional spectra provide additional information about the different wave systems that compose irregular sea states, and 2D partition methods seem to be more reliable in wind wave and swell detection [45]. In the present work, swell and wind wave separation and detection have been performed exploiting image processing techniques to deal with 2D spectrum information [45]. Directional spectra have been smoothed to remove noise using a 3 × 3 convolution filter and then a watershed algorithm has been applied to separate the existing wave systems. The smoothing and spectral segmentation is an iterative process that continues until the number of spectral partitions reaches the maximum allowed number (usually 4-6).
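The shape parameters above can all be evaluated from the moments of the discretized 1D spectrum. The sketch below uses the standard moment-based definitions (ν² = m0·m2/m1² − 1, ε² = 1 − m2²/(m0·m4), Q p = (2/m0²)·∫f·S²df, S p = H m0/L 0 with L 0 = g·T m02²/2π); the function names and the synthetic Gaussian spectra in the test are illustrative, not taken from the paper.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def spectral_shape_parameters(f, S):
    """Spectral narrowness nu, width epsilon, peakedness Q_p and
    significant steepness S_p from a discretized 1D spectrum S(f)."""
    df = f[1] - f[0]
    m = [np.sum(f ** n * S * df) for n in range(5)]  # moments m0..m4
    nu = np.sqrt(m[0] * m[2] / m[1] ** 2 - 1.0)
    eps = np.sqrt(1.0 - m[2] ** 2 / (m[0] * m[4]))
    Qp = 2.0 / m[0] ** 2 * np.sum(f * S ** 2 * df)
    Hm0 = 4.0 * np.sqrt(m[0])
    Tm02 = np.sqrt(m[0] / m[2])
    L0 = G * Tm02 ** 2 / (2.0 * np.pi)  # deep-water wavelength
    return nu, eps, Qp, Hm0 / L0

def classify_by_steepness(Sp):
    """Wave-type classification using the thresholds quoted in the text."""
    if Sp >= 0.025:
        return "wind wave"
    if Sp >= 0.01:
        return "young swell"
    if Sp >= 0.004:
        return "mature swell"
    return "old swell"
```

A narrow-banded spectrum should yield a smaller ν and a larger Q p than a broadband one, which is the behavior the parameters are meant to capture.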
The detected wave systems have then been classified as wind seas or swells [46] by comparing the 1D spectrum energy value at the peak frequency of a wave system, S(f p ), to the energy spectrum estimated according to the Pierson-Moskowitz spectrum [47,48] at the same peak frequency, S PM (f p ). If the ratio γ * = S(f p )/S PM (f p ) is greater than 1, the wave system is classified as a wind wave; otherwise, the wave system is a swell. Eventually, wave characteristics have been calculated for each wave system. The swell and wind wave series resulting from watershed partitioning have been compared to results obtained with the wave age method [49]. The wave age method, unlike the method proposed by [45], requires wind information to identify different wave regions in a directional spectrum, limiting its application; indeed, synchronous wave and wind data are not usually available. Moreover, as a further check of the spectral partitioning, the separation frequency of wind waves from swell in 1D spectra has also been calculated using the spectrum integration method [50]. The latter approach is an improvement of the wave steepness method [51], which has been proven to overestimate wind waves under light wind when a significant swell is present [45].
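The γ* classification test can be sketched as follows, assuming the common Pierson-Moskowitz form S PM (f) = α·g²·(2π)⁻⁴·f⁻⁵·exp(−5/4·(f p /f)⁴) with Phillips constant α = 0.0081, so that at the partition's own peak frequency S PM (f p ) = α·g²·(2π)⁻⁴·f p ⁻⁵·e⁻⁵ᐟ⁴. The function names and the threshold default are illustrative; the 1.2 threshold is the value adopted later in the text.

```python
import numpy as np

G = 9.81
ALPHA = 0.0081  # Phillips constant (assumed PM parameterization)

def pm_peak_density(fp):
    """Pierson-Moskowitz energy density evaluated at its own peak
    frequency fp: alpha * g^2 * (2 pi)^-4 * fp^-5 * exp(-5/4)."""
    return ALPHA * G ** 2 * (2.0 * np.pi) ** -4 * fp ** -5 * np.exp(-1.25)

def classify_partition(S_at_fp, fp, gamma_threshold=1.0):
    """Wind sea if the spectral level at the partition peak exceeds the
    fully developed PM level (gamma* > threshold), swell otherwise."""
    gamma_star = S_at_fp / pm_peak_density(fp)
    label = "wind sea" if gamma_star > gamma_threshold else "swell"
    return label, gamma_star
```

Raising the threshold from 1.0 to 1.2, as done in the data analysis below, moves borderline partitions from the wind-sea class to the swell class.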
Wind wave and swell wave characteristics have been estimated from partitioned 1D spectra and temporal variability have been analyzed.
Performance Evaluation of ERA-I and ERA5 Wave Datasets
The ability of the ERA-I and ERA5 wave models to simulate significant wave heights and mean wave periods has been assessed. The assessment is useful to gain insight into the reliability of ERA-I and ERA5 data in regions where both swell and wind waves are present. The validation framework relies on the use of goodness-of-fit parameters and their statistical significance [52]. A model performance assessment based on a single indicator could lead to an incorrect verification of the model, because all goodness-of-fit parameters are affected by limitations [53,54].
The coefficient of determination R 2 , defined as the square of the correlation coefficient, is the most widely used parameter to assess the predictive accuracy of models. Nevertheless, it is insensitive to additive and proportional differences between the model simulations and observations. The Nash-Sutcliffe coefficient of efficiency (NSE), defined as unity minus the ratio of the mean square error to the variance of the observed data, can be seen as an improvement of R 2 . Indeed, it is sensitive to differences in the observed and simulated means and variances. However, both coefficients are extremely sensitive to outliers. The coefficient of determination ranges from 0.0 to 1.0, whereas NSE ranges from minus infinity to 1.0. NSE values lower than zero indicate that the observed mean is a better predictor than the model. The higher the values of the above-mentioned coefficients, the better the agreement. Although the interpretation of NSE values depends on the model application and can be subjective, ref. [52] proposed four model performance classes, rated as unsatisfactory (NSE ≤ 0.65), acceptable (NSE = 0.65-0.8), good (NSE = 0.8-0.9) and very good (NSE ≥ 0.9).
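The NSE definition and the four performance classes quoted above translate directly into code. A minimal sketch (function names are ours):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - MSE / variance of observations.
    Equals 1 for a perfect fit, 0 when the model is no better than
    the observed mean, and is negative when it is worse."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_rating(value):
    """Performance classes proposed by [52], as quoted in the text."""
    if value >= 0.9:
        return "very good"
    if value >= 0.8:
        return "good"
    if value >= 0.65:
        return "acceptable"
    return "unsatisfactory"
```

With these thresholds, the seasonal NSE values reported later (0.515 for the monsoon, 0.679 for the post-monsoon) fall in the "unsatisfactory" and "acceptable" classes, respectively.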
Circular statistical parameters (i.e., RMSE, bias) have been estimated for the observed and predicted wave directions. The relationship between the directions of modeled and observed waves has been analyzed using a circular correlation coefficient (ρ cc ), as proposed by [55] for two circular random variables. The value of ρ cc lies within the interval from −1.0 to 1.0, where zero indicates that there is no relationship between the variables and ±1 represents the strongest possible correlation. The goodness-of-fit parameters are listed in the Appendix A, where N is the number of observations, while O i and P i are the observed and simulated values, respectively.
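A circular correlation coefficient of this kind can be sketched as below, assuming the common sine-deviation form ρ = Σsin(α i − ᾱ)·sin(β i − β̄) / √(Σsin²(α i − ᾱ)·Σsin²(β i − β̄)) around the circular means; whether this is exactly the estimator of [55] is an assumption, and the function names are ours.

```python
import numpy as np

def circular_mean(angles_deg):
    """Circular mean direction [deg] via the mean resultant vector."""
    a = np.deg2rad(angles_deg)
    return np.rad2deg(np.arctan2(np.mean(np.sin(a)), np.mean(np.cos(a))))

def circular_correlation(alpha_deg, beta_deg):
    """Circular correlation between two directional samples [deg];
    ranges from -1 to 1, with 0 meaning no relationship."""
    a = np.deg2rad(alpha_deg) - np.deg2rad(circular_mean(alpha_deg))
    b = np.deg2rad(beta_deg) - np.deg2rad(circular_mean(beta_deg))
    num = np.sum(np.sin(a) * np.sin(b))
    den = np.sqrt(np.sum(np.sin(a) ** 2) * np.sum(np.sin(b) ** 2))
    return num / den
```

Unlike a linear correlation on raw degrees, this estimator is invariant to a constant directional offset and to the 0/360 wrap-around, which is what makes it suitable for comparing modeled and observed MWD.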
Observed Data Analysis
The procedure, illustrated in Figure 1, has been applied to the buoy observed dataset. After a preliminary processing of the raw data, a spectral partitioning was carried out using different methods in order to determine the best spectral splitting technique under such wave conditions. Then, the observed sea states were classified as swell and wind waves.
Wave Characteristics
Series of significant wave height (H m0 ), peak wave period (T p ), mean wave period (T m01 ), mean wave direction (MWD) and directional spread parameter (σ) have been obtained over 30-min intervals (Figure 3), and the distribution of the percentage frequency of wave direction and height has been analyzed for different months (Figure 4). The buoy was exposed to waves approaching from 55 • to 235 • . As highlighted by [56], the SW monsoon in 2013 presented some peculiarities: a very fast onset and a slower withdrawal phase that lasted from early September until the third week of October. This pattern can be detected in the recorded wave time series from offshore southern Oman. There is a clear distinction between the SW monsoon period (August), characterized by the highest waves coming almost exclusively from the south, and the post-monsoon period (from November to December), characterized by waves coming from east-southeast (ESE) and south-southeast (SSE). In that period, NE monsoon wind waves coexist with long-traveled southern swells, and these two wave systems result in a wave field with MWD tending to SE. The months of September and October 2013 can be seen as a transitional period with low waves coming from S. In August, about 60% of significant wave heights are between 2.25 m and 3.25 m, while the remaining 40% are within the range 1.75-2.25 m. In September, 92% of the waves come from the south, with waves from SSW constituting the remaining 8%. The wave height values in this month are always lower than 2.25 m. Significant wave height values (Figure 3a) decrease from the monsoon period to the post-monsoon period (usually below 1.5 m). The only significant sea storm in the post-monsoon period was generated by the 2013 Somalia cyclone that severely affected the Oman coast in November.
During the monsoon period, T m spans the range 5-8.5 s, while in the post-monsoon period T m has a wider range (3-10 s) (Figure 3b). Wave peak periods are shown in Figure 3c. Wave parameters estimated using the zero up-crossing analysis are listed in Table 1. The maximum value of H max in the monsoon period reaches about 6 m. It can be noted that the average values of H max and H 1/3 in the monsoon period are about 2.5 times larger than the values in the post-monsoon period. Regarding the wave period, during the monsoon season T m is higher, with a smaller variability range, than in the post-monsoon period.
Wave Spectra
Wave spectra and spectral shape parameters have been estimated and their temporal evolution during the recorded period has been analyzed.
The estimated spectral narrowness ν (Figure 5a) varies, during the monsoon season, between 0.35 and 0.65, while greater values have been detected during the post-monsoon period. Narrowness values are in agreement with values obtained for the monsoon period in the Arabian Sea along the western coast of India [29]. By contrast, spectral width ε values for narrow-banded spectra in the monsoon period are higher than the values found for broad-band spectra in the post-monsoon: this parameter cannot be used as an indicator of the spectral width (Figure 5b). The spectral peakedness parameter Q p [41], instead, seems to be useful for spectral narrowness estimation, with higher values during the monsoon period and lower values for broad-band spectra in the post-monsoon (Figure 5c). The significant spectral steepness S p decreases going from the monsoon period to the post-monsoon (Figure 5c). In the SW monsoon period, the waves can be classified as either young swells or wind waves, with steepness values between 0.01 and 0.03. In the post-monsoon period, the steepness values span a wider range, between 0.005 and 0.033, and swells, young swells and wind waves can be detected. Figure 6 shows the temporal variability of the normalized spectral energy density, obtained by comparing the energy density at a given frequency to the peak energy density. The highest values of normalized energy density are found at frequencies between 0.08 and 0.13 Hz (periods between 12.5 and 7.7 s) from August to September, whereas f p values lie between 0.05 and 0.3 Hz in the post-monsoon period (corresponding to periods between 3.5 s and 20 s), when wind waves coexist with long swell waves, as can be seen from Figure 7.

Figure 6. Temporal variability in the normalized spectral energy density during the observed period (computed from half-hourly data). Spectral energy values are normalized with respect to the peak energy density. Red dots are located at the peak frequency.
The average monthly directional spectra have been obtained by averaging the semi-hourly recorded spectra, with spectral energy values normalized with respect to the peak energy density. Looking at the temporal variability from August to December, it can be noted that the monthly averaged spectrum tends to flatten going from the monsoon period to the post-monsoon, evolving from an almost unimodal shape in August to spectra with very low energy density and multiple peaks during the post-monsoon (Figure 7). This is consistent with [30], who observed the same behavior along the western Indian coast. In August, the average monthly directional spectra have the highest spectral density, with a peak frequency of 0.1 Hz and a wave direction of about 180 • . In September, frequency bands are significantly wider than in August and the wave direction varies between 100 • and 300 • . In October, multipeaked spectra appear and three different wave systems can be detected: a long-period swell (frequency 0.06 Hz), an intermediate-period swell (frequency 0.1 Hz) and wind-generated waves (frequency 0.15 Hz). During November and December, the NE monsoon wind-generated waves (between NE and E) coexist with the distant swells from the south Indian Ocean (from the south). Moreover, in November, long-period swells coming from the south with a peak frequency of about 0.06 Hz can be detected. The monthly variability in the spectral energy distribution seems to be similar to that found along the western Indian coast [30].
Spectral Partitioning
Swell and wind wave separation and detection have been performed exploiting 2D spectrum information [45]. A sensitivity analysis has been carried out by changing the threshold value γ * . Wave spectra separation in the monsoon period is a complex issue because the observed waves often present two different wave systems (multiple swells, or swell and wind waves) with neighboring directions too close or even merged to form a single-peaked spectrum. A threshold value equal to 1.2 has been used in the following data analysis, because a lower threshold can lead to a wind wave height overestimation during weak wind conditions. A further verification of the 2D spectral partitioning performance was carried out using the wave age criterion, and the two methods yield comparable results in wave system partitioning and classification. Moreover, frequency wave spectra have been split according to [50]. Results from the 2D and 1D spectral partitioning have been compared, and the outcomes are often contrasting: applying the 2D spectrum segmentation, most of the waves are identified as swells, whereas using the spectrum integration method, waves are mostly classified as wind waves. Portilla et al. [45] already found that the steepness method overestimates wind wave heights, especially in growing wind wave conditions where a swell is also present. In such conditions, common in the examined ocean basin, the improved spectrum integration method does not seem to overcome this problem, and the analysis shows that the wind wave partition still includes a relevant swell contribution. Furthermore, according to the significant steepness values, the waves observed during the monsoon should be classified as young swells, and this finding is in contrast with the results from [50].
However, it is to be noted that swell and wind waves cannot be effectively separated in the wave spectra recorded in the monsoon period and, in many cases, the various wave systems are only recognizable by an analysis which also includes directional distribution.
The swell and wind wave time series highlight, during the monsoon season, the presence of a steady swell wave field coming from SSW, over which wind waves with almost the same direction are superimposed (Figure 8). In the post-monsoon period, storm events generated by the NE monsoon wind can be easily detected. Long-period swells with low wave heights (less than 1 m) and peak wave periods greater than 14 s have been observed in the post-monsoon period (2.7% of the observed waves). When long waves occur, wave spectra typically present two different swell systems: a primary swell peak at 0.06 Hz and a secondary swell peak at 0.1 Hz. Long waves along the eastern Arabian Sea are typical of the pre- and post-monsoon periods, while during the SW monsoon their occurrence frequency is low due to the strong winds that affect the Indian Ocean [57,58].
Comparison between Observed Data and ERA Wave Datasets
As a preliminary step, observed wave data have been compared to the ERA5 and ERA-I wave datasets in order to evaluate the wave model performance in the region. Since no comparison had ever been carried out on the ERA-I data offshore Oman, it seemed useful to investigate those data, even if they are outdated and ERA5 is the latest global wave model. The comparison has been carried out using a back-propagation approach in which waves recorded by the buoy near the coast are refracted back to the nearest ERA grid node (located about 50 km offshore the Oman coast). The wave height at the nearest ERA grid point has been estimated from the observed wave height by using the energy flux conservation principle and considering a simple bathymetric configuration with straight and parallel contours. The wave height at the ERA point (suffix "M") can be estimated from the buoy observation point (suffix "B") as

$$H_M = H_B \sqrt{\frac{C_{g,B}\,\cos\theta_B}{C_{g,M}\,\cos\theta_M}},$$

where H is the significant wave height, θ is the wave direction and C g is the group velocity.
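The back-refraction described above can be sketched numerically: the group velocities at the two depths follow from the linear dispersion relation, and the direction at the offshore point follows from Snell's law for straight, parallel contours. This is an illustrative implementation, not the authors' code; the fixed-point dispersion solver and the convention that θ is measured from the shore-normal are our assumptions.

```python
import numpy as np

G = 9.81

def wavenumber(T, h):
    """Solve the linear dispersion relation omega^2 = g k tanh(k h) for k
    by fixed-point iteration (adequate for intermediate/deep water)."""
    omega = 2.0 * np.pi / T
    k = omega ** 2 / G  # deep-water first guess
    for _ in range(100):
        k = omega ** 2 / (G * np.tanh(k * h))
    return k

def group_velocity(T, h):
    """Group velocity Cg = n*c and phase speed c at depth h."""
    k = wavenumber(T, h)
    c = 2.0 * np.pi / (T * k)
    n = 0.5 * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))
    return n * c, c

def back_refract(H_b, T, theta_b_deg, h_b, h_m):
    """Transfer a wave height from buoy depth h_b to model depth h_m
    via energy flux conservation (H^2 Cg cos(theta) = const) and
    Snell's law (sin(theta)/c = const); theta from shore-normal."""
    cg_b, c_b = group_velocity(T, h_b)
    cg_m, c_m = group_velocity(T, h_m)
    theta_b = np.deg2rad(theta_b_deg)
    theta_m = np.arcsin(np.clip(np.sin(theta_b) * c_m / c_b, -1.0, 1.0))
    H_m = H_b * np.sqrt(cg_b * np.cos(theta_b) / (cg_m * np.cos(theta_m)))
    return H_m, np.rad2deg(theta_m)
```

Going from 30 m depth toward deep water, the phase speed increases, so the back-refracted ray turns away from the shore-normal, as expected from Snell's law.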
A statistical comparison of observations and forecasts using the ERA5 and ERA-I wave models highlights that the ERA5 wave model has a better performance for wave height estimation (Figure 9). Mean error, standard deviation, root mean square error (RMSE), relative bias and coefficient of determination (R 2 ) have been estimated for the numerical datasets, and the results show an RMSE of 0.23 m for ERA5, lower than the RMSE estimated for the ERA-I data (0.26 m). Considering the better results of the ERA5 wave model, further analyses on the ERA5 wave data have been carried out, evaluating the model's performance. Table 2 reports all the statistical parameters estimated in the monsoon and post-monsoon seasons. During the SW-monsoon period, the model is rated as unsatisfactory (NSE = 0.515), with a wave height overestimation (RMSE = 0.32 m) and a quite low relative bias (0.08%). During the post-monsoon period, the fit results span from unsatisfactory to acceptable (NSE = 0.679), and the simulated wave heights present an overestimation (RMSE = 0.19 m) and a higher relative bias (0.185%).
As proposed by [13], the ERA5 mean wave period (MWP) has been compared to the energy wave period (T e ). The MWP time series obtained from the ERA5 dataset are rated unsatisfactory in the monsoon season (NSE = 0.178) and acceptable (NSE = 0.677) in the post-monsoon period.
A rather good agreement can be seen between ERA5 MWD and observed MWD, even if during the post-monsoon period ERA5 MWD values are more easterly than the measured MWD (as a negative bias is present).

Table 2. Seasonal variability of goodness-of-fit metrics used for comparison of the ERA5 data with the observed wave data.

In order to explain such seasonal differences in wave model performance, further analyses have been performed, evaluating the model's performance for swell and wind wave predictions separately. Figure 10 reports all the comparisons carried out: (a) a scatterplot of observed significant wave height of combined wind waves and swell versus ERA5 values; (b) a scatterplot of observed significant swell wave height versus ERA5 values; (c) a scatterplot of observed significant wind wave height versus ERA5 values. In all panels of Figure 10, red dots represent observed/predicted wave heights in the post-monsoon season (October, November and December). The ERA5 wave dataset shows two different performances in wave height prediction depending on the wave type: swell wave height is generally overestimated in both the monsoon and post-monsoon seasons, whereas wind wave height seems to be less accurately predicted in the monsoon season. Moreover, the performance of the model was analyzed according to the value of the wind wave-swell energy ratio SSER [59] (Figure 11). If SSER ≫ 1, the sea states are classified as wind waves; waves are identified as swells if SSER ≪ 1; for intermediate values, a mixed sea state is present. It can be easily observed that more than 80% of the modeled sea states can be classified as swell, and ERA5 SHTS values are usually greater than the observed swell heights in all sea state conditions (swell-dominated, wind wave-dominated and mixed sea states). In swell-dominated states, ERA5 SHWW values are underestimated, whereas in wind wave-dominated conditions they are usually overestimated.
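Since spectral energy scales with the square of the wave height, an SSER-style three-way classification can be sketched from the partitioned wind-wave and swell heights. The lo/hi thresholds below are illustrative assumptions, not values from the paper, and the function names are ours.

```python
def sser(h_wind, h_swell):
    """Wind-wave to swell energy ratio; energy scales with H^2."""
    return (h_wind ** 2) / (h_swell ** 2)

def classify_sea_state(h_wind, h_swell, lo=0.5, hi=2.0):
    """Illustrative classification: swell-dominated when wind-wave
    energy is much smaller than swell energy, wind-wave dominated in
    the opposite case, mixed in between. The lo/hi thresholds are
    assumptions, not values from the paper."""
    r = sser(h_wind, h_swell)
    if r >= hi:
        return "wind sea"
    if r <= lo:
        return "swell"
    return "mixed"
```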
As also reported in Figure 9, the ERA5 wave model fails in the prediction of wave heights particularly when a persistent swell condition occurs, as in the monsoon season.

Figure 11. Scatter plot of wind wave-swell energy ratio SSER values against the ratio between predicted and observed wave heights for the wind wave (square markers) and swell (circle markers) partitions.
Concluding Remarks
A correct evaluation of the performance of numerical wave models first requires an accurate definition of the local framework in which they are applied, in order to identify the sea state conditions in which the models provide reliable results.
The procedure outlined in this paper is intended to provide a general approach for wave field classification and wave model performance assessment in complex sea states when growing wind waves superimpose on a swell system.
The methodological approach has been applied to an observed wave dataset recorded in the Arabian Sea offshore the southern Oman coast in the second half of 2013, during the monsoon and post-monsoon seasons. The observed wave time series highlights a clear distinction between the SW monsoon and post-monsoon periods: in the first period the buoy recorded the highest waves, coming almost exclusively from the south, whereas in the following months waves were lower and came from ESE and SSE. Significant variability in the monthly averaged spectrum has been observed: spectra are single-peaked swells in August and September, whereas from October to December spectra are multipeaked, with wind-generated waves that coexist with the distant swells from the south Indian Ocean. In such wave conditions, it was found that the best results in wave spectra separation were provided by a 2D spectral partition. Swell and wind wave systems can be detected only if a directional spectral analysis is carried out.
Once the recorded wave data have been analyzed and the local sea wave climate has been defined, the observed data have been exploited to validate the ERA5 wave model performance in the region. The results of the wave model validation highlight that the wave model has two different performances depending on the sea condition. The model verification has been carried out for each season using a validation model that combines goodness-of-fit parameters and statistical significance instead of the widely used coefficient of determination alone. The validation model of [52] helps in overcoming problems in the interpretation of NSE values, which can often be subjective, by proposing four model performance classes. The analysis has shown that during the monsoon season the numerical dataset presents relevant issues in swell wave height and wave period reconstruction. During the post-monsoon period, the hindcast is also affected by biases and errors.
Furthermore, the comparison according to the wind wave-swell energy ratio (SSER) has shown an overestimation of swell wave heights for all sea-state conditions. An underestimation of wind wave heights in swell-dominated conditions is also observed. In wind wave-dominated conditions, both swell and wind wave heights are overestimated.
The total significant wave height overestimation is probably due to the swell wave height overestimation, as also reported for the ERA-I wave dataset by [13] in the North Indian Ocean and by [2] in swell-dominated basins. Despite the overall wave height overestimation, the highest waves seem to be underestimated, in accordance with [1], who found an underestimation of the upper-percentile wave heights forecasted in the ERA-I dataset. Some of the discrepancies in the wave data comparison may be related to the coarse resolution of the wind input and wave model grids, as suggested by [10], who highlighted the importance of local wind influence and bathymetry effects in the ERA-I wave model. The ERA5 global model provides wave hindcasts on a low-resolution grid, and a complex environment such as the nearshore Arabian Sea probably cannot be fully modeled. Thus, the ERA5 wave dataset allows one to describe wave and wind fields across the Arabian Sea, providing a regional overview and overcoming the lack of in situ data, but a finer nested grid model has to be implemented to predict the wave field in that region. This is an ongoing activity.
Root mean squared error
Coefficient of determination
Hierarchical Quantum Master Equation Approach to Electronic-Vibrational Coupling in Nonequilibrium Transport through Nanosystems
Within the hierarchical quantum master equation (HQME) framework, an approach is presented, which allows a numerically exact description of nonequilibrium charge transport in nanosystems with strong electronic-vibrational coupling. The method is applied to a generic model of vibrationally coupled transport considering a broad spectrum of parameters ranging from the nonadiabatic to the adiabatic regime and including both resonant and off-resonant transport. We show that nonequilibrium effects are important in all these regimes. In particular in the off-resonant transport regime, the inelastic co-tunneling signal is analyzed for a vibrational mode in full nonequilibrium, revealing a complex interplay of different transport processes and deviations from the commonly used $G_0/2$-thumb-rule. In addition, the HQME-approach is used to benchmark approximate master equation and nonequilibrium Green's function methods.
Nanosystems are often characterized by strong coupling between electronic and vibrational or structural degrees of freedom. Examples include single-molecule junctions, 1-4 nanoelectromechanical systems 5,6 as well as suspended carbon nanotubes. [7][8][9] Strong electronicvibrational coupling manifests itself in vibronic structures in the transport characteristics and may result in a multitude of nonequilibrium phenomena such as currentinduced local heating and cooling, multistability, switching and hysteresis, as well as decoherence, which have been observed experimentally [10][11][12][13] and have been the focus of theoretical studies. [14][15][16][17][18] While in certain parameter regimes, approximate methods based on, e.g., scattering theory, master equations or nonequilibrium Green's functions (NEGF) have provided profound physical insight into transport mechanisms, [14][15][16][17][18][19][20][21][22][23][24][25] the theoretical study of strong coupling situations often requires the application of methods that can be systematically converged, i.e. numerically exact methods. Methods developed in this context include path integral approaches, [26][27][28][29] the scattering state numerical renormalization group technique 30 and the multilayer multiconfiguration time-dependent Hartree method. 17,[31][32][33] In this paper, the hierarchical quantum master equation (HQME) approach is formulated to study nonequilibrium transport in systems with strong electronicvibrational coupling. The HQME approach generalizes perturbative master equation methods by including higher-order contributions as well as non-Markovian memory and allows for the systematic convergence of the results. This approach was originally developed by Tanimura and Kubo in the context of relaxation dynamics. 34,35 Yan and coworkers 36,37 as well as Härtle et al. 38,39 have used it to study charge transport in models with electron-electron interaction. 
An approximate formulation of the HQME method for the treatment of electronic-vibrational coupling was recently proposed. 40 Here, we apply the HQME methodology for the first time within a numerically exact formulation to treat nonequilibrium transport in nanosystems with strong electronic-vibrational coupling. In contrast to other numerically exact approaches, the HQME method is directly applicable to steady-state transport without time propagation, which is an advantage for systems with slow relaxation.
We apply the methodology to study transport phenomena in a broad range of parameters, including off-resonant and resonant transport as well as the adiabatic and nonadiabatic transport regimes. In the off-resonant transport regime, it is shown that the peak-dip transition of the first inelastic cotunneling feature does not follow the commonly used $G_0/2$-thumb-rule, 1,[41][42][43][44] if the nonequilibrium excitation of the vibration is taken into account. The HQME method is also applied to benchmark approximate master equation and NEGF methods. To be specific, we adopt in the following the terminology used in the context of quantum transport in molecular nanojunctions. It should be noted, though, that the methodology is also applicable to other nanosystems with strong electronic-vibrational coupling, as mentioned above.
We consider a generic model of vibrationally coupled electron transport in molecular junctions with the Hamiltonian (we use units where $\hbar = e = 1$)

$$H = \epsilon_0 d^{\dagger}d + \sum_{k} \epsilon_k c_k^{\dagger}c_k + \sum_{k} \left( V_k\, c_k^{\dagger} d + \mathrm{h.c.} \right) + \Omega\, a^{\dagger}a + \lambda \left( a + a^{\dagger} \right) d^{\dagger}d .$$

A single electronic state with energy ǫ 0 located on the molecular bridge is coupled to a continuum of electronic states with energies ǫ k in the macroscopic leads via interaction matrix elements V k . The operators d † /d and c † k /c k denote the corresponding creation/annihilation operators. We consider a single vibrational mode with frequency Ω, creation/annihilation operators a † /a and electronic-vibrational coupling strength λ. The interaction between the molecule and the left/right lead is characterized by the spectral densities $\Gamma_{L/R}(\omega) = 2\pi \sum_{k\in L/R} |V_k|^2 \delta(\omega - \epsilon_k)$.
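The molecular part of this model (the electronic level, the vibrational mode and their coupling) is block-diagonal in the electron number and can be represented numerically in a truncated vibrational Fock space. The sketch below is illustrative (function name, truncation size and parameter values are ours); diagonalizing the occupied block recovers the polaron-shifted ground energy ε0 − λ²/Ω quoted in the text.

```python
import numpy as np

def molecular_hamiltonian(eps0, Omega, lam, n_vib):
    """Molecular part of the Hamiltonian,
    eps0*d^dag d + Omega*a^dag a + lam*(a + a^dag)*d^dag d,
    in a vibrational Fock space truncated at n_vib levels.
    Returns the blocks with the electronic level empty and occupied."""
    a = np.diag(np.sqrt(np.arange(1, n_vib)), k=1)  # annihilation op
    n_op = a.T @ a
    h_empty = Omega * n_op
    h_occ = eps0 * np.eye(n_vib) + Omega * n_op + lam * (a + a.T)
    return h_empty, h_occ
```

The occupied block is a displaced harmonic oscillator, so its lowest eigenvalue should approach ε0 − λ²/Ω once the Fock-space truncation is large enough for the chosen λ/Ω.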
To derive the HQME for electronic-vibrational coupling, it is expedient to employ a small polaron transformation, $\tilde H = S H S^{\dagger}$ with $S = \exp\{ d^{\dagger}d\, (\lambda/\Omega)( a^{\dagger} - a ) \}$. Introducing, furthermore, a system-bath partitioning, we obtain

$$\tilde H = \tilde\epsilon_0 d^{\dagger}d + \Omega\, a^{\dagger}a + \sum_{k} \epsilon_k c_k^{\dagger}c_k + \sum_{k} \left( V_k\, c_k^{\dagger} X d + \mathrm{h.c.} \right).$$

Thereby, the energy of the electronic state is renormalized by the reorganization energy, $\tilde\epsilon_0 = \epsilon_0 - \lambda^2/\Omega$, and the molecule-lead coupling term is dressed by the shift operator $X = \exp\{(\lambda/\Omega)(a - a^{\dagger})\}$.
As the bath coupling operators f^σ obey Gaussian statistics, all information about the system-bath coupling is encoded in the two-time correlation function of the free bath,

C^σ_K(t) = (1/2π) ∫ dω e^{σiωt} Γ_K(ω) f[σ(ω − μ_K)/(k_B T)],

with the Fermi distribution f and σ = ±. To derive a closed set of equations of motion within the HQME method, C^σ_K(t) is expressed by a sum over exponentials, C^σ_K(t) = Σ_{l=0}^{l_max} η_{K,l} e^{−γ_{K,σ,l} t}. 36 To this end, the Fermi distribution is represented by a sum-over-poles scheme employing a Pade decomposition 45 and the spectral density of the leads is assumed as a single Lorentzian, Γ_K(ω) = Γ W²/[(ω − μ_K)² + W²], where Γ denotes the overall molecule-lead coupling strength for a symmetric junction, μ_K the chemical potential of lead K and W the width of the band. Choosing the latter as W = 10^4 eV, the leads are effectively described in the wide-band limit. A symmetric drop of the bias voltage at the contacts is used.
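The sum-over-poles idea can be illustrated with the plain Matsubara expansion of the Fermi function, f(x) = 1/2 − Σ_l 2x/[x² + ((2l+1)π)²]; this is a stand-in for the Pade decomposition used in the text, which converges with far fewer poles (the reason it is preferred in practice). A minimal sketch:

```python
import numpy as np

def fermi(x):
    return 1.0 / (np.exp(x) + 1.0)

def fermi_matsubara(x, lmax):
    # plain Matsubara pole expansion (slowly convergent stand-in for Pade)
    l = np.arange(lmax)
    poles = (2 * l + 1) * np.pi          # pole positions in units of kT
    return 0.5 - np.sum(2.0 * x / (x**2 + poles**2))

# truncation error decays only as ~1/lmax, hence the need for many poles
err = abs(fermi_matsubara(1.0, 20000) - fermi(1.0))
```

Each retained pole contributes one exponential term to C^σ_K(t), so a decomposition needing fewer poles directly shrinks the hierarchy.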
Following a similar derivation as for the noninteracting model, 36 the HQMEs for vibrationally coupled transport are obtained as

ρ̇^(n)_{j_n...j_1} = −(i[H̄_S, ·] + Σ_{m=1}^{n} γ_{j_m}) ρ^(n)_{j_n...j_1} − i Σ_j Ã_j ρ^(n+1)_{j j_n...j_1} − i Σ_{m=1}^{n} (−1)^{n−m} C̃_{j_m} ρ^(n−1)_{j_n...j_{m+1} j_{m−1}...j_1},

with the vector notation j = (j_n, . . . , j_1) and multi-index j = (K, σ, l). Thereby, ρ^(0) denotes the reduced density operator of the system and ρ^(n)_j (n > 0) auxiliary density operators, which describe bath-related observables such as, e.g., the current, I_K(t) = i Σ_l Tr_S{d X ρ^(1)_{(K,+,l)}} + c.c. The equations differ from those of the noninteracting model by the superoperators Ã and C̃, which are dressed by the shift operator X, i.e., the bare operators d and d† enter only in the combinations d X and X†d†. In the calculations presented below, the coupled set of equations is solved directly for the steady state by setting ρ̇^(n)_j(t = ∞) = 0 (n ≥ 0). The hierarchy is truncated at a maximum level n_max = 4, which provides quantitatively converged results for the electrical current. While the approach introduced above keeps the vibrational mode as part of the system and thus allows a numerically exact treatment, the approximate HQME approach by Jiang et al. 40 treats it as part of the bath. As a result of the polaron transformation, the modified bath-coupling operators do not obey Gaussian statistics. Consequently, a HQME treatment based on the two-time correlation function neglects nonequilibrium vibrational excitation and partially electronic-vibrational correlations. 40 This will be demonstrated below.
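The direct steady-state solve, setting ρ̇ = 0, amounts to finding the null space of a singular generator subject to a trace condition. A toy sketch with a two-state rate equation (the rates are arbitrary and stand in for the full HQME superoperators):

```python
import numpy as np

# Toy generator for populations (rho_empty, rho_occupied); k_in/k_out are
# arbitrary illustrative rates, not HQME quantities.
k_in, k_out = 0.3, 0.7
A = np.array([[-k_in,  k_out],
              [ k_in, -k_out]])     # columns sum to zero -> A is singular

M = A.copy()
M[-1, :] = 1.0                      # replace one row by the trace condition
b = np.zeros(2)
b[-1] = 1.0                         # ... equal to 1
rho_ss = np.linalg.solve(M, b)      # steady state: [0.7, 0.3]
```

Replacing one row of the singular system with the normalization condition makes it uniquely solvable, a common device for steady-state master equations.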
In the following, we illustrate the performance of the method by applications to representative models covering a broad range of parameters (see Tab. I). We also use the numerically exact HQME approach to benchmark often-used approximate methods, including a Born-Markov master equation (BMME), 16 a 4th-order master equation (4th-order ME), and the nonequilibrium Green's function approach within the self-consistent Born approximation (NEGF-SCBA). Fig. 1 shows the current-voltage characteristics (I-Vs) and the average vibrational excitation for moderate (λ/Ω = 0.6) as well as strong (λ/Ω = 2) electronic-vibrational coupling and for a range of molecule-lead coupling strengths Γ. Focusing first on the I-Vs for model 1 (λ/Ω = 0.6) and Γ = 0.01 eV (Fig. 1a), corresponding to the nonadiabatic transport regime (Γ < Ω), the accurate HQME results exhibit the typical Franck-Condon step structure. The vibrational excitation depicted in the inset demonstrates the strong nonequilibrium character of the transport process, which results in values significantly larger than the thermal equilibrium value of 4.4 · 10⁻⁴. The current-induced vibrational excitation results in a suppression of the current for λ/Ω < 1. 16 As a result, the approximate HQME method of Jiang et al., 40 which neglects the nonequilibrium vibrational excitation, overestimates the current in the resonant transport regime (Φ ≳ 2ǭ_0). However, it includes the broadening of the electronic level due to molecule-lead coupling, which is completely neglected in the BMME. The 4th-order ME calculation perfectly agrees with the accurate result in this regime of small molecule-lead coupling.
In the regime of strong electronic-vibrational coupling (λ/Ω = 2, model 2), the first step in the I-V (Fig. 1b) is significantly smaller than for λ/Ω = 0.6. This is a manifestation of Franck-Condon blockade. 22 For λ/Ω > 1, the transitions between the low-lying vibrational states of the unoccupied and occupied molecular bridge are exponentially suppressed. In this case, the I-V obtained by Jiang's approximate HQME approach exhibits a lower current level than the accurate result because the Franck-Condon blockade is more pronounced if the nonequilibrium excitation of the molecular bridge is neglected. 22 The 4th-order ME reproduces the accurate result whereas the BMME shows small deviations due to the neglected molecule-lead broadening. Figs. 1c,d show I-Vs for moderate molecule-lead interaction, Γ = 0.1 eV. The increased molecule-lead interaction results in a broadening of the Franck-Condon steps. As a result, the deviations of the results obtained by the BMME are more pronounced than for Γ = 0.01 eV. For λ/Ω = 0.6, the 4th-order ME calculation exhibits spurious oscillations around the accurate result indicating the breakdown of perturbation theory. A similar behavior has already been reported in Ref. 47 for a double quantum dot with Coulomb interaction but without electronic-vibrational coupling. Remarkably, these oscillations are much less pronounced for λ/Ω = 2 and Γ = 0.1 eV. This can be attributed to the fact that the effective molecule-lead coupling, which determines the range of validity of the perturbative expansion, is given by |X_max|²Γ. 53 For strong molecule-lead coupling (Γ = 1 eV), corresponding to the adiabatic transport regime (Γ > Ω), the accurate HQME results predict almost linear I-Vs (Fig. 1e,f). For moderate electronic-vibrational coupling (λ/Ω = 0.6), the approximate HQME result shows rather good agreement, indicating negligible vibrational nonequilibrium effects.
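The exponential suppression underlying the Franck-Condon blockade can be made explicit by computing the matrix elements of the shift operator X = exp{(λ/Ω)(a − a†)} in a truncated Fock space (a sketch; the truncation N is a free choice, and the values of λ/Ω match the two regimes above):

```python
import numpy as np

def fc_factors(g, N=80):
    """Franck-Condon weights |<m| exp(g (a - a†)) |n>|^2, truncated at N."""
    a = np.diag(np.sqrt(np.arange(1, N)), 1)
    G = g * (a - a.T)                       # anti-Hermitian generator of X
    w, V = np.linalg.eigh(1j * G)           # exponentiate via eigh of i*G
    X = (V * np.exp(-1j * w)) @ V.conj().T  # X = exp(G)
    return np.abs(X) ** 2

F_mod = fc_factors(0.6)    # moderate coupling, lam/Om = 0.6
F_str = fc_factors(2.0)    # strong coupling, lam/Om = 2
```

The 0-0 weight is exp[−(λ/Ω)²]: about 0.70 for λ/Ω = 0.6 but only about 0.018 for λ/Ω = 2, which is the exponential suppression of low-lying transitions quoted above.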
For strong molecule-lead coupling, the Born-Markov approximation and the 4th-order ME treatment are invalid. In the case of additional strong electronic-vibrational coupling, the approximate version of the HQME method also fails (data not shown).

FIG. 2. IETS for model 3 and Γ = 6.667 · 10⁻³ eV. The purely electronic contribution I_el has been subtracted for a better resolution of inelastic effects. The 4th-order ME as well as the NEGF-SCBA approach are compared to the accurate HQME approach in panel (a). The inset shows the peak-dip structure at Φ = 0.328 V for T = 50 K. Panel (b) depicts a comparison with the approximate version of the HQME approach.
Next, we consider in more detail the off-resonant transport regime for low bias voltages, Φ < 2ǭ_0. In this regime transport is governed by elastic and inelastic cotunneling processes. 44 The latter result in characteristic structures in the inelastic electron tunneling spectrum (IETS), given by the second derivative of the current, d²I/dΦ², which have been observed for many molecular junctions. 1,54-56 Even though we consider a single vibrational mode, we already obtain a rather complex IETS, which is depicted for model 3 in Fig. 2a. The accurate HQME results exhibit a peak at Φ = Ω, which marks the onset of inelastic cotunneling via the emission of one vibrational quantum. The satellite peak at Φ = 2Ω corresponding to the emission of two vibrational quanta is suppressed and appears as a shoulder because of the overlap with the peak around Φ = Ω due to thermal broadening. For Φ ∈ [0.28, 0.44] V, the graph exhibits a structure which results from the superposition of two effects: (i) further inelastic cotunneling peaks at Φ = 3Ω and Φ = 4Ω, the intensity of which is, however, increasingly suppressed, and (ii) resonant transport processes facilitated by current-induced vibrational excitation. The latter processes include the deexcitation by n vibrational quanta and become active at the thresholds Φ = 2(ǭ_0 − nΩ). These resonant transport processes are reflected by peaks in the conductance and thus by a peak-dip feature in the IETS, which is more clearly seen for lower temperature in the inset of Fig. 2a.
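Numerically, an IETS is obtained from a sampled I-V curve by differentiating twice. The sketch below applies np.gradient to a synthetic, thermally broadened inelastic step (illustrative only, not data from the paper) and recovers the onset at Φ = Ω as a peak in d²I/dΦ².

```python
import numpy as np

# Synthetic I-V: linear background plus a smooth conductance step of
# relative height 5% opening at Phi = Omega (width mimics thermal smearing)
Omega, width = 0.1, 0.005
Phi = np.linspace(0.0, 0.3, 3001)
I = Phi + 0.05 * (Phi - Omega) * (1 + np.tanh((Phi - Omega) / width)) / 2

dI = np.gradient(I, Phi)      # differential conductance dI/dPhi
d2I = np.gradient(dI, Phi)    # IETS signal d^2I/dPhi^2
peak = Phi[np.argmax(d2I)]    # inelastic onset shows up as a peak
```

Subtracting the purely electronic contribution, as done for Fig. 2, sharpens weak inelastic features against the smooth background in the same way.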
The comparison of the numerically exact HQME results to results of approximate methods for the IETS reveals that the 4th-order ME provides a good approximation for Φ ≲ 0.2 V. For larger voltages it deviates significantly because it misses to some extent the broadening due to molecule-lead coupling. This is especially apparent in the lower-temperature result in the inset of Fig. 2a, which has reduced thermal broadening. The NEGF-SCBA approach underestimates the height of the first peak at Φ = 0.1 V in the IETS by almost 70% and essentially misses the second peak around Φ = 2Ω = 0.2 V. 52 This deficiency is a consequence of the thermal equilibrium treatment of the vibration. This is demonstrated by the cyan line, which has been obtained by using the average vibrational excitation obtained from the HQME calculation as input for the SCBA calculation, resulting in good agreement of the IETS with the HQME result for Φ ≲ 0.5 V. The approximate version of the HQME method (solid blue line in Fig. 2b) overestimates the height of the inelastic cotunneling peaks profoundly. This shows the importance of electronic-vibrational correlations, in particular in the off-resonant transport regime. 32 Finally, we consider in Fig. 3 the change of the IETS line shape upon increase of the molecule-lead coupling, which has been the focus of several theoretical studies recently. 50,57-59 The HQME results show the transition of the inelastic cotunneling feature from a peak (Γ ≲ 0.6 eV) to a dip (Γ ≳ 0.8 eV) via a dip-peak feature in the interval Γ ∈ [0.7, 0.75] eV. Notably, our results do not strictly follow the commonly used G_0/2 thumb rule, 1,41-44 which states that for a system with a zero-bias conductance (determined in the non-interacting case) smaller than half of the conductance quantum G_0, the IETS exhibits a peak, whereas it shows a dip for higher zero-bias conductance.
This rule was originally derived based on a lowest-order perturbative expansion in the electronic-vibrational coupling. 41 Assuming a thermally equilibrated vibration, it was later generalized by Egger et al. 58 and Entin-Wohlman et al., 59 who found that the peak-dip transition is not universal at a zero-bias conductance of G_0/2 but depends on all model parameters. They reported an upper bound of G_0/2 for the peak-dip transition. In contrast, the results obtained for Γ = 0.6 eV (orange circles in Fig. 3a), corresponding to a zero-bias conductance of 0.54 G_0 (0.5 G_0 in the non-interacting case), still exhibit a peak in the IETS at Φ = Ω. The crossover between the peak- and dip-like structure rather occurs for a zero-bias conductance between 0.62 G_0 and 0.65 G_0 (0.58 G_0 and 0.61 G_0 in the non-interacting case) in model 4. This is demonstrated by the green (Γ = 0.7 eV) and cyan (Γ = 0.75 eV) circles, which show a dip-peak feature around Φ = Ω. Our findings suggest that the deviations from the G_0/2 thumb rule result from the nonequilibrium excitation of the vibrational mode. This conjecture is confirmed by the comparison of the HQME results with SCBA as well as FSCBA calculations in Fig. 3. While the SCBA results, which treat the vibration in equilibrium, strictly follow the G_0/2 thumb rule, the FSCBA results, which incorporate nonequilibrium effects within a perturbative treatment, are in rather good agreement with the HQME results. The comparison of different truncation levels shows that the HQME results for the conductance are quantitatively converged for n = 4. For the IETS, small deviations occur for some of the parameters. This is not very surprising, because the quantity d²(I − I_el)/dΦ² is more difficult to converge than the current or the conductance (Fig. 3b).
In summary, the HQME method presented here allows a numerically exact treatment of nonequilibrium charge transport in nanosystems with strong electronic-vibrational coupling. It covers a broad spectrum of parameters ranging from the nonadiabatic to the adiabatic regime and including both resonant and off-resonant transport. Being a nonperturbative method that includes all nonequilibrium effects, it allows a comprehensive description of this complex transport problem, as demonstrated here, for example, in the analysis of the structures and line shapes of the IETS. In the current formulation, the use of the exponential expansion of the bath correlation functions limits the application to moderate and high temperatures. Recent proposals 60,61 to overcome this limitation appear promising. The implementation of such improved schemes as well as the extension of the method to describe current fluctuations will be the subject of future work.
Theory of Parabolic Arcs in Interstellar Scintillation Spectra
Our theory relates the secondary spectrum, the 2D power spectrum of the radio dynamic spectrum, to the scattered pulsar image in a thin scattering screen geometry. Recently discovered parabolic arcs in secondary spectra are generic features for media that scatter radiation at angles much larger than the rms scattering angle. Each point in the secondary spectrum maps particular values of differential arrival-time delay and fringe rate (or differential Doppler frequency) between pairs of components in the scattered image. Arcs correspond to a parabolic relation between these quantities through their common dependence on the angle of arrival of scattered components. Arcs appear even without consideration of the dispersive nature of the plasma. Arcs are more prominent in media with negligible inner scale and with shallow wavenumber spectra, such as the Kolmogorov spectrum, and when the scattered image is elongated along the velocity direction. The arc phenomenon can be used, therefore, to constrain the inner scale and the anisotropy of scattering irregularities for directions to nearby pulsars. Arcs are truncated by finite source size and thus provide sub-microarcsecond resolution for probing emission regions in pulsars and compact active galactic nuclei. Multiple arcs, sometimes seen, signify two or more discrete scattering screens along the propagation path, and small arclets oriented oppositely to the main arc and persisting for long durations indicate the occurrence of long-term multiple images from the scattering screen.
to the underlying scattered image of the pulsar. We find that the arc phenomenon is a generic feature of forward scattering and we identify conditions that enhance or diminish arcs. This paper contains an explanation for arcs referred to in Paper 1 and also in a recent paper by Walker et al. (2004), who present a study of the arc phenomenon using approaches that complement ours. In §2 we review the salient observed features of scintillation arcs. Then in §3 we introduce the general theory of secondary spectra through its relation to angular scattering and provide examples that lead to generalizations of the arc phenomenon. In §4 we discuss cases relevant to the interpretation of observed phenomena. In §5 we discuss the scattering physics and show examples of parabolic arcs from a full screen simulation of the scattering, and we end with discussion and conclusions in §6.
2. observed phenomena
The continuum spectrum emitted by the pulsar is deeply modulated by interference associated with propagation through the irregular, ionized ISM. Propagation paths change with time owing to motions of the observer, medium and pulsar, causing the modulated spectrum to vary. The dynamic spectrum, S(ν, t), is the main observable quantity for our study. It is obtained by summing over the on-pulse portions of several to many pulse periods in a multi-channel spectrometer covering a total bandwidth up to ∼ 100 MHz, for durations ∼ 1 hr. We compute its two-dimensional power spectrum S₂(f_ν, f_t) = |S̃(f_ν, f_t)|², the secondary spectrum, where the tilde indicates a two-dimensional Fourier transform and f_ν and f_t are variables conjugate to ν and t, respectively. The total receiver bandwidth and integration time define finite ranges for the transform.
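The construction of the secondary spectrum can be sketched in a few lines: form a dynamic spectrum, Fourier transform in both coordinates, and take the squared modulus. The fringe below is synthetic, standing in for interference between two scattered waves, and the grid sizes and fringe coordinates are arbitrary.

```python
import numpy as np

nu = np.linspace(0, 1, 128)[:, None]   # frequency channels (arb. units)
t = np.linspace(0, 1, 256)[None, :]    # time samples (arb. units)
f_nu0, f_t0 = 10.0, 25.0               # fringe coordinates to recover

# dynamic spectrum: mean level plus one sinusoidal fringe
S = 1.0 + np.cos(2 * np.pi * (f_nu0 * nu + f_t0 * t))

# secondary spectrum: 2D power spectrum of the (mean-subtracted) dynamic spectrum
S2 = np.abs(np.fft.fftshift(np.fft.fft2(S - S.mean()))) ** 2

# the fringe appears as a symmetric pair of points at (+-f_nu0, +-f_t0)
iy, ix = np.unravel_index(np.argmax(S2), S2.shape)
f_nu_axis = np.fft.fftshift(np.fft.fftfreq(128, d=nu[1, 0] - nu[0, 0]))
f_t_axis = np.fft.fftshift(np.fft.fftfreq(256, d=t[0, 1] - t[0, 0]))
```

A real dynamic spectrum superposes many such fringes, one per pair of scattered-wave components, which is what fills out the arc.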
The dynamic spectrum shows randomly distributed diffractive maxima (scintles) in the frequency-time plane. Examples are shown in Figure 1. In addition, organized patterns such as tilted scintles, periodic fringe patterns, and a loosely organized crisscross pattern have been observed and studied (e.g. Hewish, Wolszczan, & Graham 1985; Cordes & Wolszczan 1986). In an analysis of four bright pulsars observed with high dynamic range at Arecibo, we discovered that the crisscross pattern has a distinctive signature in the secondary spectrum (Paper 1). Faint, but clearly visible power extends away from the origin in a parabolic pattern or arc, the curvature of which depends on observing frequency and pulsar. The observational properties of these scintillation arcs are explored in more detail in Hill et al. (2003; hereafter Paper 2) and Stinebring et al. (in preparation, 2004; hereafter Paper 3). Here we summarize the major properties of pulsar observations in order to provide a context for their theoretical explanation:

1. Although scintillation arcs are faint, they are ubiquitous and persistent when the secondary spectrum has adequate dynamic range 4 and high frequency and time resolution (see Figure 1 and Paper 3).
2. The arcs often have a sharp outer edge, although in some cases (e.g. Figure 1c) the parabola is a guide to a diffuse power distribution. There is usually power inside the parabolic arc ( Figure 1), but the power falls off rapidly outside the arc except in cases where the overall distribution is diffuse.
3. The arc outlines are parabolic with minimal tilt or offset from the origin: f_ν = a f_t². Although symmetrical outlines are the norm, there are several examples of detectable shape asymmetries in our data.

4. In contrast to the symmetrical shape typical of the arc outline, the power distribution of the secondary spectrum can be highly asymmetric in f_t for a given f_ν and can show significant substructure. An example is shown in Figure 2. The timescale for change of this substructure is not well established, but some patterns and asymmetries have persisted for months.
5. A particularly striking form of substructure consists of inverted arclets with the same value of |a| and with apexes that lie along or inside the main arc outline (Figure 2 panels b and c).
6. Although a single scintillation arc is usually present for each pulsar, there is one case (PSR B1133+16) in which multiple scintillation arcs, with different a values, are seen (Figure 3). At least two distinct a values (and, perhaps, as many as four) are traceable over decades of time.
7. Arc curvature accurately follows a simple scaling law with observing frequency: a ∝ ν⁻² (Paper 2). In contemporaneous month-long observations over the range 0.4-2.2 GHz, scintillation arcs were present at all frequencies if they were visible at any for a given pulsar, and the scintillation arc structure became sharper and better defined at high frequency.
8. The arc curvature parameter a is constant at the 5-10% level for ∼ 20 years for the half dozen pulsars for which we have long-term data spans (see Paper 3).
In this paper we explain or otherwise address all of these points as well as explore the dependence of arc features on properties of the scattering medium and source.

Fig. 1. Primary and secondary spectrum pairs are shown for 4 pulsars that exhibit the scintillation arc phenomenon. The grayscale for the primary spectrum is linear in flux density. For the secondary spectrum, the logarithm of the power is plotted, and the grayscale extends from 3 dB above the noise floor to 5 dB below the maximum power level. The dispersion measures of these pulsars range from 3.2 to 48.4 pc cm^−3 and the scattering measures range from 10^−4.5 to 10^−3.6 kpc m^−20/3, with PSR B1929+10 and PSR B0823+26 representing the extremes in scattering measure. The data shown here and in Figures 2-3 were obtained from the Arecibo Observatory.
3. theory of secondary spectra and scintillation arcs
A Simple Theory
Though the parabolic arc phenomenon is striking and unexpected, it can be understood in terms of a simple model based on a "thin screen" of ionized gas containing fluctuations in electron density on a wide range of scales. Incident radiation is scattered by the screen and then travels through free space to the observer, at whose location radiation has a distribution in angle of arrival. The intensity fluctuations are caused by interference between different components of the angular spectrum whose relative phases increase with distance from the screen. Remarkably, we find that the arcs can be explained solely in terms of phase differences from geometrical path lengths. The phenomenon can equally well be described, using the Fresnel-Kirchhoff diffraction integral, by spherical waves originating at a pair of points in the screen and interfering at the observer after traveling along different geometric paths. Thus, though the screen is dispersive, the arcs arise simply from diffraction and interference. Dispersion and refraction in the screen will likely alter the shapes of the arcs, though we do not analyze these effects in this paper.

Fig. 2. Three observations of PSR B0834+06 taken within two weeks of each other. The asymmetry with respect to the conjugate time axis is present, in the same sense, in all three observations. The broad power distribution at 430 MHz in (a) is much sharper one day later at 1175 MHz (b); however a more diffuse component has returned 14 days later (c). Note that the scales for the delay axis in (b) and (c) differ from that in (a). The inverted parabolic arclets noticeable in panels b and c are a common form of substructure for this and several other pulsars.
To get interference, the screen must be illuminated by a radiation field of high spatial coherence, i.e. radiation from a point-like source. The source can be temporally incoherent because the different components of the temporal Fourier transform each contribute nearly identical interference patterns, as in interferometry. Two components of the angular spectrum arriving from directions θ1 and θ2 interfere to produce a two-dimensional fringe pattern whose phase varies slowly with observing frequency. The pattern is sampled in time and frequency as the observer moves through it, creating a sinusoidal fringe pattern in the dynamic spectrum. The fringe appears as a single Fourier component in the secondary spectrum. Under the small-angle scattering of the ISM its f_ν coordinate is the differential geometric delay, ∝ θ2² − θ1², and its f_t coordinate is the fringe rate, ∝ V_⊥ · (θ2 − θ1), where V_⊥ is an appropriate transverse velocity (see below). A quadratic relationship between f_ν and f_t results naturally from their quadratic and linear dependences on the angles. When one of the interfering waves is undeviated (e.g. θ1 = 0) we immediately get the simple parabola f_ν ∝ f_t². When the scattering is weak there will be a significant undeviated contribution resulting in a simple parabolic arc. However, we also find that an arc appears in strong scattering due to the interference of waves contained within the normal scattering disc with a faint halo of waves scattered at much larger angles. The faint halo exists only for scattering media having wavenumber spectra less steep than (wavenumber)⁻⁴, including the Kolmogorov spectrum.
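The geometry of this argument is easy to verify numerically: with the velocity along x and an undeviated reference wave (θ1 = 0), each scattered direction θ2 contributes at p = |θ2|² and q = θ2x (in the dimensionless units used below), so p − q² = θ2y² ≥ 0. All interference power lies on or inside the parabola p = q², and the arc itself is traced by waves scattered along the velocity direction. A sketch with a hypothetical Gaussian angular distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
theta2 = rng.normal(size=(10000, 2))   # hypothetical scattered angles (x, y)

p = np.sum(theta2**2, axis=1)          # dimensionless delay, theta1 = 0
q = theta2[:, 0]                       # dimensionless fringe rate, velocity along x
# p - q**2 equals theta2_y**2, so every point lies on or above p = q**2
```

Points with θ2y ≈ 0 (scattering along the velocity direction) fall on the parabola, which is why a sharp outer edge appears.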
While the relation of f_ν to time delay is well known through the Fourier relationship of the pulse broadening to the radio spectrum intensity scintillation, f_t can be interpreted in several ways besides the fringe rate. Scintillation versus time is often Fourier analyzed into a spectrum versus frequency (f_t) in Hz; in turn this is simply related to spatial wavenumber, f_t = κ · V_⊥/(2π), and hence to angle of arrival (κ = kθ). It can also be thought of as the beat frequency due to the different Doppler shifts of the two scattered waves. Thus the secondary spectrum can be considered as a differential delay-Doppler spectrum, similar to that measured in radar applications.
Screen Geometry
In this section we discuss the relationship between the scattered image and the secondary spectrum S₂. Later, in §5, we obtain the relation more rigorously by deriving S₂ as a fourth moment of the electric field.

Fig. 3. One of the pulsars in our sample, PSR B1133+16, shows multiple scintillation arcs on occasion. The broad, asymmetric power distribution in (a) has numerous arclets at 321 MHz. Panels (b) and (c) are at frequencies above 1 GHz. Panel (b) shows two clear arcs (along with a vertical line at f_t due to narrowband RFI and the sidelobe response of power near the origin). Four months later (c), only the outer of these two arcs, widened by the a ∝ ν⁻² scaling, is visible.

Explicit results can be obtained
in the limits of strong and weak scintillation, as shown in Appendices C and D. We find that the weak scintillation result is simpler, being a second moment of the electric field since it only involves the interference of scattered waves with the unscattered field. However, pulsars are typically observed in strong scintillation, and the strong scintillation limit gives exactly the same result as in the approximate theory (Eq. 8) used below. So we now apply the approximate theory to spherical waves from a pulsar scattered by a thin screen.
Consider the following geometry: a point source at z = 0, a thin screen at z = D_s, and an observer at z = D. For convenience, we define s = D_s/D. The screen changes only the phase of incident waves, but it can both diffract and refract radiation from the source. For a single frequency emitted from the source, two components of the angular spectrum at angles θ1, θ2 (measured relative to the direct path) sum and interfere at the observer's location with a phase difference Φ. The resulting intensity is I = I1 + I2 + 2√(I1 I2) cos Φ, where I1 and I2 are the intensities from each component (e.g. Cordes & Wolszczan 1986). The total phase difference, Φ = Φ_g + φ, includes a contribution from geometrical path-length differences, Φ_g, and from the screen phase, φ, which can include both small and large-scale structures that refract and diffract radiation. Expanding Φ to first order in time and frequency increments, δt and δν, the phase difference is

Φ(δν, δt) ≈ Φ(ν0, t0) + 2π (f_ν δν + f_t δt),    (1)

where δν = ν − ν0, δt = t − t0 and t0, ν0 define the center of the observing window; f_t = (1/2π)∂_t Φ is the fringe rate or differential Doppler shift, and f_ν = (1/2π)∂_ν Φ is the differential group delay. In general Φ includes both a geometrical path-length difference and a dispersive term. At the end of §5.2 we consider the effects of dispersion, but in many cases of pulsar scintillation it can be shown that the dispersive term can be ignored. We proceed to retain only the geometric delays and obtain results (Appendix A) that were reported in Paper 1:

f_ν = D(1 − s)(θ2² − θ1²)/(2cs),    (2)
f_t = V_⊥ · (θ2 − θ1)/(sλ).    (3)

Here λ is the wavelength at the center of the band, s = D_s/D, and V_⊥ is the velocity of the point in the screen intersected by a straight line from the pulsar to the observer, given by a weighted sum of the velocities of the source, screen and observer (e.g. Cordes & Rickett 1998):

V_⊥ = (1 − s)V_p,⊥ + s V_obs,⊥ − V_scr,⊥.    (4)

It is also convenient to define an effective distance D_e, D_e = D s(1 − s).
The two-dimensional Fourier transform of the interference term cos Φ(δν, δt) is a pair of delta functions placed symmetrically about the origin of the delay fringe-rate plane. The secondary spectrum at (f_ν, f_t) is thus the summation of the delta functions from all pairs of angles subject to Eqs. 2 and 3, as we describe in the next section.
Secondary Spectrum in Terms of the Scattered Brightness
In this section we examine how the form of the secondary spectrum varies with the form assumed for the scattered image, without considering the associated physical conditions in the medium. We postpone to §5 a discussion of the scattering physics and the influence of the integration time. The integration time is important in determining whether the scattered brightness is a smooth function or is broken into speckles as discussed by Narayan & Goodman (1989). This in turn influences whether the secondary spectrum takes a simple parabolic form or becomes fragmented.
We analyze an arbitrary scattered image by treating its scattered brightness distribution, B(θ), as a probability density function (PDF) for the angles of scattering. In the continuous limit the secondary spectrum is the joint PDF of f_ν, f_t subject to the constraints of Eqs. 2 and 3. It is convenient to use dimensionless variables for the delay and fringe rate:

p ≡ (2cs/[D(1 − s)]) f_ν = θ2² − θ1²,    (6)
q ≡ (sλ/V_⊥) f_t = (θ2 − θ1) · V̂_⊥,    (7)

where V̂_⊥ is a two-dimensional unit vector for the transverse effective velocity. It is also useful to normalize angles by the characteristic diffraction angle, θ_d, so in some contexts discussed below θ → θ/θ_d, in which case p → f_ν/τ_d and q → 2π∆t_d f_t, where τ_d is the pulse broadening time and ∆t_d is the diffractive scintillation time. The secondary spectrum is then given by an integral of the conditional probability δ(p − p̄)δ(q − q̄) for a given pair of angles multiplied by the PDFs of those angles,

S₂(p, q) = ∫ d²θ1 d²θ2 B(θ1) B(θ2) δ(p − p̄) δ(q − q̄),    (8)

where p̄, q̄ are the values of p, q for particular values of θ1,2 as given in Eqs. 6 and 7. In Appendix C, Eq. 8 is derived formally from the ensemble average in the limit of strong diffractive scattering. The secondary spectrum is symmetric through the origin (p → −p and q → −q). Eq. (8) shows that it is essentially a distorted autocorrelation of the scattered image. With no loss of generality we simplify the analysis by taking the direction of the velocity to be the x direction. The four-fold integration in Eq. 8 may be reduced to a double integral by integrating the delta functions over, say, θ2 to obtain

S₂(p, q) = ∫ d²θ1 B(θ1) [B(θ1x + q, √U) + B(θ1x + q, −√U)] H(U)/(2√U),    (9)

where

U ≡ p − q² − 2qθ1x + θ1y²    (10)

and H(U) is the unit step function. With this form, the integrand is seen to maximize at the singularity U = 0, which yields a quadratic relationship between p and q. For an image offset from the origin by angle θ0, e.g. B(θ) → B(θ − θ0), the form for S₂ is similar except that U = p − q² − 2q(θ1x + θ0x) + (θ1y + θ0y)² and the θ_y arguments of B in square brackets in Eq. 9 become ±√U − θ0y.
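The reduction from Eq. 8 to Eq. 9 can be checked numerically: substituting p = θ2² − θ1² and q = θ2x − θ1x into U of Eq. 10 collapses it to θ2y², which is why integrating out θ2 leaves the two roots θ2y = ±√U weighted by 1/(2√U). A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
th1 = rng.normal(size=(1000, 2))       # random pairs of scattering angles
th2 = rng.normal(size=(1000, 2))

# delta-function constraints of Eq. 8 (velocity along x)
p = (th2**2).sum(axis=1) - (th1**2).sum(axis=1)
q = th2[:, 0] - th1[:, 0]

# U of Eq. 10 evaluated on those constraints: it equals theta2_y**2 identically
U = p - q**2 - 2 * q * th1[:, 0] + th1[:, 1] ** 2
```

The identity holds term by term: q² + 2qθ1x = θ2x² − θ1x², so the x-parts cancel and only θ2y² survives.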
The integrable singularity at U = 0 makes the form of Eq. 9 inconvenient for numerical evaluation. In Appendix B we use a change of variables to avoid the singularity, giving a form (Eq. B4) that can be used in numerical integration. However, this does not remove the divergence of S₂ at the origin p = q = 0. In Appendix B we show that inclusion of finite resolutions in p or q, which follow from the finite extent of the original dynamic spectrum in time and frequency, avoids the divergence at the origin, emphasizing that the dynamic range in the secondary spectrum is strongly influenced by resolution effects.
The relation p = q² follows from the singularity U = 0 after integration over angles in Eq. 9 and is confirmed in a number of specific geometries discussed below. This relation becomes f_ν = a f_t² in dimensional units, with

a = D s(1 − s) λ²/(2 c V_⊥²) ≈ 0.46 s³ D_kpc s(1 − s) ν⁻² V₁₀₀⁻²,    (11)

where D_kpc is the distance in kpc, V_⊥ = 100 km s⁻¹ V₁₀₀ and ν is in GHz. For screens halfway between source and observer (s = 1/2), this relation yields values for a that are consistent with the arcs evident in Figures 1-3 and also shown in Papers 1-3. As noted in Paper 1, a does not depend on any aspect of the scattering screen save for its fractional distance from the source, s, and it maximizes at s = 1/2 for fixed V_⊥. When a single arc occurs and other parameters in Eq. 11 are known, s can be determined to within a pair of solutions that is symmetric about s = 1/2. However, when V_⊥ is dominated by the pulsar speed, s can be determined uniquely (cf. Paper 1).
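As an order-of-magnitude sketch (using the scalings quoted here: a ∝ ν⁻², maximized at s = 1/2), the curvature can be evaluated as a = c D s(1 − s)/(2ν²V_⊥²). The numbers below are illustrative, and prefactor conventions vary between treatments.

```python
C_LIGHT = 2.998e8    # m/s
KPC = 3.086e19       # m

def arc_curvature(D_kpc, s, nu_GHz, V_kms):
    """Arc curvature a (s^3) for a screen at fractional distance s
    from the source; a = c D s(1-s) / (2 nu^2 Vperp^2)."""
    D = D_kpc * KPC
    nu = nu_GHz * 1e9
    V = V_kms * 1e3
    return C_LIGHT * D * s * (1 - s) / (2 * nu**2 * V**2)

# illustrative numbers: D = 1 kpc, s = 1/2, nu = 1 GHz, Vperp = 100 km/s
a_example = arc_curvature(1.0, 0.5, 1.0, 100.0)   # of order 0.1 s^3
```

The function makes the quoted behavior explicit: a falls as ν⁻² and V_⊥⁻², and for fixed V_⊥ peaks at s = 1/2.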
Properties of the Secondary Spectrum
To determine salient properties of the secondary spectrum that can account for many of the observed phenomena, we consider special cases of images for which Eq. 8 can be evaluated. As noted above the effects of finite resolution are important both observationally and computationally and are discussed in Appendix B.
Point Images
A point image produces no interference effects, so the secondary spectrum consists of a delta function at the origin, (p, q) = (0, 0). Two point images with amplitudes a_1 and a_2 produce fringes in the dynamic spectrum, which give delta functions of amplitude a_1a_2 at the position given by Eqs. 6 and 7 and its counterpart reflected through the origin, and also a delta function at the origin with amplitude 1 − 2a_1a_2. Evidently, an assembly of point images gives a symmetrical pair of delta functions in (p, q) for each pair of images.
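The two-point-image case can be checked with a toy dynamic spectrum: the intensity is a constant plus a sinusoidal fringe (the 2 a_1 a_2 cos cross term), and its 2-D power spectrum contains a single conjugate pair of delta-function-like peaks. Grid size, amplitudes, and fringe cycle counts below are illustrative; integer cycle counts are chosen so the peaks land on exact FFT bins.

```python
import numpy as np

N = 64                 # channels x time samples (toy grid)
a1, a2 = 0.9, 0.4      # amplitudes of the two point components
kf, kt = 3, 5          # fringe cycles across the band and the observation

# Dynamic spectrum of two interfering point images: constant level plus
# the sinusoidal cross term of the intensity.
jj, ii = np.meshgrid(np.arange(N), np.arange(N))
dyn = a1**2 + a2**2 + 2 * a1 * a2 * np.cos(2 * np.pi * (kf * ii + kt * jj) / N)

# Secondary spectrum: squared magnitude of the 2-D FFT of the
# mean-subtracted dynamic spectrum.
S2 = np.abs(np.fft.fft2(dyn - dyn.mean()))**2

# All power sits in a conjugate pair of peaks at (kf, kt) and its
# reflection through the origin, as the text describes.
peaks = np.argwhere(S2 > 0.5 * S2.max())
```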
One-dimensional Images
Arc features can be prominent for images elongated in the direction of the effective velocity. Consider an extreme but simple case of a one-dimensional image extended along the x axis only, B(θ) = g(θ_x)δ(θ_y) (Eq. 12), where g is an arbitrary function. The secondary spectrum then follows from Eq. 9 (Eq. 13). By inspection, parabolic arcs extend along p = ±q², with amplitude S_2 ∝ g(q)/|q| along the arc, becoming wider in p at large |q|. However, for the particular case where g is a Gaussian function, the product of the g functions in Eq. (13) is ∝ exp[−(p² + q⁴)/2q²], which cuts off the arcs very steeply (see §4.1). For images with slower fall-offs at large angles, the arc features can extend far from the origin. For the same image shape elongated transverse rather than parallel to x, inspection of Eqs. 7 and 8 indicates that S_2(p, q) ∝ δ(q), so there is only a ridge along the q-axis. These examples suggest that prominent arcs are expected when images are aligned with the direction of the effective velocity, as we confirm below for more general image shapes.
Images with a Point and an Extended Component
Consider a scattered image consisting of a point source at θ_p and an arbitrary, two-dimensional image component (Eq. 14). The secondary spectrum consists of the two self-interference terms from each image component and cross-component terms of the form of Eq. 15, where U = p − q² − 2qθ_{px} + θ²_{py}, H(U) is the unit step function, and "S.O." denotes an additional term that is symmetric through the origin, corresponding to letting p → −p and q → −q in the first term. A parabolic arc is defined by the H(U)U^{−1/2} factor. As U → 0 the arc amplitude is a_1a_2 U^{−1/2} g(q + θ_{px}, 0). Considering g to be centered on the origin with width W_{gx} in the x direction, the arc extends to |q + θ_{px}| ≲ W_{gx}/2. Of course, if the point component has a small but non-zero diameter, the amplitude of the arc would be large but finite. If the point is at the origin, the arc is simply p = q², as already discussed. In such cases Eq. 15 can be inverted to estimate g(θ) from measurements of ∆S_2. The possibility of estimating the two-dimensional scattered brightness from observations with a single dish is one of the intriguing aspects of the arc phenomenon.
With the point component displaced from the origin, the apex of the parabola is shifted from the origin to

(p, q)_apex = (−θ_p², −θ_{px}).   (16)

By inspection of Eq. 15, the condition U = 0 implies that the arc is dominated by contributions from image components scattered parallel to the velocity vector, at angles θ = (q + θ_{px}, 0). Thus images elongated along the velocity vector produce arcs enhanced over those produced by symmetric images. This conclusion is general and is independent of the location, θ_p, of the point component. Figure 4 shows parabolic arc lines (U = 0) for various assumed positions of the point component. Eq. 16 gives a negative delay at the apex, but since S_2 is symmetric about the origin, there is also a parabola with a positive apex, but with reversed curvature. When θ_{py} = 0 the apex must lie on the basic arc p = q² (left panel); otherwise the apex must lie inside it (right panel). Such features suggest an explanation for the reversed arclets that are occasionally observed, as in Figure 2. We emphasize that the discussion here has been restricted to the interference between a point and an extended component; one must add the self-interference terms for a full description.
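The apex geometry of Eq. 16 can be checked directly. The sketch below (angles in the normalized, dimensionless units of the text; specific angle values are illustrative) confirms that the apex lies on the basic arc when θ_py = 0 and inside it otherwise.

```python
def arclet_apex(theta_px, theta_py):
    """Apex of the reversed arclet from a point component at theta_p,
    per Eq. 16: (p, q)_apex = (-|theta_p|**2, -theta_px), in the
    normalized variables of the text."""
    return -(theta_px**2 + theta_py**2), -theta_px


# The text's two cases: with theta_py = 0 the apex lies on the basic
# arc (|p| = q**2); with theta_py != 0 it lies inside it (|p| > q**2).
p_on, q_on = arclet_apex(1.5, 0.0)   # point component on the x-axis
p_in, q_in = arclet_apex(1.5, 0.8)   # point component off the x-axis
```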
The foregoing analysis can, evidently, be applied to a point image and an ensemble of subimages. The general nature of S_2 is similar in that it is enhanced along the curve U = 0, which emphasizes subimages lying along the velocity vector.
Secondary Spectra for Cases Relevant to the Interstellar Medium
We now compute the theoretical secondary spectrum for various simple scattered brightness functions commonly invoked for interstellar scattering. The computations include self-interference and also the finite resolution effects, discussed in Appendix B.
Elliptical Gaussian Images
Measurements of angular broadening of highly scattered OH masers (Frail et al. 1994), Cyg X-3 (Wilkinson et al. 1994), pulsars (Gwinn et al. 1993), and AGNs (e.g. Spangler & Cordes 1988; Desai & Fey 2001) indicate that the scattering is anisotropic and, in the case of Cyg X-3, consistent with a Gaussian scattering image, which probably indicates diffractive scales smaller than the "inner scale" of the medium. Dynamic spectra cannot be measured for such sources because the time and frequency scales of the diffractive scintillations are too small and are quenched by the finite source size. However, some less scattered pulsars may have measurable scintillations that are related to an underlying Gaussian image.
The secondary spectrum for an elliptical Gaussian image cannot be solved analytically, although Eq. B4 can be reduced to a one dimensional integral which simplifies the numerical evaluation. Figure 5 shows some examples. The upper left hand panel is for a single circular Gaussian image. It does not exhibit arc-like features, although it "bulges" out along the dashed arc-line (p = q 2 ). Upper right is for an elliptical Gaussian image with a 2:1 axial ratio parallel to the velocity vector. The 3:1 case (lower left) shows the deep "valley" along the delay (p) axis, which is characteristic of images elongated parallel to the velocity. Such deep valleys are frequently seen in the observations and provide evidence for anisotropic scattering. Notice, however, that the secondary spectrum can be strong outside of the arc line, where the contours become parallel to the p-axis. Inside the arc-line the contours follow curves like the arc-line. The 3:1 case with the major axis transverse to the velocity (lower right) shows enhancement along the delay axis with no bulging along the arc-line.
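Since the elliptical Gaussian case has no closed form, S_2 can be estimated by Monte Carlo, treating B(θ) as the PDF of scattering angles as in §3. The sign convention for (p, q), the 3:1 axial ratio, and the grid and sample choices below are assumptions for illustration; the check at the end probes the "bulge" along the arc versus the deep valley on the delay axis described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Elliptical Gaussian image, 3:1 axial ratio, major axis parallel to
# the effective velocity (x direction); angles in units of theta_d.
sx, sy = 3.0, 1.0
t1 = rng.normal(0.0, [sx, sy], size=(n, 2))
t2 = rng.normal(0.0, [sx, sy], size=(n, 2))

# Conjugate variables for each pair of angles (sign convention chosen
# to be consistent with Eq. 10; an assumption of this sketch):
p = (t2**2).sum(axis=1) - (t1**2).sum(axis=1)
q = t2[:, 0] - t1[:, 0]

# Histogram estimate of the secondary spectrum S2(p, q).
S2, p_edges, q_edges = np.histogram2d(
    p, q, bins=[np.linspace(-20, 20, 81), np.linspace(-10, 10, 41)])


def cell(pv, qv):
    """Counts in the histogram cell just below the point (pv, qv)."""
    i = np.searchsorted(p_edges, pv) - 1
    j = np.searchsorted(q_edges, qv) - 1
    return S2[i, j]


on_arc = cell(16.0, 4.0)    # on the arc p = q**2, at q = 4
on_axis = cell(16.0, 0.0)   # same delay, zero fringe rate (p-axis valley)
```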
Media with Power-Law Structure Functions
Here we consider brightness distributions associated with wavenumber spectra having a power-law form. For circularly symmetric scattering, the phase structure function D_φ(b) depends only on the magnitude of the baseline, so the visibility function is Γ(b) = exp[−D_φ(b)/2] (Eq. 17), where s_0 is the spatial scale of the intensity diffraction pattern, defined such that |Γ(s_0)|² = e⁻¹ (e.g. Cordes & Rickett 1998). The corresponding image is the Hankel transform B(θ) ∝ ∫ db b J_0(kθb) Γ(b) (Eq. 18), where J_0 is the Bessel function. Computations are done in terms of an image normalized so that B(0) = 1, a scaled baseline η = b/s_0, and a scaled angular coordinate ψ = θ/θ_d, where θ_d = (ks_0)⁻¹ is the scattered angular width.
Single-slope Power Laws
First we consider phase structure functions of the form D_φ(b) ∝ b^α. The corresponding wavenumber spectra are ∝ (wavenumber)^{−β}, with indices β = α + 2 for β < 4 (e.g. Rickett 1990). Figure 6 (left-hand panel) shows one-dimensional slices through the images associated with four different values of α, including α = 2, which yields a Gaussian-shaped image. The corresponding secondary spectra are shown in Figure 7. Arcs are most prevalent for the smallest value of α, become less so for larger α, and are nonexistent for α = 2. Thus the observation of arcs in pulsar secondary spectra rules out an underlying image with the form of a symmetric Gaussian function, and so puts an upper limit on an inner scale in the medium. Arcs for all of the more extended images, including the α = 1.95 case, appear to be due to the interference of the central "core" (ψ < 1) with the weak "halo" (ψ ≫ 1) evident in Figure 6 (both panels). This interpretation is confirmed by the work of Codona et al. (1986). Their Eq. (43) gives the cross spectrum of scintillations between two frequencies in the limit of large wavenumbers in strong scintillation. When transformed, the resulting secondary spectrum has exactly the form of an unscattered core interfering with the scattered angular spectrum.
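The contrast between these image shapes can be sketched by transforming the visibility Γ(b) = exp[−D_φ(b)/2] on a 2-D grid, a numerical stand-in for the Hankel transform of Eq. 18 (grid sizes and the sampled radius are illustrative choices of this sketch). For α = 2 the image is Gaussian with no halo, while α = 5/3 (Kolmogorov) retains the extended ψ^{−11/3} halo responsible for the arcs.

```python
import numpy as np


def image_profile(alpha, N=256, L=20.0):
    """Scattered image for a structure function D(b) = b**alpha
    (baseline b in units of s0, so that D(1) = 1 as in the text).
    The image is computed as the 2-D FFT of the visibility
    Gamma(b) = exp(-D(b)/2) on an N x N grid of extent L.
    Returns a slice through the image center, normalized to B(0) = 1."""
    db = L / N
    x = (np.arange(N) - N // 2) * db
    bx, by = np.meshgrid(x, x)
    gamma = np.exp(-0.5 * np.hypot(bx, by)**alpha)
    B = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(gamma))))
    B /= B[N // 2, N // 2]
    return B[N // 2]


# alpha = 2: Gaussian image, steep cutoff, no halo.
# alpha = 5/3: Kolmogorov, extended power-law halo at large psi.
b_gauss = image_profile(2.0)
b_kolm = image_profile(5.0 / 3.0)
```

Comparing the two profiles well outside the core shows the Kolmogorov halo standing many orders of magnitude above the Gaussian falloff, which is the core-halo interference condition for arcs discussed in the text.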
Kolmogorov Spectra with an Inner Scale
A realistic medium is expected to have a smallest ("inner") scale in its density fluctuations. Depending on its size, the inner scale may be evident in the properties of pulsar scintillations and angular broadening. Angular broadening measurements, in particular, have been used to place constraints on the inner scale for heavily scattered lines of sight (Moran et al. 1990;Molnar et al. 1995;Wilkinson et al. 1994;Spangler & Gwinn 1990). For scintillations, the inner scale can alter the strength of parabolic arcs, thus providing an important method for constraining the inner scale for lines of sight with scattering measures much smaller than those on which angular broadening measurements have been made. We consider an inner scale, ℓ 1 , that cuts off a Kolmogorov spectrum and give computed results in terms of the normalized inner scale ζ = ℓ 1 /s 0 . The structure function scales asymptotically as D φ ∝ b 2 below ℓ 1 and ∝ b 5/3 above.
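The inner-scale break can be sketched with a smooth interpolating structure function. The specific form below is an assumption chosen for illustration, not the document's exact cutoff model; it reproduces the stated asymptotics, with logarithmic slope 2 well below the inner scale and 5/3 well above it.

```python
import numpy as np


def D_phi(b, inner=1.0):
    """Structure function with an inner-scale break (illustrative
    smooth interpolation): D ~ b**2 for b << inner and
    D ~ b**(5/3) for b >> inner, with b in units of s0."""
    return b**2 * (b**2 + inner**2)**(-1.0 / 6.0)


def log_slope(f, b, eps=1e-4):
    """Numerical logarithmic slope d ln f / d ln b at b."""
    return (np.log(f(b * (1 + eps))) - np.log(f(b * (1 - eps)))) \
        / np.log((1 + eps) / (1 - eps))
```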
In the right panel of Figure 6, we show one-dimensional images for six values of ζ. For ζ ≪ 1, the inner scale is negligible and the image falls off relatively slowly at large ψ, showing the extended halo ∝ ψ^{−11/3}, as for the Kolmogorov spectrum with no inner scale. For ζ = 2, the image falls off similarly to the Gaussian form shown and thus does not have an extended halo. Secondary spectra are shown in Figure 8 for four values of inner scale. As expected, the arcs are strongest for the case of negligible inner scale and become progressively dimmer and truncated as ζ increases and the image tends toward a Gaussian form. The appearance of strong arcs in measured data indicates that the inner scale must be much less than the diffraction scale, or ℓ_1 ≪ 10⁴ km, for lines of sight to nearby pulsars, such as those illustrated in Figures 1-3.
Anisotropic Kolmogorov Spectrum
Figure 9 shows the results for an anisotropic Kolmogorov spectrum with a 3:1 axial ratio (R = 1/3) for different orientation angles of the image with respect to the velocity. In the upper left panel the scattered image is elongated parallel to the velocity, and the arc is substantially enhanced, with a deep valley along the p-axis. As the orientation tends toward normal to the velocity in the other panels, the arc diminishes and essentially disappears. The deep valley in the parallel case occurs because q = θ_{x2} − θ_{x1} = 0 along the p-axis, so p = θ²_{y2} − θ²_{y1}, in which case the secondary spectrum falls steeply, like the brightness distribution along its narrow dimension. By comparison, along the arc itself the secondary spectrum probes the wide dimension and thus receives greater weight. Taking the asymptotic brightness at large angles, B ∝ (Rψ²_x + ψ²_y/R)^{−11/6}, for interference with the undeviated core in Eq. 15 we obtain Eq. 19. Along the p-axis the amplitude of the secondary spectrum in Eq. 19 tends asymptotically to ∝ p^{−7/3}.
Derivation of the Secondary Spectrum
In prior sections we related f ν and f t to the angles of scattering through Eq. 2 and 3. The intensity of the scattered waves at each angle was represented by a scattered brightness function B(θ) with little discussion of the scattering physics that relates B(θ) to the properties of the ISM.
In discussing the scattering physics, we must consider the relevant averaging interval. Ensemble average results can be found in the limits of strong and weak scintillation, as we describe in Appendices C and D. In both cases the analysis depends on the assumption of normal statistics for the scattered field and the use of the Fresnel-Kirchhoff diffraction integral to obtain a relation with the appropriate ensemble average B(θ). However, for strong scattering we adopt the common procedure of approximating the observed spectrum by the ensemble average in the diffractive limit (after removal of the slow changes in pulse arrival time due to changing dispersion measure). In both cases B(θ) is a smooth function of angle (except for the unscattered component in weak scintillation).
An instantaneous angular spectrum is a single realization of a random process. Thus it will exhibit "speckle", i.e. the components of the angular spectrum are statistically independent having an exponential distribution. Goodman and Narayan (1989) called this the snapshot image, which is obtained over times shorter than one diffractive scintillation time (∼minutes). However, we average the secondary spectrum over ∼ 1 hour, which includes many diffractive scintillation timescales but is short compared to the time scale of refractive interstellar scintillation (RISS). Thus we need an image obtained from averaging over many diffractive times, which Goodman and Narayan refer to as the short-term average image. The short-term average image has some residual speckle but is less deeply modulated than the snapshot image. An approximate understanding of the effect of speckle in the image on the secondary spectrum is obtained from §3.4.3, by considering a single speckle as a point component that interferes with an extended component that represents the rest of the image. Thus each speckle can give rise to part of an arclet and multiple speckles will make an assembly of intersecting forward and reverse arclets.
While analytical ensemble average results for the secondary spectrum can be written in the asymptotic limits of weak and strong scattering, one must resort to simulation for reliable results in the intermediate conditions that are typical of many observations and in cases where a short-term average image is appropriate. We use the method and code described by Coles et al. (1995) and analyze the resulting diffraction pattern in the same manner as for the observations. We first create a phase screen at a single frequency whose wavenumber spectrum follows a specified (e.g. the Kolmogorov) form. We then propagate a plane wave through the screen and compute the resulting complex field over a plane at a distance z beyond the screen. The intensity is saved along a slice through the plane of observation, giving a simulated time series at one frequency. The frequency is stepped, scaling the screen phase as for a plasma by the reciprocal of frequency, and the process is repeated. The assembly of such slices models a dynamic spectrum, which is then subject to the same secondary spectral analysis as used in the observations. The results, while calculated for a plane wave source, can be mapped to a point source using the well-known scaling transformations for the screen geometry (i.e. z → D_e, as defined in Eq. 5). While the screen geometry is idealized, the simulation uses an electromagnetic calculation and properly accounts for dispersive refraction, diffraction and interference. Furthermore the finite size of the region simulated can approximate the finite integration time in the observations. A scalar field is used since the angles of scattering are extremely small and magnetoionic effects are negligible in this context (cf. Simonetti et al. 1984).
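A minimal one-dimensional analog of this recipe can be sketched as follows; all parameters are illustrative and this is not the Coles et al. (1995) code. It generates a power-law phase screen, propagates a plane wave with the angular-spectrum (Fresnel) kernel, scales the screen phase as 1/ν across channels as for a cold plasma, and forms the secondary spectrum of the resulting dynamic spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)
Nx, Nf = 1024, 64          # transverse samples, frequency channels (toy sizes)
dx = 0.2                   # transverse grid step (arbitrary units)
nu0, frac_bw = 1.0, 0.05   # center frequency and fractional bandwidth

# --- Kolmogorov-like random phase screen at the center frequency ---
kap = 2 * np.pi * np.fft.fftfreq(Nx, dx)     # transverse wavenumber
P = np.zeros(Nx)
P[1:] = np.abs(kap[1:])**(-11.0 / 3.0)       # power-law spectrum (no DC term)
spec = np.sqrt(P) * (rng.standard_normal(Nx) + 1j * rng.standard_normal(Nx))
phi0 = np.fft.ifft(spec).real
phi0 *= 5.0 / phi0.std()                     # rms phase of 5 rad at nu0

# --- Propagate a plane wave at each frequency; plasma phase ~ 1/nu ---
z_over_k0 = 2.0                              # sets the Fresnel scale (toy)
nus = nu0 * (1.0 + frac_bw * (np.arange(Nf) / Nf - 0.5))
dyn = np.empty((Nx, Nf))
for j, nu in enumerate(nus):
    phi = phi0 * (nu0 / nu)                  # cold-plasma scaling of phase
    kernel = np.exp(-0.5j * kap**2 * z_over_k0 * (nu0 / nu))  # z/(2k), k ~ nu
    E = np.fft.ifft(np.fft.fft(np.exp(1j * phi)) * kernel)
    dyn[:, j] = np.abs(E)**2                 # intensity slice vs position

# Frozen screen: the transverse axis maps to time as the pattern sweeps
# past the observer.  Secondary spectrum of the mean-subtracted dynamic
# spectrum:
S2 = np.abs(np.fft.fft2(dyn - dyn.mean()))**2
```

Two properties follow by construction and serve as sanity checks: flux is conserved on average (the screen is unit-modulus and the propagation kernel is unimodular), and S_2 is symmetric through the origin because the dynamic spectrum is real.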
Interstellar Conditions Responsible for Arcs
In §3 we found two conditions that emphasize the arcs. The first is the interference between an undeviated wave and scattered waves, and the second is the enhancement of arcs when the waves are scattered preferentially parallel to the scintillation velocity. We now discuss how these conditions might occur.
Core-Halo Conditions.
The basic arc, p = q², is formed by interference of a core of undeviated or weakly deviated waves with widely scattered waves. There are two circumstances where a significant fraction of the total flux density can come from near θ = 0. One is in strong scintillation, when the "central core" of the scattered brightness interferes with a wider low-level halo, as discussed in §4.2 for a Kolmogorov density spectrum and other power-law forms with β < 4 and negligible inner scale. In this case the core of the scattered brightness is not a point source, merely much more compact than the far-out halo radiation. Thus the arc is a smeared version of the form in Eq. 9, which has no power outside the basic arcline. Examples were shown in Figures 7 and 8.
The other case is when the scattering strength is low enough that a fraction of the flux density is not broadened significantly in angle. The requirement on such an "unscattered core" is simply that the differential delay over the core must not exceed, say, a quarter of the wave period. The core is then that portion of the brightness distribution within the first Fresnel zone, r_F, and its electric-field components sum coherently. Thus the core component can be significant even if the overall scattered brightness function is much wider than the first Fresnel zone.
The relative strength of the core to the remainder of the brightness distribution can be defined in terms of the "strength of scintillation". We define this as m_B, the normalized rms intensity (scintillation index) under the Born approximation. It can be written in terms of the wave structure function D_φ(r) as m_B² = 0.77 D_φ(r_F) for a simple Kolmogorov spectrum, where D_φ(r) = (r/s_0)^{5/3}, s_0 is the field coherence scale, r_F = √(D_e/k) is the Fresnel scale, and k = 2π/λ. Here θ_d = 1/(ks_0) defines the scattering angle. This gives m_B² = 0.77(r_F/s_0)^{5/3}; thus it is also related to the fractional diffractive bandwidth ∼ (s_0/r_F)². Note that a distinction should be made between strength of scattering and strength of scintillation. For radio frequencies in the ISM we always expect that the overall rms phase perturbations will be very large compared to one radian, albeit over relatively large scales, which corresponds to strong scattering. In contrast, strong scintillation corresponds to a change in phase of more than one radian across a Fresnel scale.
[Figure 10. Left: primary dynamic spectra, with grey scale proportional to the intensity from zero (white) to three times the mean (black). Right: secondary spectra, with grey scale proportional to the logarithm of the spectral density over a dynamic range of 5.5 decades (top) and 6.5 decades (bottom). Note the steep decrease outside the theoretical arcline (p = q², shown dotted).]
In weak scintillation (but strong scattering) the fraction of the flux density in the core is ∼ (θ_F/θ_d)² = (s_0/r_F)² ∼ m_B^{−2.4}. Figure 10 (top panels) shows a simulation with m_B² = 0.1, in which S_2 drops steeply outside the arc p = q². The analysis of Appendix D shows that in weak scintillation S_2 should be analyzed as a function of wavelength rather than frequency, because the arcs are more clearly delineated when the simulations are Fourier analyzed versus wavelength. We have also simulated weak scintillation in a nondispersive phase screen and find similar arcs. In strong scintillation there is a small fractional range of frequencies and the distinction between wavelength and frequency becomes unimportant. Figure 10 shows that the secondary spectrum in weak scintillation has a particularly well defined arc with a sharp outer edge. This is because it is dominated by the interference between scattered waves and the unscattered core, as described by Eq. (15) with a point component at the origin. We can ignore the mutual interference between scattered waves since they are much weaker than the core. The result is that the secondary spectrum is analogous to a hologram in which the unscattered core serves as the reference beam. The secondary spectrum can be inverted to recover the scattered brightness function using Eq. (15), which is the analog of viewing a holographic image. The inversion, however, is not complete, in that there is an ambiguity in the equation between positive and negative values of θ_y. Nevertheless, the technique opens the prospect of mapping a two-dimensional scattered image from observations with a single dish at a resolution of milliarcseconds.
As the scintillation strength increases the core becomes less prominent, and the parabolic arc loses contrast. However, we can detect arcs at very low levels and they are readily seen at large values of m 2 B . Medium strong scintillation is shown in Figure 10 (bottom panels), which is a simulation of a screen with an isotropic Kolmogorov spectrum with m 2 B = 10 for which s 0 = 0.22r F . The results compare well with both the ensemble average, strong scintillation computations for a Kolmogorov screen in §4 and with several of the observations shown in Figure 1. We note that several of the observations in §2 show sharp-edged and symmetric parabolic arcs, which appear to be more common for low or intermediate strength of scintillation (as characterized by the apparent fractional diffractive bandwidth). This fits with our postulate that the arcs become sharper as the strength of scintillation decreases. In weak scintillation the Born approximation applies, in which case two or more arcs can be caused by two or more screens separated along the line of sight. We have confirmed this by simulating waves passing through several screens, each of which yields a separate arc with curvature as expected for an unscattered wave incident on each screen. This is presumably the explanation for the multiple arcs seen in Figure 3 for pulsar B1133+16.
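The numbers quoted here cross-check against the definition m_B² = 0.77(r_F/s_0)^{5/3} given above: inverting that relation for m_B² = 10 recovers s_0 ≈ 0.22 r_F, the value used in the medium-strong simulation.

```python
def born_index_sq(s0_over_rF):
    """Born scintillation index squared: m_B^2 = 0.77 * D_phi(r_F)
    with D_phi(r) = (r/s0)**(5/3), i.e. 0.77 * (r_F/s0)**(5/3)."""
    return 0.77 * s0_over_rF**(-5.0 / 3.0)


def coherence_scale(mB2):
    """Invert for s0/r_F given the Born scintillation index squared."""
    return (0.77 / mB2)**(3.0 / 5.0)


# Cross-check against the simulation quoted in the text:
# m_B^2 = 10 corresponds to s0 of roughly 0.22 r_F.
s0 = coherence_scale(10.0)
```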
In terms of normalized variables p, q the half-power widths of S 2 are approximately unity in strong scattering and yet we can see the arcs out to q ≫ 1. As we noted earlier this corresponds to scattering angles well above the diffractive angle θ d , and so probes scales much smaller than those probed by normal analysis of diffractive ISS. Our simulations confirm that with an inner scale in the density spectrum having a wavenumber cutoff κ inner , the arc will be reduced beyond where q ∼ κ inner s 0 . However, to detect such a cut-off observationally requires very high sensitivity and a large dynamic range in the secondary spectrum.
Anisotropy, Arclets and Image Substructure
The enhancement in arc contrast when the scattering is extended along the direction of the effective velocity was discussed in §3. The enhancement is confirmed by comparing the m 2 B = 10 simulations in Figures 10 and 11. In the anisotropic case the arc consists of finely-spaced parallel arcs which are in turn crossed by reverse arclets, somewhat reminiscent of the observations. The discussion of §3.4.3 suggests that these are caused by the mutual interference of substructure in the angular spectrum B(θ). The question is, what causes the substructure? One possibility is that it is stochastic speckle-like substructure in the image of a scatter-broadened source. Alternatively, there could be discrete features in B(θ), ("multiple images") caused by particular structures in the ISM phase.
In our simulations, the screen is stochastic with a Kolmogorov spectrum and so it should have speckle but no deterministic multiple images. The diffractive scale, s_0, is approximately the size of a stationary phase point on the screen and, also, approximately the size of a constructive maximum at the observer plane. There are thus ∼ (r_F/s_0)⁴ such stationary phase points across the two-dimensional scattering disk that contribute to an individual point at the observer plane. This large number of speckles (∼ 400 in these simulations) has the effect of averaging out details in the secondary spectrum, particularly in the isotropic scattering case.
The effect of anisotropy in the coherence scale is to broaden the image along one axis with respect to the other. An image elongated by a ratio of R = θ_x/θ_y ≈ s_{0,y}/s_{0,x} > 1 would have N_x ≈ (r_F/s_{0,x})² speckles in the x-direction but only N_y ≈ (r_F/Rs_{0,x})² in the y-direction. In short, the image would break into a line of many elliptical speckles (elongated in y) distributed predominantly along x. In comparing the anisotropic and isotropic simulations (upper Figure 11 and lower Figure 10), it appears that the arclets are more visible under anisotropy. We suggest that the changed substructure in the image is responsible. Substructure may consist of short-lived speckles or longer-lived multiple subimages whose relative contributions depend on the properties of the scattering medium. In the simulations the arclets appear to be independent from one realization to the next, as expected for a speckle phenomenon. However, in some of the observations arclets persist for as long as a month (see §2) and exhibit a higher contrast than in the simulation. These require long-lived multiple images in the angular spectrum and so imply quasi-deterministic structures in the ISM plasma.
In summary we conclude that the occasional isolated arclets (as in Figure 2) must be caused by fine substructure in B(θ). Whereas some of these may be stochastic, as in the speckles of a scattered image, the long-lived arclets require the existence of discrete features in the angular spectrum, which cannot be part of a Kolmogorov spectrum. These might be more evidence for discrete structures in the medium at scales ∼ 1 AU that have been invoked to explain fringing episodes (e.g. Hewish 1997) and, possibly, extreme scattering events (ESEs) (Fiedler et al. 1987; Romani, Blandford & Cordes 1987) in quasar light curves.
[Figure 11. As in Figure 10, with m_B² = 10. Top: anisotropic scattering (axial ratio 4:1, spectrum elongated along the velocity direction); the result is higher contrast in the arc, which is seen to consist of nearly parallel arclets and is crossed by equally fine-scale reverse arcs. Bottom: isotropic scattering with a linear refractive phase gradient of 2 radians per diffractive scale; the result is a shift in the parabola and an asymmetry in its strength.]
Asymmetry in the Secondary Spectrum
As summarized in §2, the observed arcs are sometimes asymmetric as well as exhibiting reverse arclets (e.g. Figure 2). A scattered brightness that is asymmetric in θ_x can cause S_2 to become asymmetric in q. We do not expect a true ensemble average brightness to be asymmetric, but the existence of large-scale gradients in phase will refractively shift the image, with the frequency dependence of a plasma.
Simulations can be used to study refractive effects, as in Figure 11 (bottom panels), which shows S_2 for a screen with an isotropic Kolmogorov spectrum and a linear phase gradient. The phase gradient causes sloping features in the primary dynamic spectrum and asymmetry in the secondary spectrum. The apex of the parabolic arc is shifted to a negative q value and the arc becomes brighter for positive q. Thus we suggest refraction as the explanation of the occasional asymmetry observed in the arcs. Relatively large phase gradients are needed to give as much asymmetry as is sometimes observed. For example, a gradient of 2 radians per s_0, which shifts the scattering disc by about its diameter, was included in the simulation of Figure 11 to exemplify image wandering. In a stationary random medium with a Kolmogorov spectrum, such large shifts can occur depending on the outer scale, but they should vary only over times long compared with the refractive scintillation timescale. This is a prediction that can be tested.
In considering the fringe frequencies in Appendix A we only included the geometric contribution to the net phase and excluded the plasma term. We have redone the analysis to include a plasma term with a large scale gradient and curvature in the screen phase added to the small scale variations which cause diffractive scattering. We do not give the details here and present only the result and a summary of the issues involved.
We analyzed the case where there are large-scale phase terms following the plasma dispersion law, which shift and stretch the unscattered ray, as in the analysis of Cordes, Pidwerbetsky and Lovelace (1986). In the absence of diffractive scattering these refractive terms create a shifted stationary phase point (i.e. ray center) at θ and a weak modulation in amplitude due to focusing or defocusing over an elliptical region. With the diffractive scattering also included, we find that the fringe frequency q is unaffected by refraction but the delay p becomes modified as in Eq. 20, where C is a 2 × 2 matrix that describes the quadratic dependence of the refracting phase about the image center. A related question is whether the shift in image position due to a phase gradient also shifts the position of minimum delay, in the fashion one might expect from Fermat's principle that a ray path is one of minimum delay. However, since Fermat's minimum delay is a phase delay and our variable f_ν or p is a group delay, its minimum position is shifted in the opposite direction by a plasma phase gradient. This is shown by the first two terms, i.e. (θ_2 + θ)², which goes to zero at θ_2 = −θ. However, we do not pursue this result here, since the formulation of §3 does not include the frequency dependence of the scattered brightness function, which would be required for a full analysis of the frequency dependence of the plasma scattering. It is thus a topic for future theoretical study; meanwhile the simulations provide the insight that plasma refraction can indeed cause pronounced asymmetry in the arcs.
Arcs from Extended Sources
Like other scintillation phenomena, arcs will be suppressed if the source is extended. In particular, the lengths of arcs depend on the transverse spatial extent over which the scattering screen is illuminated by coherent radiation from the source. For a source of finite angular size θ_ss as viewed at the screen, the incident wave field is spatially incoherent on scales larger than b_ss ∼ λ/θ_ss. An arc measured at a particular fringe rate (f_t) represents the interference fringes of waves separated by a baseline b = (D − D_s)(θ_2 − θ_1) at the screen, corresponding to a fringe rate f_t = b · V_⊥/(λD_e). The fringes are visible only if the wave field is coherent over b and are otherwise suppressed. Thus arcs in the secondary spectrum S_2 will be cut off for fringe rates f_t ≳ f_{t,sou} (Eq. 21). Here we distinguish between the source size θ_sou viewed at the observer and its size viewed at the screen, θ_ss = θ_sou/s. For ISS of a distant extragalactic source, the factor s approaches unity but can be much smaller for a Galactic pulsar. Eq. 21 indicates that longer arcs are expected for more compact sources, larger effective velocities, and scattering screens nearer to the observer. Equivalently, using f_ν = af_t², the arc length can be measured along the f_ν axis with a corresponding cut-off (Eq. 22), where θ_Fr = [kD(1 − s)]^{−1/2} is the effective Fresnel angle.
It is useful to measure the arc's extent in f_t in terms of the characteristic DISS time scale, ∆t_d ∼ s_0/V_⊥, and to relate the characteristic diffraction scale s_0 to the isoplanatic angular scale, θ_iso ∼ s_0/(D − D_s). The product of maximum fringe rate and DISS time scale is then given by Eq. 23. The isoplanatic angular scale defines the source size that will quench DISS by ∼ 50%. We note that Eq. 23 is consistent with the extended-source result for scintillation modulations (Salpeter 1967). Thus the length of the arc along the f_t axis, in units of the reciprocal DISS time scale, is a direct measure of the ratio of isoplanatic angular scale to source size. The long arcs seen therefore demonstrate that emission regions are much smaller than the isoplanatic scale, which is typically ∼ 10⁻⁶ arcsec for measurements of dynamic spectra. The theoretical analysis under weak scintillation conditions is given in Appendix D.2, where it is seen that the squared visibility function provides a cut-off to the point-source secondary spectrum S_2. A remarkable result from this analysis is that measurements of S_2 of an extended source can be used, in principle, to estimate the squared visibility function of the source in two dimensions, provided the underlying secondary spectrum, S_2, for a point source is already known.
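The coherence argument above can be turned into a small calculator. The cutoff used below, f_{t,sou} ∼ V_⊥/(D_e θ_ss), is a reconstruction of the scaling implied by b_ss ∼ λ/θ_ss and f_t = b·V_⊥/(λD_e); the exact numerical coefficient of Eq. 21 is not reproduced in this excerpt, and all numerical inputs are illustrative.

```python
def fringe_rate_cutoff(theta_sou, s, D_e, V_perp):
    """Approximate maximum fringe rate for a source of angular size
    theta_sou (radians, as seen by the observer).

    Coherence argument from the text: waves are incoherent over
    baselines b > b_ss ~ lambda/theta_ss, with theta_ss = theta_sou/s,
    and f_t = b * V_perp / (lambda * D_e), so the wavelength cancels:
    f_t,sou ~ V_perp / (D_e * theta_ss).  (Scaling only; the exact
    coefficient of Eq. 21 is not reproduced here.)
    D_e: effective distance (m), V_perp: effective speed (m/s)."""
    theta_ss = theta_sou / s
    return V_perp / (D_e * theta_ss)


# Longer arcs for more compact sources and larger effective velocities:
f1 = fringe_rate_cutoff(1e-12, 0.5, 1e19, 1e5)   # compact source
f2 = fringe_rate_cutoff(2e-12, 0.5, 1e19, 1e5)   # source twice as large
```

Halving the source size doubles the fringe-rate cutoff, which is the sense of the statement that long arcs require emission regions much smaller than the isoplanatic scale.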
The effects of an extended source can also be analyzed in asymptotic strong scintillation. This requires the same type of analysis as for the frequency decorrelation in scintillation of an extended source. Chashei and Shishov (1976) gave the result for a medium modeled by a square law structure function of phase. Codona et al. (1986) gave results for screens with a power law spectrum of phase in both weak and strong scintillation. We have used their analysis to obtain an expression for the secondary spectrum in the strong scattering limit. The result is that the source visibility function appears as an additional factor |V | 2 inside the brightness integral of Eq. (8) (with arguments depending on p, q and other quantities).
It is clear that the detection and measurement of arcs from pulsars can put constraints on the size of their emitting regions. This is intimately related to estimating source structure from their occasional episodes of interstellar fringing (e.g. Cordes and Wolszczan 1986;Wolszczan and Cordes 1987;Smirnova et al. 1996;Gupta, Bhat and Rao 1999). These observers detected changes in the phase of the fringes versus pulsar longitude, and so constrained any spatial offset in the emitting region as the star rotates. They essentially measured the phase of the "cross secondary spectrum" between the ISS at different longitudes, at a particular f ν , f t . Clearly one could extend this to study the phase along an arc in f ν , f t . Such studies require high signal-to-noise ratio data with time and frequency samplings that resolve scintillations in dynamic spectra, which can be obtained on a few pulsars with the Arecibo and Green Bank Telescopes. The future Square Kilometer Array with ∼ 20 times the sensitivity of Arecibo would allow routine measurements on large samples of pulsars.
ISS has been seen in quasars and active galactic nuclei (sometimes referred to as intraday variability), but few observations have had sufficient frequency coverage to consider the dynamic spectrum and test for arcs. However, spectral observations have been reported for the quasar J1819+3845 (de Bruyn and Macquart, in preparation). A preliminary analysis of these data by Rickett et al. (in preparation) showed no detectable arc, from which they set a lower limit on the source size. New observations over a wide, well-sampled range of frequencies will allow better use of this technique.
DISCUSSION AND CONCLUSIONS
It is evident that we have only begun to explain the detailed structures in the parabolic arcs observed in pulsar scintillation. However, it is also clear that the basic phenomenon can be understood from a remarkably simple model of small angle scattering from a thin phase-changing screen, and does not depend on the dispersive nature of the refractive index in the screen. Interference fringes between pairs of scattered waves lie at the heart of the phenomenon. The f ν coordinate of the secondary spectrum is readily interpreted as the differential group delay between the two interfering waves, and the coordinate f t is interpreted as their fringe rate or, equivalently, the differential Doppler frequency, which is proportional to the difference in angles of scattering projected along the direction of the scintillation velocity.
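The interpretation of the two coordinates can be summarized in two schematic relations, a sketch consistent with the paper's Eqs. 2 and 3 up to geometric factors, which are suppressed here:

```latex
f_\nu \;\propto\; \theta_2^{\,2} - \theta_1^{\,2}, \qquad
f_t   \;\propto\; \left(\boldsymbol{\theta}_2 - \boldsymbol{\theta}_1\right)\cdot \mathbf{V}_\perp \,/\, \lambda .
```

For interference with an undeviated wave ($\theta_1 = 0$) and scattering along $\mathbf{V}_\perp$, eliminating $\theta_2$ between the two relations gives $f_\nu = a f_t^2$, the parabolic arc with apex at the origin discussed below.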
We have developed the theory by modeling the interference for an arbitrary angular distribution of scattering. We have given the ensemble average secondary spectrum in asymptotic strong and weak scintillation, and we have used a full phase screen simulation to test the results under weak and intermediate strength of scintillation. The results are mutually consistent.
A simple parabolic arc with apex at the origin of the f ν , f t plane arises most simply in weak scintillation as the interference between a scattered and an "unscattered" wave. The secondary spectrum is then of second order in the scattered field and maps to the two-dimensional wavenumber spectrum of the screen phase, though with an ambiguity in the sign of the wavenumber perpendicular to the velocity. Remarkably, this gives a way to estimate two-dimensional structure in the scattering medium from observations at a single antenna, in a fashion that is analogous to holographic reconstruction.
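The holographic picture above can be illustrated with a toy numerical model (not the paper's simulation code): a unit unscattered wave interferes with a few weak waves scattered at angles θ, each carrying a differential Doppler rate ∝ θ and a group delay ∝ θ². The squared 2-D FFT of the resulting dynamic spectrum then concentrates power along a parabola. All amplitudes and scalings below are arbitrary illustrative choices.

```python
import numpy as np

# Toy weak-scintillation model: a unit unscattered wave plus a few faint
# scattered waves. In suitable units a wave scattered at angle theta carries
# a fringe rate f_t ~ theta (for motion along x) and a group delay
# f_nu ~ theta**2, so secondary-spectrum power falls on a parabola.
rng = np.random.default_rng(0)
n_t, n_nu = 256, 256
t = np.arange(n_t)                         # time samples (arb. units)
nu = np.arange(n_nu)                       # frequency channels (arb. units)
thetas = rng.uniform(-0.3, 0.3, size=8)    # scattering angles (arb. units)

field = np.ones((n_nu, n_t), dtype=complex)    # the unscattered wave
for th in thetas:
    doppler = th * 0.1        # fringe rate  f_t  proportional to theta
    delay = th * th * 2.0     # group delay  f_nu proportional to theta**2
    phase = 2 * np.pi * (doppler * t[None, :] + delay * nu[:, None])
    field += 0.05 * np.exp(1j * phase)     # weak scattered wave

dyn = np.abs(field) ** 2                   # dynamic spectrum I(nu, t)
dyn -= dyn.mean()
sec = np.abs(np.fft.fftshift(np.fft.fft2(dyn))) ** 2   # secondary spectrum
```

Each scattered wave contributes an interference peak at (f_t, f_ν) = (0.1 θ, 2 θ²), so the peaks satisfy f_ν = 200 f_t² and trace out a parabolic arc in `sec`.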
In strong scattering the parabolic arcs become less distinct since the interference between two scattered waves has to be summed over all possible angles of scattering, making it a fourth order quantity in the scattered field. Nevertheless, the arc remains visible when the scattered brightness has a compact core and a "halo" of low level scattering at relatively large angles. Media with shallow power-law wavenumber spectra (including the Kolmogorov spectrum) have such extended halos, and the detection of arcs provides a powerful probe of structures ten or more times smaller than those probed by normal interstellar scintillation and can thus be used to test for an inner scale cut-off in the interstellar density spectrum.
The prominence of arcs depends on the isotropy of the scattering medium as well as on the slope and inner scale of its wavenumber spectrum. Arcs become more prominent when the scattering is anisotropic and enhanced preferentially along the scintillation velocity. However, in simulations, prominent arcs are seen over quite a wide range of angles about this orientation. Scattering that is enhanced parallel to the scintillation velocity corresponds to spatial structures that are elongated transverse to the velocity vector. Thus the common detection of arcs may provide evidence for anisotropy in the interstellar plasma, and with careful modeling observations should yield estimates for the associated axial ratios.
There are several details of the observed arcs for which we have only tentative explanations. We can understand the existence of discrete reverse arcs as due to discrete peaks in the scattered brightness interfering with an extended halo. Such isolated peaks are to be expected in short term integrations due to speckle in the scattered image. However, observations with only a few isolated reverse arcs -and, particularly, arclets that persist for days to weeks -imply only a few discrete peaks in the scattered image, while normal speckle is expected to give multiple bright points with a much higher filling factor. This is a topic for further investigation.
Another observational detail is that on some occasions the arc power distribution is highly asymmetrical in fringe frequency. This can only be caused by asymmetry in the scattered brightness relative to the velocity direction. Our proposed explanation is that it is due to large-scale gradients in the medium that cause the image to be refractively shifted. The simulations demonstrate that this explanation is feasible, but considerably more work needs to be done to interpret what conditions in the ISM are implied by the unusual asymmetric arcs.
Our theoretical analysis is based on a thin screen model, and future theoretical work is needed on the arc phenomena with multiple screens (such as might cause the multiple arcs in Figure 3) and with an extended scattering medium. While the extension to an extended medium or multiple screens is relatively straightforward in weak scintillation, it is more difficult in strong scintillation. Adding the effect of a source with a finite diameter is also important since pulsar emission regions may have finite size and the detection of arcs from quasars provides the prospect of a more powerful probe of their angular structure than from simple analysis of their scintillation light curves. In addition to these extensions of our analysis, future work will include a detailed study of the inverted arclet phenomenon, exploiting the arc phenomenon to determine the anisotropy and inner scale of scattering irregularities; and using the multiple arc phenomenon to aid modeling of the local interstellar medium, for which the weak scattering regime is especially relevant.
We acknowledge helpful discussions with D. Melrose and M. Walker. DRS wishes to thank Oberlin students H. Barnor, D. Berwick, A. Hill, N. Hinkel, D. Reeves, and A. Webber for assistance in the preparation of this work. This work was supported by the National Science Foundation through grants to Cornell (AST 9819931 and 0206036), Oberlin (AST 0098561) and UCSD (AST 9988398). This work was also supported by the National Astronomy and Ionosphere Center, which operates the Arecibo Observatory under a cooperative agreement with the NSF. The Australia Telescope National Facility provided hospitality for DRS during preparation of this paper.
APPENDIX

A. FRINGE FREQUENCIES FROM A PLASMA SCREEN
Consider the following thin-screen geometry: a point source at (r s , 0), a thin screen in the plane (r ′ , D s ) and an observer at (r, D), where r s , r ′ and r are two dimensional vectors. The screen changes the phase of incident waves and thus diffracts and refracts radiation from the source.
The Kirchoff diffraction integral (KDI) gives the wave field at r (Eq. A1), using the effective distance D e as defined in Eq. 5, where Φ = Φ g + φ d is the sum of the geometric phase Φ g and a diffractive phase φ d (r ′ ) that scatters radiation. Frequency scalings are Φ g ∝ ν and φ d ∝ ν −1 . The secondary spectrum is the distribution of conjugate frequencies f ν , f t produced by all pairs of exit points from the screen. Consider the relative phase, ∆Φ = Φ 2 − Φ 1 , between two components of the radiation field that exit the phase screen at two different points, r ′ 1,2 , that correspond to deviation angles as viewed by the observer, θ 1,2 = r ′ 1,2 /(D − D s ). The combined radiation from the two points will oscillate as a function of time, frequency, and spatial location. For a fixed location on axis (r = 0) and using the effective velocity (Eq. 4) to map spatial offsets at the screen to time, we can expand ∆Φ in time and frequency offsets; its derivatives (using ∂ t ≡ ∂/∂t, etc.) give the fringe frequencies (Eq. A5). Here we use only the geometric phase to calculate the fringe frequencies. The result is given by equations (2 and 3) in terms of the two apparent angles θ 1,2 and the effective velocity V ⊥ (equation 4). Since the delay is defined in terms of the frequency derivative of the phase, f ν is the difference in the group delay. While the distinction is unimportant for the geometric phase, it makes a difference for the dispersive plasma contributions. When the analysis is done including the derivatives in these plasma terms, the equation for f ν is modified but there is no change in the equation for f t , as mentioned at the end of §5.2.
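The expansion of the relative phase referred to above has the standard first-order form; the following is a sketch consistent with the surrounding text:

```latex
\Delta\Phi(t+\delta t,\ \nu+\delta\nu) \;\approx\; \Delta\Phi(t,\nu)
  \;+\; 2\pi f_t\,\delta t \;+\; 2\pi f_\nu\,\delta\nu ,
\qquad
f_t = \frac{1}{2\pi}\,\partial_t\,\Delta\Phi ,
\quad
f_\nu = \frac{1}{2\pi}\,\partial_\nu\,\Delta\Phi .
```

When only the geometric phase is retained, these derivatives reproduce the fringe rate and differential group delay of equations (2 and 3) in the main text.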
In appendices C and D a derivation is given of the secondary spectrum in the limits of strong and weak scintillation, respectively. In the strong limit we find that it is given explicitly by the double integral over the observed angles of scattering in equation (8). We note that the integrand is the scattered brightness function, defined as a spectrum of plane waves. In contrast the discussion given above in terms of the KDI is based on spherical waves emanating from points in the screen. While one cannot equate the apparent angular position of points on the screen, θ 1,2 = r ′ 1,2 /(D − D s ), to the angles of arrival of plane waves components in the angular spectrum, one can obtain identical equations for f ν and f t by considering an integral over plane waves emanating from a screen, which is illuminated by a point source. The method is similar to the KDI analysis above, except that one expands the propagation phase of each plane wave component as a function of frequency and time. Its derivatives give f ν and f t , precisely, as in equations (2 and 3).
B. FINITE RESOLUTION AND NUMERICAL ISSUES FOR THE SECONDARY SPECTRUM
Empirically, the secondary spectrum is estimated over a finite total bandwidth B and integration time T with finite resolution in frequency and time. These in turn set finite resolutions in f ν and f t and so in p and q. The integral expressions for the secondary spectrum such as Eq. (8) diverge at the origin of the p-q plane since they ignore resolution effects.
Finite resolution in p can be included by replacing the Dirac delta functions in Eq. (8) by rectangular functions of unit area whose limiting forms are delta functions. Performing the integrations over dθ 2 then yields a form for S 2 in which U = (θ 1y 2 + p − q 2 − 2qθ 1x ) and the summation is over the two ideal solutions θ 2y = ± √ U ; we have ignored the variation in B over the range ∆p near each solution. H ′ is a modified unit step function with a transition width ∆p. As ∆p → 0, the factors involving ∆p tend toward a delta function. For finite ∆p, however, S 2 (0, 0) remains finite.
In terms of the bandwidth B and time span T , the resolutions in p and q (when angles are normalized by the diffraction angle θ d ) are set by N t and N ν , the number of distinct 'scintles' along the time and frequency axes, respectively (here, ǫ ∼ 0.2 is a constant that quantifies the filling factor of scintles in the dynamic spectrum; e.g. Cordes & Lazio 2002). The resolutions of p and q are therefore determined by how many scintles are contained in the dynamic spectrum, which in turn determines the statistical robustness (through N −1/2 effects) of any analysis of a particular dynamic spectrum. For typical dynamic spectra, N t and N ν are each ∼ 10, so ∆p and ∆q are each ∼ 0.1. Observationally, the individual channel bandwidth and sampling time are also important, since they determine the Nyquist points in p and q.
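The scintle-counting argument above can be sketched numerically. The explicit formulas for ∆p and ∆q did not survive extraction, so this hypothetical helper assumes the simple forms N ν = ǫB/∆ν d , N t = ǫT /∆t d and ∆p = 1/N ν , ∆q = 1/N t , which are consistent with the quoted numbers (N ∼ 10 giving ∆ ∼ 0.1) and with the filling factor ǫ ∼ 0.2:

```python
def resolutions(B, T, dnu_d, dt_d, eps=0.2):
    """Return (N_nu, N_t, dp, dq) for total bandwidth B, time span T,
    diffractive bandwidth dnu_d, and diffractive time scale dt_d.
    Assumes N = eps * (span / characteristic scale) and dp = 1/N_nu,
    dq = 1/N_t; illustrative only."""
    N_nu = max(1.0, eps * B / dnu_d)   # scintles across the band
    N_t = max(1.0, eps * T / dt_d)     # scintles across the time span
    return N_nu, N_t, 1.0 / N_nu, 1.0 / N_t

# e.g. a 50 MHz band with 1 MHz scintles and a 1 hr span with 72 s scintles
N_nu, N_t, dp, dq = resolutions(B=50e6, T=3600.0, dnu_d=1e6, dt_d=72.0)
```

With these example numbers both axes contain ten scintles, giving ∆p = ∆q = 0.1 as in the text.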
In computing the secondary spectrum from the integral in Eq. (9) one also needs to resolve the (integrable) singularity U −1/2 . This can be achieved by changing variables to s x,y = (θ 2x,y + θ 1x,y )/2 |q| and d x,y = (θ 2x,y − θ 1x,y )/2 |q|, and integrating over the delta functions. Then letting x = s y and y = −d y we obtain where sgn(q) is the sign of q. Note that symmetry of S 2 upon letting p → −p and q → −q is demonstrated by also letting y → −y.
C. THE SECONDARY SPECTRUM IN THE STRONG SCATTERING LIMIT

Here we present the secondary spectrum expected in the asymptotic limit of strong scintillation from a single phase screen. The intensity from a point source recorded at position r and frequency ν is the squared magnitude of the phasor ε for the electric field: I(r, ν) = |ε(r, ν)| 2 , where the dependence on source position r s is suppressed. From the dynamic spectrum of a pulsar we can define the correlation of intensity versus offsets in both space and frequency. After subtracting the mean, ∆I = I − <I>, the correlation function R ∆I is defined in Eq. C1. Under asymptotic conditions of strong scintillation the phasor ε becomes a Gaussian random variable with zero mean and random phase, the real and imaginary parts of ε are uncorrelated, and the fourth moment can be expanded in products of second moments. It follows that R ∆I reduces to products of second moments (Eq. C2), where Γ(r, ∆r, ν, ν + ∆ν) = < ε(r, ν)ε * (r + ∆r, ν + ∆ν) > .
When the field ε is due to a point source scattered by a phase screen at distance D s from the source and D − D s from the observer (with D the pulsar distance), the second moment is a product of three factors (see LR99). Here Γ point is simply due to the spherical-wave nature of a point source, and is essentially unity for typical pulsar observations; Γ R is due to the wandering of the dispersive travel time about its ensemble average as the electron column density changes, Γ R = exp(−π 2 ∆ν 2 τ 2 R ); and Γ D is the diffractive second moment, which is given in terms of the scattered angular spectrum B(θ) at the radio frequency ν by Eq. C5 (recall that s = D s /D). The phase term in the first exponential of Eq. C5 is proportional to the extra delay θ 2 D(1 − s)/(2cs) for waves arriving at the observer at an angle θ; this quadratic relation between time delay and angle of arrival gives rise to the quadratic features in the secondary spectrum. In single-dish pulsar observations the spatial offset ∆r is sampled by a time offset t times the relative velocity of the diffraction pattern past the observer. Such observations are in a short-term regime in which the dispersive delay is essentially constant over the integration time, and so observations are characterized by Γ D , the diffractive second moment at ∆r = V ⊥ tD/D s ; the distance ratio is needed since V ⊥ is the effective screen velocity. The secondary spectrum is the double Fourier transform of the correlation function R ∆I . In the short-term regime, this transform is evaluated using Eq. C5 for Γ D in place of Γ in Eq. C2. Integration over t and ∆ν and conversion to the scaled variables of §3 yields Eq. 8 in the main text.
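In practice, for a single observed dynamic spectrum the double Fourier transform of R ∆I is computed equivalently as the squared magnitude of the 2-D FFT of the mean-subtracted dynamic spectrum (the Wiener-Khinchin relation). A minimal numerical check of that equivalence, with a random array standing in for the dynamic spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
dyn = rng.standard_normal((64, 128))       # stand-in dynamic spectrum I(nu, t)
dI = dyn - dyn.mean()                      # mean-subtracted fluctuations

# Route 1: secondary spectrum directly as |FFT2 of dI|^2.
S2_direct = np.abs(np.fft.fft2(dI)) ** 2

# Route 2: form the (circular) autocorrelation R_dI first, then take its
# double FFT; by Wiener-Khinchin the two routes agree for one realization.
R = np.fft.ifft2(np.abs(np.fft.fft2(dI)) ** 2).real   # circular autocorrelation
S2_via_R = np.fft.fft2(R).real
```

Both routes yield the same array up to floating-point error, which is why observational papers quote the |FFT|² estimator while the theory is phrased in terms of the correlation function.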
D. THE WEAK SCINTILLATION LIMIT
We now examine the secondary spectrum in the limit of weak scintillation due to a plasma phase screen with a power law wavenumber spectrum.
D.1. Point Source
Scintillation is said to be weak when the point-source, monochromatic scintillation index (rms/mean intensity) is much less than one. This applies near a phase screen since intensity fluctuations only build up as a wave travels beyond the screen. In this regime the KDI (Eq. A1) can be approximated by a first order expansion of exp[iφ d (r ′ )], where the screen phase at r ′ is written as the screen phase at the observer coordinate r plus the difference in phase between r ′ and r. This allows a linearization of the problem, even though the rms variation in overall screen phase may be very large, as expected for a power law spectrum.
Various authors have described the frequency dependence of scintillation under these conditions. Codona et al. (1986), in particular, give a thorough analysis applicable to evaluating the secondary spectrum. They obtain expressions for the correlation of intensity fluctuations between different observing wavelengths (λ 1 , λ 2 ), the quantity in our Eq. C1. Under weak scintillation conditions the result is most simply expressed in terms of its 2-D Fourier transform over ∆r, the cross-spectrum of intensity fluctuations: P ∆I (κ, λ 1 , λ 2 ) = ∫ d∆r R ∆I (∆r, λ 1 , λ 2 ) exp(iκ · ∆r)/(4π 2 ) . (D1) Here we find it convenient to work in terms of observing wavelength rather than frequency, because it simplifies the weak scintillation results. Codona et al. give an expansion for the cross-spectrum applicable to low wavenumbers in their equation (27). This is the product of the wavenumber spectrum of the screen phase with the two-wavelength "Fresnel filter" and with an exponential cut-off applicable to strong refractive scintillation, which can be ignored in weak scintillation. Their results are given for a non-dispersive phase screen and a plane incident wave; when converted to a plasma screen and the point-source geometry described in previous sections, the result is: P ∆I (κ, λ 1 , λ 2 ) = (λ 1 λ 2 /λ 0 2 ) P φ (κ) 2[cos(κ 2 D e λ d /2π) − cos(κ 2 D e λ 0 /4π)] .
Here H(u) is the unit step function appearing in Eq. D5, and κ yp 2 = 8π 2 |f λ |/D e − (2πf t /V ⊥ ) 2 . (D6) P w (f t ) is defined in Eq. D7; it is closely related to the normal weak scintillation spectrum at wavelength λ 0 , with the difference that the Fresnel-filter sin 2 () function is replaced here by sin 2 () − 1/2. Excluding the P w term in Eq. (D5), we see that S 2 diverges along the parabolic curve where κ yp = 0, creating a parabolic arc, and is cut off by the step function outside that curve. With circularly symmetric scattering, P φ is a function only of |κ| 2 = 8π 2 |f λ |/D e , and so the dependence on f t is purely through the known arc-enhancement factor 1/κ yp using Eq. D6. Thus a measurement of S 2 (f λ , f t ) can be inverted to estimate P φ (κ x , κ y ), and so we have a direct method of estimating the phase spectrum of the medium (averaged over positive and negative κ y values). This is analogous to the reconstruction of an image from a hologram. The final result (D5) can be viewed as the interference of scattered waves with an unscattered wave. To see this, compare S 2 (f λ , f t ) (excluding the P w term) with the interference result in Eq. 15 discussed in §3.4.3. First we transform into scaled variables (p, q) and assume small fractional differences in wavelength, for which f λ λ 0 ∼ f ν ν 0 . Hence we can express κ yp in Eq. D6 as: κ yp 2 = (|p| − q 2 )/(s s 0 ) , (D8) so H(κ yp 2 )/κ yp becomes H(U ) [s s 0 /U ] 1/2 , where U is defined as in the interference result of Eq. 15 with ψ a = 0, and s 0 is the diffractive scale as defined in §4.2. In Eq. 15, the brightness function g represents the scattered waves that interfere with an undeviated plane wave, corresponding to the mean intensity in weak scintillation.
D.2. Extended Source
A temporally and spatially incoherent extended source at a distance D from the observer is described by its brightness distribution B sou (θ p ). Hence we can simply add the intensity patterns due to each point component at θ p to obtain the well-known convolution result: I ext (r) = ∫ dθ p I(r + θ p D(1 − s), 0, λ) B sou (θ p ) , where I(r, r s , λ) is the intensity pattern for a point source at r s . This convolution can also be expressed in the wavenumber (κ) domain as a product using the source visibility function V (u), where u = κD(1 − s)/2π is the baseline scaled by the wavelength. Combining this relation with the point-source expressions, we find that the integrand in Eq. D4 is multiplied by the product V 1 (u)V 2 * (u), where V 1 , V 2 are the visibilities at λ 1 , λ 2 . Now consider the wavelength dependence of the visibility function. If the source brightness distribution is independent of wavelength (i.e. fixed angular size), then V 1 (u) = V 2 (u). Consequently, in Eq. D5 P φ is simply multiplied by |V | 2 , where the summation is over two equal and opposite values of κ yp . In this equation the P w,ext function is similarly modified by the visibility function, but it is of no immediate interest here. Our discussion shows that the secondary spectrum S 2 (f λ , f t ) in the weak scintillation regime can be inverted to estimate the product of the medium phase spectrum with the squared visibility function of the source, in two dimensions. Since there are several lines of evidence supporting a Kolmogorov model for the phase spectrum, we have a new way of estimating the squared visibility function of a source. This allows a form of imaging from spectral observations with a single dish.
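As a standard worked example (not taken from the paper): for an achromatic circular Gaussian source of FWHM θ s , the squared visibility is

```latex
|V(u)|^2 \;=\; \exp\!\left[-\frac{(\pi\,\theta_s\,u)^2}{2\ln 2}\right],
\qquad u \;=\; \frac{\kappa\, D(1-s)}{2\pi} ,
```

so the extended-source secondary spectrum is damped at large |κ| (equivalently, at large delays f λ ), and fitting the observed damping along the arc yields an estimate of θ s , illustrating the single-dish imaging idea described above.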
MESSI: In-Memory Data Series Indexing
Data series similarity search is a core operation for several data series analysis applications across many different domains. However, the state-of-the-art techniques fail to deliver the time performance required for interactive exploration, or analysis of large data series collections. In this work, we propose MESSI, the first data series index designed for in-memory operation on modern hardware. Our index takes advantage of the modern hardware parallelization opportunities (i.e., SIMD instructions, multi-core and multi-socket architectures), in order to accelerate both index construction and similarity search processing times. Moreover, it benefits from a careful design in the setup and coordination of the parallel workers and data structures, so that it maximizes its performance for in-memory operations. Our experiments with synthetic and real datasets demonstrate that overall MESSI is up to 4x faster at index construction, and up to 11x faster at query answering than the state-of-the-art parallel approach. MESSI is the first to answer exact similarity search queries on 100GB datasets in ∼50msec (30-75msec across diverse datasets), which enables real-time, interactive data exploration on very large data series collections.
I. INTRODUCTION
[Motivation] Several applications across many diverse domains, such as in finance, astrophysics, neuroscience, engineering, multimedia, and others [1]- [3], continuously produce big collections of data series 1 which need to be processed and analyzed. The most common type of query that different analysis applications need to answer on these collections of data series is similarity search [1], [4], [5].
The continued increase in the rate and volume of data series production renders existing data series indexing technologies inadequate. For example, ADS+ [6], the state-of-the-art sequential (i.e., non-parallel) indexing technique, requires more than 2min to answer exactly a single 1-NN (Nearest Neighbor) query on a (moderately sized) 100GB sequence dataset. For this reason, a disk-based data series parallel indexing scheme, called ParIS, was recently designed [7] to take advantage of modern hardware parallelization. ParIS effectively exploits the parallelism capabilities provided by multi-core and multi-socket architectures, and the Single Instruction Multiple Data (SIMD) capabilities of modern CPUs. In terms of query answering, experiments showed that ParIS is more than 1 order of magnitude faster than ADS+, and more than 3 orders of magnitude faster than the optimized serial scan method.
Still, ParIS is designed for disk-resident data and therefore its performance is dominated by the I/O costs it encounters. For instance, ParIS answers a 1-NN (Nearest Neighbor) exact query on a 100GB dataset in 15sec, which is above the limit for keeping the user's attention (i.e., 10sec), let alone for supporting interactivity in the analysis process (i.e., 100msec) [8].
[Application Scenario] In this work, we focus on designing an efficient parallel indexing and query answering scheme for in-memory data series processing. Our work is motivated and inspired by the following real scenario. Airbus 2 currently stores petabytes of data series, describing the behavior over time of various aircraft components (e.g., the vibrations of the bearings in the engines), as well as that of pilots (e.g., the way they maneuver the plane through the fly-by-wire system) [9]. The experts need to access these data in order to run different analytics algorithms. However, these algorithms usually operate on a subset of the data (e.g., only the data relevant to landings from Air France pilots), which fit in memory. Therefore, in order to perform complex analytics operations (such as searching for similar patterns, or classification) fast, in-memory data series indices must be built for efficient data series query processing. Consequently, the time performance of both index creation and query answering become important factors in this process.
[MESSI Approach] We present MESSI, the first in-MEmory data SerieS Index, which incorporates the state-of-the-art techniques in sequence indexing. MESSI effectively uses multicore and multi-socket architectures in order to concurrently execute the computations needed for both index construction and query answering and it exploits SIMD. More importantly though, MESSI features redesigned algorithms that lead to a further ∼4x speedup in index construction time, in comparison to an in-memory version of ParIS. Furthermore, MESSI answers exact 1-NN queries on 100GB datasets 6-11x faster than ParIS across the datasets we tested, achieving for the first time interactive exact query answering times, at ∼50msec.
When building ParIS, the design decisions were heavily influenced by the fact that the cost was mainly I/O bounded. Since MESSI copes with in-memory data series, no CPU cost can be hidden under I/O. Therefore, MESSI required more careful design choices and coordination of the parallel workers when accessing the required data structures, in order to improve its performance. This led to the development of a more subtle design for the construction of the index and on the development of new algorithms for answering similarity search queries on this index.
For query answering in particular, we showed that adaptations of alternative solutions, which have proven to perform the best in other settings (i.e., disk-resident data [7]), are not optimal in our case, and we designed a novel solution that achieves a good balance between the amount of communication among the parallel worker threads, and the effectiveness of each individual worker. For instance, the new scheme uses concurrent priority queues for storing the data series that cannot be pruned, and for processing these series in order, starting from those whose iSAX representations have the smallest distance to the iSAX representation of the query data series. In this way, the parallel query answering threads achieve better pruning on the data series they process. Moreover, the new scheme uses the index tree to decide which data series to insert into the priority queues for further processing. In this way, the number of distance calculations performed between the iSAX summaries of the query and data series is significantly reduced (ParIS performs this calculation for all data series in the collection). We also experimented with several designs for reducing the synchronization cost among different workers that access the priority queues and for achieving load balancing. We ended up with a scheme where workers use randomization to choose the priority queues they will work on. Consequently, MESSI answers exact 1-NN queries on 100GB datasets within 30-70msec across diverse synthetic and real datasets.
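The multi-queue scheme described above can be sketched as follows. This is a hypothetical, single-machine illustration (names, structure, and the lower-bound function are ours, not MESSI's actual code): candidates are spread over several priority queues keyed by a lower-bound distance, and each worker repeatedly picks a queue at random, pops the best candidate, and computes the real distance only if the lower bound can still beat the best-so-far answer.

```python
import heapq
import math
import random
import threading

class SearchState:
    """Candidates spread round-robin over several priority queues,
    each ordered by a lower-bound distance to the query."""
    def __init__(self, n_queues, candidates):
        self.queues = [[] for _ in range(n_queues)]
        self.locks = [threading.Lock() for _ in range(n_queues)]
        self.bsf = math.inf                    # best-so-far real distance
        self.bsf_lock = threading.Lock()
        for i, (lb, series_id) in enumerate(candidates):
            heapq.heappush(self.queues[i % n_queues], (lb, series_id))

def worker(state, real_distance):
    """One query-answering worker: randomized queue choice plus
    lower-bound pruning against the shared best-so-far distance."""
    while True:
        qi = random.randrange(len(state.queues))   # randomized queue choice
        with state.locks[qi]:
            if not state.queues[qi]:
                if not any(state.queues):          # every queue drained
                    return
                continue                           # try another queue
            lb, sid = heapq.heappop(state.queues[qi])
        if lb >= state.bsf:                        # lower bound prunes it
            continue
        d = real_distance(sid)                     # full distance computation
        with state.bsf_lock:
            if d < state.bsf:
                state.bsf = d
```

Because the lower bound never exceeds the real distance, the final best-so-far value is the exact 1-NN distance; the randomized queue choice spreads contention over the queue locks instead of funneling all workers through one shared queue.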
The index construction phase of MESSI differentiates from ParIS in several ways. For instance, ParIS was using a number of buffers to temporarily store pointers to the iSAX summaries of the raw data series before constructing the tree index [7]. MESSI allocates smaller such buffers per thread and stores in them the iSAX summaries themselves. In this way, it completely eliminates the synchronization cost in accessing the iSAX buffers. To achieve load balancing, MESSI splits the array storing the raw data series into small blocks, and assigns blocks to threads in a round-robin fashion. We applied the same technique when assigning to threads the buffers containing the iSAX summary of the data series. Overall, the new design and algorithms of MESSI led to ∼4x improvement in index construction time when compared to ParIS.
[Contributions] Our contributions are summarized as follows.
• We propose MESSI, the first in-memory data series index designed for modern hardware, which can answer similarity search queries in a highly efficient manner.
• We implement a novel, tree-based exact query answering algorithm, which minimizes the number of required distance calculations (both lower bound distance calculations for pruning true negatives, and real distance calculations for pruning false positives).
• We also design an index construction algorithm that effectively balances the workload among the index creation workers by using a parallel-friendly index framework with low synchronization cost.
• We conduct an experimental evaluation with several synthetic and real datasets, which demonstrates the efficiency of the proposed solution. The results show that MESSI is up to 4.2x faster at index construction and up to 11.2x faster at query answering than the state-of-the-art parallel index-based competitor, up to 109x faster at query answering than the state-of-the-art parallel serial scan algorithm, and thus can significantly reduce the execution time of complex analytics algorithms (e.g., k-NN classification).
II. PRELIMINARIES
We now provide some necessary definitions, and introduce the related work on state-of-the-art data series indexing.
A. Data Series and Similarity Search
[Data Series] A data series, S = {p 1 , ..., p n }, is defined as a sequence of points, where each point is associated to a real value v i and a position t i . The position corresponds to the order of this value in the sequence. We call n the size, or length of the data series. We note that all the discussions in this paper are applicable to high-dimensional vectors, in general.
[Similarity Search] Analysts perform a wide range of data mining tasks on data series including clustering [10], classification and deviation detection [11], [12], and frequent pattern mining [13]. Existing algorithms for executing these tasks rely on performing fast similarity search across the different series. Thus, efficiently processing nearest neighbor (NN) queries is crucial for speeding up the above tasks. NN queries are formally defined as follows: given a query series S q of length n, and a data series collection S of sequences of the same length, n, we want to identify the series S c ∈ S that has the smallest distance to S q among all the series in the collection S. (In the case of streaming series, we first create subsequences of length n using a sliding window, and then index those.) Common distance measures for comparing data series are Euclidean Distance (ED) [14] and dynamic time warping (DTW) [15]. While DTW is better for most data mining tasks, the error rate using ED converges to that of DTW as the dataset size grows [16]. Therefore, data series indexes for massive datasets use ED as a distance metric [6], [15]- [18], though simple modifications can be applied to make them compatible with DTW [16]. Euclidean distance is computed as the sum of distances between the pairs of corresponding points in the two sequences. Note that minimizing ED on z-normalized data (i.e., a series whose values have mean 0 and standard deviation 1) is equivalent to maximizing their Pearson's correlation coefficient [19].
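The equivalence between minimizing ED and maximizing correlation can be verified numerically. The following sketch (using numpy; illustrative code, not part of MESSI) z-normalizes two series and checks the identity ED² = 2n(1 − r), where r is Pearson's correlation of the z-normalized series:

```python
import numpy as np

def znorm(s):
    # z-normalize: mean 0, standard deviation 1 (population std)
    return (s - s.mean()) / s.std()

def ed(a, b):
    # Euclidean distance: square root of the sum of squared
    # point-wise differences between the two sequences
    return np.sqrt(np.sum((a - b) ** 2))

rng = np.random.default_rng(0)
x = znorm(rng.standard_normal(256))
y = znorm(rng.standard_normal(256))

# For z-normalized series of length n, ED^2 = 2n(1 - r):
# minimizing ED is equivalent to maximizing r.
n = len(x)
r = np.dot(x, y) / n
assert abs(ed(x, y) ** 2 - 2 * n * (1 - r)) < 1e-8
```

The identity follows because each z-normalized series satisfies Σv_i = 0 and Σv_i² = n, so ED² = 2n − 2·(x·y) = 2n(1 − r).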
[Distance calculation in SIMD] Single-Instruction Multiple-Data (SIMD) refers to a parallel architecture that allows the execution of the same operation on multiple data simultaneously [20]. Using SIMD, we can reduce the latency of an operation, because the corresponding instructions are fetched once, and then applied in parallel to multiple data. All modern CPUs support 256-bit wide SIMD vectors, which means that certain floating point (or other 32-bit data) computations can be up to 8 times faster when executed using SIMD. In the data series context, SIMD has been employed for the computation of the Euclidean distance functions [21], as well as in the ParIS index, for the conditional branch calculations during the computation of the lower bound distances [7].
B. iSAX Representation and the ParIS Index
[iSAX Representation] The iSAX representation (or summary) is based on the Piecewise Aggregate Approximation (PAA) representation [22], which divides the data series into segments of equal length, and uses the mean value of the points in each segment in order to summarize a data series. Figure 1(b) depicts an example of PAA representation with three segments (depicted with the black horizontal lines), for the data series depicted in Figure 1(a). Based on PAA, the indexable Symbolic Aggregate approXimation (iSAX) representation was proposed [16] (and later used in several different data series indices [6], [7], [11], [23], [24]). This method first divides the (y-axis) space into different regions, and assigns a bit-wise symbol to each region. In practice, the number of symbols is small: iSAX achieves very good approximations with as few as 256 symbols, the maximum alphabet cardinality, |alphabet|, which can be represented by eight bits [18]. It then represents each of the w segments of the series with the symbol of the region its PAA value falls into, forming the word 10_2 00_2 11_2 shown in Figure 1(c) (subscripts denote the number of bits used to represent the symbol of each segment).
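As an illustration, the following sketch (hypothetical code, not the paper's implementation) computes a PAA summary and maps each segment to a 2-bit iSAX symbol using the N(0,1) quartile breakpoints; numbering regions bottom-up, a series whose segment means fall in regions 2, 0 and 3 yields the word 10_2 00_2 11_2 of Figure 1(c):

```python
import numpy as np

# N(0,1) quartile breakpoints for a 4-symbol (2-bit) alphabet
BREAKPOINTS_2BIT = [-0.6745, 0.0, 0.6745]

def paa(series, w):
    # PAA: the mean value of each of the w equal-length segments
    return np.asarray(series, dtype=float).reshape(w, -1).mean(axis=1)

def isax_word(series, w, breakpoints=BREAKPOINTS_2BIT):
    # iSAX symbol = index of the region the PAA value falls into,
    # counted from the lowest region upward
    return [int(np.searchsorted(breakpoints, v)) for v in paa(series, w)]

# Segment means 0.3, -1.0, 1.0 fall in regions 2, 0, 3,
# i.e., the binary symbols 10, 00, 11.
s = [0.3, 0.3, -1.0, -1.0, 1.0, 1.0]
assert isax_word(s, 3) == [2, 0, 3]
```

Real iSAX indices store, per segment, both the symbol and its cardinality (number of bits), so that symbols can be refined bit by bit as the tree grows.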
[ParIS Index] Based on the iSAX representation, the state-of-the-art ParIS index was developed [7], which proposed techniques and algorithms specifically designed for modern hardware and disk-based data. ParIS makes use of variable cardinalities for the iSAX summaries (i.e., variable degrees of precision for the symbol of each segment) in order to build a hierarchical tree index (see Figure 1(d)), consisting of three types of nodes: (i) the root node points to several children nodes, 2^w in the worst case (when the series in the collection cover all possible iSAX summaries); (ii) each inner node contains the iSAX summary of all the series below it, and has two children; and (iii) each leaf node contains the iSAX summaries of all the series inside it, and pointers to the raw data (in order to be able to prune false positives and produce exact, correct answers), which reside on disk. When the number of series in a leaf node becomes greater than the maximum leaf capacity, the leaf splits: it becomes an inner node and creates two new leaves, by increasing the cardinality of the iSAX summary of one of the segments (the one that will result in the most balanced split of the contents of the node to its two new children [6], [18]). The two refined iSAX summaries (new bit set to 0 and 1) are assigned to the two new leaves. In our example, the series of Figure 1(c) will be placed in the outlined node of the index (Figure 1(d)). Note that we define the distance of a query series to a node as the distance between the query (raw values, or iSAX summary) and the iSAX summary of the node.
In the index construction phase (see Figure 1(d)), ParIS uses a coordinator worker that reads raw data series from disk and transfers them into a raw data buffer in memory. A number of index bulk loading workers compute the iSAX summaries of these series, and insert <iSAX summary, file position> pairs in an array. They also insert a pointer to the appropriate element of this array in the receiving buffer of the corresponding subtree of the index root. When main memory is exhausted, the coordinator worker creates a number of index construction worker threads, each one assigned to one subtree of the root and responsible for further building that subtree (by processing the iSAX summaries stored in the corresponding receiving buffer). This process results in each iSAX summary being moved to the output buffer of the leaf it belongs to. When all iSAX summaries in the receiving buffer of an index construction worker have been processed, the output buffers of all leaves in that subtree are flushed to disk.
For query answering, ParIS offers a parallel implementation of the SIMS exact search algorithm [6]. It first computes an approximate answer by calculating the real distance between the query and the best candidate series, which is in the leaf with the smallest lower bound distance to the query. ParIS uses the index tree only for computing this approximate answer. Then, a number of lower bound calculation workers compute the lower bound distances between the query and the iSAX summary of each data series in the dataset, which are stored in the SAX array, and prune the series whose lower bound distance is larger than the approximate real distance computed earlier. The data series that are not pruned are stored in a candidate list for further processing. Subsequently, a number of real distance calculation workers operate on different parts of this list to compute the real distances between the query and the series stored in it (for which the raw values need to be read from disk). For details see [7].
In the in-memory version of ParIS, the raw data series are stored in main memory, so no disk accesses are needed during query answering. In the rest of the paper, we use ParIS to refer to this in-memory version of the algorithm.
III. THE MESSI SOLUTION
Figure 2 depicts the MESSI index construction and query answering pipeline. The raw data are stored in memory in an array, called RawData. This array is split into a predetermined number of chunks. A number, N_w, of index worker threads process the chunks to calculate the iSAX summaries of the raw data series they store. The number of chunks is not necessarily the same as N_w. Chunks are assigned to index workers one after the other (using Fetch&Inc). Based on the iSAX representation, we can determine in which subtree of the index tree an iSAX summary will be stored. A number of iSAX buffers, one for each root subtree of the index tree, contain the iSAX summaries to be stored in that subtree.
Each index worker stores the iSAX summaries it computes in the appropriate iSAX buffers. To reduce synchronization cost, each iSAX buffer is split into parts and each worker works on its own part. The number of iSAX buffers is usually a few tens of thousands and at most 2^w, where w is the number of segments in the iSAX summaries of each data series (w is fixed to 16 in this paper, as in previous studies [6], [7]).
When the iSAX summaries for all raw data series have been computed, the index workers proceed to the construction of the tree index. Each worker is assigned an iSAX buffer to work on (this is done again using Fetch&Inc). Each worker reads the data stored in (all parts of) its assigned buffer and builds the corresponding index subtree. Therefore, all index workers process distinct subtrees of the index, and can work in parallel and independently from one another, with no need for synchronization. When an index worker finishes with the current iSAX buffer it works on, it continues with the next iSAX buffer that has not yet been processed.
When the series in all iSAX buffers have been processed, the tree index has been built and can be used to answer similarity search queries, as depicted in the query answering phase of Fig. 2. To answer a query, we first perform a search for the query iSAX summary in the tree index. This returns a leaf whose iSAX summary has the closest distance to the iSAX summary of the query. We calculate the real distance of the (raw) data series pointed to by the elements of this leaf to the query series, and store the minimum of these distances in a shared variable, called BSF (Best-So-Far). Then, the index workers start traversing the index subtrees (one after the other) using BSF to decide which subtrees will be pruned. The leaves of the subtrees that cannot be pruned are placed into (a fixed number of) minimum priority queues, using the lower bound distance between the raw values of the query series and the iSAX summary of the leaf node, in order to be further examined. Each thread inserts elements in the priority queues in a round-robin fashion so that load balancing is achieved (i.e., all queues contain about the same number of elements).
As soon as the necessary elements have been placed in the priority queues, each index worker chooses a priority queue to work on, and repeatedly calls DeleteMin() on it to get a leaf node, on which it performs the following operations. It first checks whether the lower bound distance stored in the priority queue is larger than the current BSF: if it is, then we are certain that the leaf node does not contain any series that can be part of the answer, and we can prune it; otherwise, the worker needs to examine the series contained in the leaf node, by first computing lower bound distances using the iSAX summaries, and if necessary also the real distances using the raw values. During this process, we may discover a series with a smaller distance to the query, in which case we also update the BSF. When a worker reaches a node whose distance is bigger than the BSF, it gives up this priority queue and starts working on another, because it is certain that all the other elements in the abandoned queue have an even higher distance to the query series. This process is repeated until all priority queues have been processed. During this process, the value of BSF is updated to always reflect the minimum distance seen so far. At the end of the calculation, the value of BSF is returned as the query answer.
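A single-threaded sketch of this query answering logic (with hypothetical helper signatures; the real MESSI runs it with many workers over multiple queues) illustrates the two pruning levels: leaf pruning on insertion, and the BSF re-check on removal:

```python
import heapq
from itertools import count

def exact_search(query, leaves, lower_bound, real_dists, bsf=float("inf")):
    # lower_bound(query, leaf) must never exceed the true distance of any
    # series in the leaf; real_dists(query, leaf) yields true distances.
    pq, tie = [], count()                  # tie-breaker keeps tuples orderable
    for leaf in leaves:                    # index traversal, flattened here
        lb = lower_bound(query, leaf)
        if lb < bsf:                       # prune leaves that cannot help
            heapq.heappush(pq, (lb, next(tie), leaf))
    while pq:
        lb, _, leaf = heapq.heappop(pq)
        if lb >= bsf:                      # min-queue: everything left is worse
            break
        for d in real_dists(query, leaf):  # raw-series distance calculations
            bsf = min(bsf, d)              # update Best-So-Far
    return bsf
```

A worker that pops an entry with lb ≥ BSF can safely abandon the whole queue, since the min-priority queue guarantees every remaining entry is at least as far from the query.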
Note that, similarly to ParIS, MESSI uses SIMD (Single-Instruction Multiple-Data) for calculating the distances of both the index iSAX summaries from the query iSAX summary (lower bound distance calculations), and the raw data series from the query data series (real distance calculations) [7].
A. Index Construction
Algorithm 1 presents the pseudocode for the initiator thread. The initiator creates N_w index worker threads to execute the index construction phase (line 2). As soon as these workers finish their execution, the initiator returns (line 3). We fix N_w to be 24 threads (Figure 9 in Section IV justifies this choice). We assume that the index variable is a structure (struct) containing the RawData array, all iSAX buffers, and a pointer to the root of the tree index. Recall that MESSI splits RawData into chunks of size chunk_size. We assume that the size of RawData is a multiple of chunk_size (if not, standard padding techniques can be applied).
The pseudocode for the index workers is in Algorithm 2. The workers first call the CalculateiSAXSummaries function (line 1) to calculate the iSAX summaries of the raw data series and store them in the appropriate iSAX buffers. As soon as the iSAX summaries of all the raw data series have been computed (line 2), the workers call TreeConstruction to construct the index tree.
The pseudocode of CalculateiSAXSummaries is shown in Algorithm 3 and is schematically illustrated in Figure 3(a). Each index worker repeatedly does the following. It first performs a Fetch&Inc to get assigned a chunk of raw data series to work on (line 3). Then, it calculates the offset in the RawData array where this chunk resides (line 4) and starts processing the relevant data series (line 6). For each of them, it computes its iSAX summary by calling the ConvertToiSAX function (line 7), and stores the result in the appropriate iSAX buffer of index (lines 8-9). Recall that each iSAX buffer is split into N_w parts, one for each thread; thus, index.iSAXbuffer is a two-dimensional array. Each part of an iSAX buffer is allocated dynamically when the first element to be stored in it is produced. The size of each part has an initial small value (5 series in this work, as we discuss in the experimental evaluation) and it is adjusted dynamically based on how many elements are inserted in it (by doubling its size each time).
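A sketch of this chunk-to-worker assignment (illustrative names, not the paper's code; a Python counter's next() stands in for the hardware Fetch&Inc, which is atomic under CPython's GIL):

```python
import threading
from itertools import count

def build_summaries(raw, chunk_size, n_workers, summarize):
    # next(chunk_ctr) plays the role of Fetch&Inc: each worker atomically
    # grabs the next unprocessed chunk of the RawData array.
    chunk_ctr = count()
    n_chunks = len(raw) // chunk_size
    # one buffer part per worker: appends need no synchronization
    parts = [[] for _ in range(n_workers)]

    def worker(my_id):
        while True:
            c = next(chunk_ctr)
            if c >= n_chunks:
                return
            offset = c * chunk_size        # start of this chunk in RawData
            for series in raw[offset:offset + chunk_size]:
                parts[my_id].append(summarize(series))

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return parts
```

Because each worker appends only to its own part, no lock is needed; Python lists also grow geometrically, mirroring the size-doubling scheme described above.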
We note that we also tried a design of MESSI with no iSAX buffers, but this led to slower performance (due to the worse cache locality). Thus, we do not discuss this alternative further.
As soon as the computation of the iSAX summaries is over, each index worker starts executing the TreeConstruction function. Algorithm 4 shows the pseudocode for this function and Figure 3(b) schematically describes how it works. In TreeConstruction, a worker repeatedly executes the following actions. It accesses F_b (using Fetch&Inc) to get assigned an iSAX buffer to work on (line 3). Then, it traverses all parts of the assigned buffer (lines 5-6) and inserts every ⟨iSAX summary, pointer to the relevant data series⟩ pair stored there in the index tree (lines 7-11). Recall that the iSAX summaries contained in the same iSAX buffer will be stored in the same subtree of the index tree. So, no synchronization is needed among the index workers during this process. When a tree worker finishes its work on a subtree, a new iSAX buffer is (repeatedly) assigned to it, until all iSAX buffers have been processed.
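The leaf-splitting behavior during subtree construction can be sketched as follows (a deliberate simplification that refines a single 8-bit segment symbol one bit at a time; the actual index picks, among the w segments, the one producing the most balanced split):

```python
MAX_LEAF = 2           # illustrative leaf capacity (2000 series in MESSI)

class Node:
    def __init__(self, depth=0):
        self.depth = depth        # number of symbol bits already fixed
        self.children = None      # None means this node is a leaf
        self.entries = []         # (full-cardinality symbol, payload) pairs

    def insert(self, symbol, payload):
        node = self
        while node.children is not None:       # descend by the fixed bits
            bit = (symbol >> (7 - node.depth)) & 1
            node = node.children[bit]
        node.entries.append((symbol, payload))
        if len(node.entries) > MAX_LEAF:
            node.split()

    def split(self):
        # the leaf becomes an inner node; entries are redistributed to two
        # new leaves according to one additional bit of the symbol
        self.children = (Node(self.depth + 1), Node(self.depth + 1))
        for sym, pay in self.entries:
            bit = (sym >> (7 - self.depth)) & 1
            self.children[bit].entries.append((sym, pay))
        self.entries = []
        for child in self.children:            # cascade if still too full
            if len(child.entries) > MAX_LEAF:
                child.split()
```

Inserting the symbols 128, 0, 192 and then 224 first splits the root on the highest bit, and then splits the resulting right leaf on the next bit, exactly the "new bit set to 0 and 1" refinement described for ParIS.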
B. Query Answering
The pseudocode for executing an exact search query is shown in Algorithm 5. We first calculate the iSAX summary of the query (line 2), and execute an approximate search (line 3) to find the initial value of BSF, i.e., a first upper bound on the actual distance between the query and the series indexed by the tree. This process is illustrated in Figure 4(a).
During a search query, the index tree is traversed and the distance of the iSAX summary of each of the visited nodes to the iSAX summary of the query is calculated. If the distance of the iSAX summary of a node, nd, to the query iSAX summary is higher than BSF, then we are certain that the distances of all data series indexed by the subtree rooted at nd are higher than BSF. So, the entire subtree can be pruned. Otherwise, we go down the subtree, and the leaves with a distance to the query smaller than the BSF, are inserted in the priority queue.
The technique of using priority queues maximizes the pruning degree, thus resulting in a relatively small number of raw data series whose real distance to the query series must be calculated. As a side effect, BSF converges fast to the correct value. Thus, the number of iSAX summaries that are tested against the iSAX summary of the query series is also reduced.
Algorithm 5 creates N_s = 48 threads, called the search workers (lines 6-7), which perform the computation described above by calling SearchWorker. It also creates N_q ≥ 1 priority queues (lines 4-5), where the search workers place those data series that are potential candidates for real distance calculation. After all search workers have finished (line 8), ExactSearch returns the current value of BSF (line 9).
We have experimented with two different settings regarding the number of priority queues, N_q, that the search workers use. The first, called Single Queue (SQ), refers to N_q = 1, whereas the second focuses on the Multiple-Queue (MQ) case where N_q > 1. Using a single shared queue imposes a high synchronization overhead, whereas using a local queue per thread results in severe load imbalance, since, depending on the workload, the size of the different queues may vary significantly. Thus, we choose to use N_q shared queues, where N_q > 1 is a fixed number (in our analysis, N_q is set to 24, as our experiments show that this is the best choice). The pseudocode of the search workers is shown in Algorithm 6, and the work they perform is illustrated in Figures 4(b) and 4(c). At each point in time, each thread works on a single queue. Initially, each queue is shared by two threads. Each search worker first identifies the queue where it will perform its first insertion (line 2). Then, it repeatedly chooses (using Fetch&Inc) a root subtree of the index tree to work on by calling TraverseRootSubtree (line 6). After all root subtrees have been processed (line 7), it repeatedly chooses a priority queue (lines 9, 13) and works on it by calling ProcessQueue (line 10). Each element of the queue array has a field, called finished, which indicates whether the processing of the corresponding priority queue has been finished. As soon as a search worker determines that all priority queues have been processed (line 12), it terminates.
We continue with the pseudocode of TraverseRootSubtree, which is presented in Algorithm 7 and illustrated in Figure 4(b). TraverseRootSubtree is recursive. On each internal node, nd, it checks whether the (lower bound) distance of the iSAX summary of nd to the raw values of the query (line 1) is smaller than the current BSF, and if it is, it examines the two subtrees of the node using recursion (lines 11-12). If the traversed node is a leaf node and its distance to the iSAX summary of the query series is smaller than the current BSF (lines 4-9), it places it in the appropriate priority queue (line 6). Recall that the priority queues are accessed in a round-robin fashion (line 9). This strategy keeps the sizes of the queues balanced, and reduces the synchronization cost of node insertions to the queues. We implement this strategy by (1) passing a pointer to the local variable q of SearchWorker as an argument to TraverseRootSubtree, (2) using the current value of q for choosing the next queue to perform an insertion (line 6), and (3) updating the value of q (line 9). Each queue may be accessed by more than one thread, so a lock per queue is used to protect against concurrent access by multiple threads.
We next describe how ProcessQueue works (see Algorithm 8 and Figure 4(c)). The search worker repeatedly removes the (leaf) node, nd, with the highest priority from the priority queue, and checks whether the corresponding distance stored in the queue is still less than the BSF. We do so because the BSF may have changed since the time the leaf node was inserted in the priority queue. If the distance is less than the BSF, then CalculateRealDistance (line 3) is called, in order to identify whether any series in the leaf node (pointed to by nd) has a real distance to the query that is smaller than the current BSF. If we discover such a series (line 4), BSF is updated to the new value (line 6). We use a lock to protect BSF from concurrent update attempts (lines 5, 7). Previous experiments showed that the initial value of BSF is very close to its final value [25]. Indeed, in our experiments, the BSF is updated only 10-12 times (on average) per query. So, the synchronization cost for updating the BSF is negligible.
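The BSF update can be sketched as a double-checked update under a lock (illustrative Python; MESSI implements this in C with native locks):

```python
import threading

class BestSoFar:
    # shared BSF protected by a lock; the distance is checked both
    # before and after acquiring the lock
    def __init__(self):
        self.value = float("inf")
        self.lock = threading.Lock()

    def try_update(self, dist):
        if dist >= self.value:        # cheap unlocked pre-check
            return False
        with self.lock:
            if dist < self.value:     # re-check: BSF may have changed
                self.value = dist
                return True
        return False
```

The unlocked pre-check keeps the common case (a distance that cannot improve the BSF) lock-free, which is consistent with the observation that only 10-12 updates per query make the synchronization cost negligible.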
In Algorithm 9, we depict the pseudocode for CalculateRealDistance. Note that we perform the real distance calculation using SIMD. However, the use of SIMD does not have the same significant impact on performance as in ParIS [7]. This is because pruning is much more effective in MESSI, since for each candidate series in the examined leaf node, CalculateRealDistance first performs a lower bound distance calculation, and proceeds to the real distance calculation only if necessary (line 3). Therefore, the number of (raw) data series to be examined is limited in comparison to those examined in ParIS (we quantify the effect of this new design in our experimental evaluation).
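This leaf-level filtering can be sketched as follows (hypothetical helper signatures): the cheap summary-based lower bound is evaluated first, and the expensive raw-series distance only when the bound beats the current BSF. A counter makes the pruning effect visible:

```python
def leaf_min_distance(query, leaf, bsf, lower_bound, real_distance):
    # returns the new BSF and the number of real distance calculations
    # actually performed on the candidates in the leaf
    best = bsf
    real_calls = 0
    for cand in leaf:
        if lower_bound(query, cand) >= best:
            continue              # pruned: the candidate cannot improve BSF
        best = min(best, real_distance(query, cand))
        real_calls += 1
    return best, real_calls
```

With a reasonably tight lower bound, most candidates in a leaf are rejected before any raw-series computation, which is why SIMD on the remaining real distance calculations buys less in MESSI than in ParIS.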
IV. EXPERIMENTAL EVALUATION
In this section, we present our experimental evaluation. We use synthetic and real datasets in order to compare the performance of MESSI with that of competitors that have been proposed in the literature and baselines that we developed. We demonstrate that, under the same settings, MESSI is able to construct the index up to 4.2x faster, and answer similarity search queries up to 11.2x faster than the competitors. Overall, MESSI exhibits a robust performance across different datasets and settings, and enables for the first time the exploration of very large data series collections at interactive speeds.
[Algorithms] We compared MESSI to the following algorithms: (i) ParIS [7], the state-of-the-art modern hardware data series index. (ii) ParIS-TS, our extension of ParIS, where we implemented in a parallel fashion the traditional tree-based exact search algorithm [16]. In brief, this algorithm traverses the tree, and concurrently (1) inserts in the priority queue the nodes (inner nodes or leaves) that cannot be pruned based on the lower bound distance, and (2) pops from the queues nodes for which it calculates the real distances to the candidate series [16]. In contrast, MESSI (a) first makes a complete pass over the index using lower bound distance computations and then proceeds with the real distance computations; (b) only considers the leaves of the index for insertion in the priority queue(s); and (c) performs a second filtering step using the lower bound distances when popping elements from the priority queue (and before computing the real distances). The performance results we present later justify the choices we have made in MESSI, and demonstrate that a straightforward implementation of tree-based exact search leads to sub-optimal performance. (iii) UCR Suite-P, our parallel implementation of the state-of-the-art optimized serial scan technique, UCR Suite [15]. In UCR Suite-P, every thread is assigned a part of the in-memory data series array, and all threads concurrently and independently process their own parts, performing the real distance calculations in SIMD, and only synchronize at the end to produce the final result. (We do not consider the non-parallel UCR Suite version in our experiments, since it is almost 300x slower.) All algorithms operated exclusively in main memory (the datasets were already loaded in memory, as well). The code for all algorithms used in this paper is available online [26].
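The UCR Suite-P scheme described above can be sketched as follows (illustrative code, not the actual implementation): each thread scans its own slice of the array with a private best, and the only synchronization is the final merge:

```python
import math
import threading

def parallel_scan_min(data, query, dist, n_threads=4):
    # each thread scans its own partition; results merged once at the end
    part = math.ceil(len(data) / n_threads)
    local_best = [float("inf")] * n_threads

    def scan(tid):
        for series in data[tid * part:(tid + 1) * part]:
            d = dist(series, query)
            if d < local_best[tid]:
                local_best[tid] = d

    threads = [threading.Thread(target=scan, args=(t,))
               for t in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return min(local_best)    # the only synchronization point
```

Note that no pruning takes place: every series is examined, which is why this baseline is so much slower than the index-based methods despite its perfect parallelism.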
[Datasets] In order to evaluate the performance of the proposed approach, we use several synthetic datasets for a fine-grained analysis, and two real datasets from diverse domains. Unless otherwise noted, the series have a size of 256 points, which is a standard length used in the literature, and allows us to compare our results to previous work. We used synthetic datasets of sizes 50GB-200GB (with a default size of 100GB), and a random walk data series generator that works as follows: a random number is first drawn from a Gaussian distribution N(0,1), and then at each time point a new number is drawn from this distribution and added to the value of the last number. This kind of data generation has been extensively used in the past (and has been shown to model real-world financial data) [6], [16]-[18], [27]. We used the same process to generate 100 query series. For our first real dataset, Seismic, we used the IRIS Seismic Data Access repository [28] to gather 100M series representing seismic waves from various locations, for a total size of 100GB. The second real dataset, SALD, includes neuroscience MRI data series [29]: a total of 200M series of length 128, for a total size of 100GB. In both cases, we used as queries 100 series out of the datasets (chosen using our synthetic series generator).
In all cases, we repeated the experiments 10 times and we report the average values. We omit reporting the error bars, since all runs gave results that were very similar (less than 3% difference). Queries were always run in a sequential fashion, one after the other, in order to simulate an exploratory analysis scenario, where users formulate new queries after having seen the results of the previous one.
B. Parameter Tuning Evaluation
In all our experiments, we use 24 index workers and 48 search workers. We have chosen the chunk size to be 20MB (corresponding to 20K series of 256 points). Each part of an iSAX buffer initially holds a small constant number of data series, but its size changes dynamically depending on how many data series it needs to store. The capacity of each leaf of the index tree is 2000 data series (2MB). For query answering, MESSI-mq utilizes 24 priority queues (whereas MESSI-sq utilizes just one priority queue). In either case, each priority queue is implemented using an array whose size changes dynamically based on how many elements must be stored in it. Below we present the experiments that justify these parameter choices. Figure 5 illustrates the time it takes MESSI to build the tree index for different chunk sizes on a random dataset of 100GB. The time required to build the index decreases as the chunk size grows, and the chunk size no longer has a big influence on performance beyond the value of 1K (data series). Chunk sizes smaller than 1K result in high contention when accessing the fetch&increment object used to assign chunks to index workers. In our experiments, we have chosen a size of 20K, as this gives slightly better performance than setting it to 1K. Figures 6 and 7 show the impact that varying the leaf size of the tree index has on the time needed for index creation and for query answering, respectively. As we see in Figure 6, the larger the leaf size, the faster index creation becomes. However, once the leaf size reaches 5K or more, this time improvement is insignificant. On the other hand, Figure 7 shows that the query answering time takes its minimum value when the leaf size is set to 2K (data series). So, we have chosen this value for our experiments. Figure 7 indicates that the influence of varying the leaf size is significant for query answering.
Note that when the leaf size is small, there are more leaf nodes in the index tree and, therefore, it is highly probable that more nodes will be inserted in the queues, and vice versa. On the other hand, as the leaf size increases, the number of real distance calculations performed to process each one of the leaves in the queue grows. This causes load imbalance among the different search workers that process the priority queues. For these reasons, we see that at the beginning the time goes down as the leaf size increases, it reaches its minimum value for a leaf size of 2K series, and then it goes up again as the leaf size further increases. Figure 8 shows the influence of the initial iSAX buffer size during index creation. This initialization cost is not negligible, given that we allocate 2^w iSAX buffers, each consisting of 24 parts (recall that 24 is the number of index workers in the system). As expected, the figure illustrates that smaller initial sizes for the buffers result in better performance. We have chosen the initial size of each part of the iSAX buffers to be a small constant number of data series. (We also considered an alternative design that collects statistics and allocates the iSAX buffers right from the beginning, but it was slower.) We finally justify the choice of using more than one priority queue for query answering. As Figure 11 shows, MESSI-mq and MESSI-sq have similar performance when the number of threads is smaller than 24. However, as we go from 24 to 48 cores, the synchronization cost of accessing the single priority queue in MESSI-sq has a negative impact on performance. Figure 13 presents the breakdown of the query answering time for these two algorithms. The figure shows that in MESSI-mq, the time needed to insert and remove nodes from the queue is significantly reduced. As expected, the time needed for the real distance calculations and for the tree traversal is about the same in both algorithms.
This has the effect that the time needed for the distance calculations becomes the dominant factor. The figure also illustrates the percentage of time spent on each of these tasks. Finally, Figure 14 illustrates the impact that the number of priority queues has on query answering performance. As the number of priority queues increases, the time goes down, and it takes its minimum value when this number becomes 24. So, we have chosen this value for our experiments.
C. Comparison to Competitors
[Index Creation] Figure 9 compares the index creation time of MESSI with that of ParIS as the number of cores increases for a dataset of 100GB. The time MESSI needs for index creation is significantly smaller than that of ParIS. Specifically, MESSI is 3.5x faster than ParIS. The main reasons for this are on the one hand that MESSI exhibits lower contention cost when accessing the iSAX buffers in comparison to the corresponding cost paid by ParIS, and on the other hand, that MESSI achieves better load balancing when performing the computation of the iSAX summaries from the raw data series. Note that due to synchronization cost, the performance improvement that both algorithms exhibit decreases as the number of cores increases; this trend is more prominent in ParIS, while MESSI manages to exploit to a larger degree the available hardware.
In Figure 10, we depict the index creation time as the dataset size grows from 50GB to 200GB. We observe that MESSI performs up to 4.2x faster than ParIS (for the 200GB dataset), with the improvement becoming larger with the dataset size.
[Query Answering] Figure 11 compares the performance of the MESSI query answering algorithm to its competitors, as the number of cores increases, for a random dataset of 100GB (y-axis in log scale). The results show that both MESSI-sq and MESSI-mq significantly outperform the competitors. Note that the performance of MESSI-mq is better than that of MESSI-sq, so when we mention MESSI in the comparison below, we refer to MESSI-mq. MESSI is 55x faster than UCR Suite-P and 6.35x faster than ParIS when we use 48 threads (with hyperthreading). In contrast to ParIS, MESSI applies pruning when performing the lower bound distance calculations and therefore executes this phase much faster. Moreover, the use of the priority queues results in even higher pruning power. As a side effect, MESSI also performs fewer real distance calculations than ParIS. Note that UCR Suite-P does not perform any pruning, thus resulting in much lower performance than the other algorithms. Figure 12 shows that this superior performance of MESSI is exhibited for different dataset sizes as well. Specifically, MESSI is up to 61x faster than UCR Suite-P (for 200GB), up to 6.35x faster than ParIS (for 100GB), and up to 7.4x faster than ParIS-TS (for 50GB).
[Performance Benefit Breakdown] Given the above results, we now evaluate several of the design choices of MESSI in isolation. Note that some of our design decisions stem from the fact that in our index the root node has a large number of children. Thus, the same design ideas are applicable to the iSAX family of indices [4] (e.g., iSAX2+, ADS+, ULISSE). Other indices [4], however, use a binary tree (e.g., DSTree), or a tree with a very small fanout (e.g., SFA trie, M-tree), so new design techniques are required for efficient parallelization. Nevertheless, some of our techniques, e.g., the use of (more than one) priority queue, the use of SIMD, and some of the data structures designed to reduce the synchronization cost, can be applied to all other indices. Figure 18 shows the results for the query answering performance. The leftmost bar (ParIS-SISD) shows the performance of ParIS when SIMD is not used. By employing SIMD, ParIS becomes 60% faster than ParIS-SISD. We then measure the performance of ParIS-TS, which is about 10% faster than ParIS. This performance improvement comes from the fact that using the index tree (instead of the SAX array that ParIS uses) to prune the search space and determine the data series for which a real distance calculation must be performed significantly reduces the number of lower bound distance calculations. ParIS calculates lower bound distances for all the data series in the collection, and pruning is performed only when calculating real distances, whereas in ParIS-TS pruning occurs when calculating lower bound distances as well.
MESSI-mq further improves performance by inserting only leaf nodes in the priority queue (thus reducing the size of the queue), and by using multiple queues (thus reducing the synchronization cost). This makes MESSI-mq 83% faster than ParIS-TS.
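The priority-queue-based search described above can be sketched as a best-first traversal that prunes subtrees whose lower bound exceeds the best-so-far (BSF) distance. The following is a minimal, hypothetical Python illustration, not MESSI's actual (C-based, multi-threaded) implementation: the `node_lower_bound` function is a stand-in for the iSAX lower-bounding distance, and `Node` is a toy tree structure.

```python
import heapq
import itertools
import math

class Node:
    """Toy index node: internal nodes hold children, leaves hold raw series."""
    def __init__(self, children=None, series=None):
        self.children = children or []
        self.series = series or []
    @property
    def is_leaf(self):
        return not self.children

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def node_lower_bound(query, node):
    # Stand-in for the iSAX lower bound: here we use the smallest real
    # distance in the subtree, which is a trivially valid lower bound,
    # for illustration only.
    if node.is_leaf:
        return min(euclidean(query, s) for s in node.series)
    return min(node_lower_bound(query, c) for c in node.children)

def exact_search(query, root):
    """Best-first search: prune any subtree whose lower bound is >= BSF."""
    bsf, best = math.inf, None
    tie = itertools.count()              # tie-breaker for equal bounds
    heap = [(node_lower_bound(query, root), next(tie), root)]
    while heap:
        lb, _, node = heapq.heappop(heap)
        if lb >= bsf:                    # all remaining bounds are larger:
            break                        # the search can stop early
        if node.is_leaf:
            for s in node.series:
                d = euclidean(query, s)  # real distance only for survivors
                if d < bsf:
                    bsf, best = d, s
        else:
            for child in node.children:
                clb = node_lower_bound(query, child)
                if clb < bsf:            # enqueue only promising subtrees
                    heapq.heappush(heap, (clb, next(tie), child))
    return best, bsf
```

The early-exit on `lb >= bsf` is what makes a faster-converging BSF (obtained, e.g., by processing leaves in lower-bound order) translate directly into fewer real distance calculations.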
[Real Datasets] Figures 15 and 16 reaffirm that MESSI exhibits the best performance for both index creation and query answering, even when executing on the real datasets, SALD and Seismic (for a 100GB dataset). The reasons for this are those explained in the previous paragraphs. Regarding index creation, MESSI is 3.6x faster than ParIS on SALD and 3.7x faster than ParIS on Seismic, for a 100GB dataset. Moreover, for SALD, MESSI query answering is 60x faster than UCR Suite-P and 8.4x faster than ParIS, whereas for Seismic, it is 80x faster than UCR Suite-P, and almost 11x faster than ParIS. Note that the relative performance of MESSI and UCR Suite-P differs between the real and the random datasets, because working on random data results in better pruning than working on real data. Figures 17(a) and 17(b) illustrate the number of lower bound and real distance calculations, respectively, performed by the different query algorithms on the three datasets. ParIS calculates the distance between the iSAX summaries of every single data series and the query series (because, as we discussed in Section II, it implements the SIMS strategy for query answering). In contrast, MESSI performs pruning even during the lower bound distance calculations, resulting in much less time spent on this computation. Moreover, this leads to a significantly reduced number of data series whose real distance to the query series must be calculated.
The use of the priority queues leads to even fewer real distance calculations, because they help the BSF to converge faster to its final value. MESSI performs no more than 15% of the lower bound distance calculations performed by ParIS.
[MESSI with DTW] In our final experiments, we demonstrate that MESSI not only accelerates similarity search based on Euclidean distance, but can also be used to significantly accelerate similarity search using the Dynamic Time Warping (DTW) distance measure [30]. We note that no changes are required in the index structure; we just have to build the envelope of the LB_Keogh method [31] around the query series, and then search the index using this envelope. Figure 19 shows the query answering time for different dataset sizes (we use a warping window size of 10% of the query series length, which is commonly used in practice [31]). The results show that MESSI-DTW is up to 34x faster than UCR Suite-P DTW (and more than 3 orders of magnitude faster than the non-parallel version of UCR Suite DTW).
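The LB_Keogh envelope mentioned above can be sketched in a few lines. This is a simplified illustration assuming raw (already normalized) series and a fixed warping window `r`; the actual MESSI-DTW implementation works over iSAX summaries and is heavily optimized.

```python
def keogh_envelope(query, r):
    """Upper/lower envelope of `query` for a warping window of r points:
    at each position, the max/min of the query within +/- r positions."""
    n = len(query)
    upper, lower = [], []
    for i in range(n):
        window = query[max(0, i - r): min(n, i + r + 1)]
        upper.append(max(window))
        lower.append(min(window))
    return upper, lower

def lb_keogh(candidate, upper, lower):
    """LB_Keogh lower bound of DTW(query, candidate): accumulate the
    squared excursions of the candidate outside the query's envelope."""
    total = 0.0
    for x, u, l in zip(candidate, upper, lower):
        if x > u:
            total += (x - u) ** 2
        elif x < l:
            total += (x - l) ** 2
    return total ** 0.5
```

A candidate is pruned whenever its LB_Keogh value already exceeds the BSF, so the (much more expensive) full DTW computation is performed only for the survivors.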
V. RELATED WORK
Various dimensionality reduction techniques exist for data series, which can then be scanned and filtered [32], [33] or indexed and pruned [6], [7], [11], [16], [17], [23], [24], [34], [35] during query answering. We follow the same approach of indexing the series based on their summaries, though our work is the first to exploit the parallelization opportunities offered by modern hardware, in order to accelerate in-memory index construction and similarity search for data series. The work closest to ours is ParIS [7], which also exploits modern hardware, but was designed for disk-resident datasets. We discussed this work in more detail in Section II.
FastQuery is an approach used to accelerate search operations in scientific data [36], based on the construction of bitmap indices. In essence, the iSAX summarization used in our approach is an equivalent solution, though, specifically designed for sequences (which have high dimensionalities).
The interest in using SIMD instructions for improving the performance of data management solutions is not new [37]. However, it is only more recently that relatively complex algorithms were extended in order to take advantage of this hardware characteristic. Polychroniou et al. [38] introduced design principles for efficient vectorization of in-memory database operators (such as selection scans, hash tables, and partitioning). For data series in particular, previous work has used SIMD for Euclidean distance computations [21]. Following [7], in our work we use SIMD both for the computation of Euclidean distances, as well as for the computation of lower bounds, which involve branching operations.
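The branching involved in lower-bound computations is precisely what makes their vectorization non-trivial: divergent control flow prevents SIMD lanes from executing uniformly. A common trick is to rewrite the conditionals arithmetically so that every lane runs the same instructions. The pure-Python sketch below only illustrates this transformation; actual SIMD code would use intrinsics such as `_mm256_max_ps`.

```python
def lb_branchy(x, u, l):
    """Per-point envelope excursion with explicit branches."""
    if x > u:
        return (x - u) ** 2
    if x < l:
        return (l - x) ** 2
    return 0.0

def lb_branchless(x, u, l):
    # max() replaces the conditionals: when x lies inside [l, u] both
    # terms clamp to zero, so all SIMD lanes can evaluate this uniformly
    # without divergent control flow.
    return max(x - u, 0.0) ** 2 + max(l - x, 0.0) ** 2
```

Both forms compute the same value; the branch-free form is the one that maps onto vector min/max instructions.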
Multi-core CPUs offer thread parallelism through multiple cores and simultaneous multi-threading (SMT). Thread-Level Parallelism (TLP) methods, like multiple independent cores and hyper-threads are used to increase efficiency [39].
A recent study proposed a high performance temporal index similar to the time-split B-tree (TSB-tree), called TSBw-tree, which focuses on transaction time databases [40]. Binna et al. [41] present the Height Optimized Trie (HOT), a general-purpose index structure for main-memory database systems, while Leis et al. [42] describe an in-memory adaptive radix indexing technique designed for modern hardware. Xie et al. [43] study and analyze five recently proposed indices, i.e., FAST, Masstree, BwTree, ART and PSL, and identify the effectiveness of common optimization techniques, including hardware-dependent features such as SIMD, NUMA and HTM. They argue that there is no single optimization strategy that fits all situations, due to the differences in dataset and workload characteristics. Moreover, they point out the significant performance gains that the exploitation of modern hardware features, such as SIMD processing and multiple cores, brings to in-memory indices.
We note that the indices described above are not suitable for data series (that can be thought of as high-dimensional data), which is the focus of our work, and which pose very specific data management challenges with their hundreds, or thousands of dimensions (i.e., the length of the sequence).
Techniques specifically designed for modern hardware and in-memory operation have also been studied in the context of adaptive indexing [44], and data mining [45].
VI. CONCLUSIONS
We proposed MESSI, a data series index designed for in-memory operation that exploits the parallelism opportunities of modern hardware. MESSI is up to 4x faster in index construction and up to 11x faster in query answering than the state-of-the-art solution, and is the first technique to answer exact similarity search queries on 100GB datasets in ∼50msec. This level of performance enables for the first time interactive data exploration on very large data series collections.
"year": 2020,
"sha1": "64dce74b6914efc4a40c11fae8a929499e004340",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2009.00786",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "76dd8e79c196e5c80de6e10eb55173848a0b4d87",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
THE VUCA ERA CREATES COVID-19 PANDEMIC IN INDONESIA BEING COMPLICATED
This article analyzes the confusion of information, the uncertainty of the situation, and the public bewilderment that overshadowed the actions of Indonesian citizens throughout 2020 and until the end of January 2021, as the pandemic had not yet ended and its end could not be predicted. Volatility, uncertainty, complexity of problems, and ambiguity of choices are the conditions described by the VUCA concept. Covid-19, which has infected more than one million people in the country, has caused upheaval in many domains: economic, political, social, cultural, and other aspects of life. By conducting a literature review from March 2020 to January 2021, analyzing secondary data from Twitter, and distributing short questionnaires through Google Forms in August 2020 to 969 respondents in Java, Bali, Sumatera, Kalimantan, Sulawesi, Nusa Tenggara, and Papua, the authors found that the VUCA era compounds the complexities of dealing with the Covid-19 pandemic. Clumsy policies create uncertainty in the process of social transformation towards a society that cares about health protocols. Some of the solutions offered by Bennett and Lemoine (2014) or Aura Codreanu (2016) are still insufficient to manage the changes that occur towards a better life. This article contributes ideas on social action concerning care for a new social order, togetherness, and responsibility for realizing a new normal life.

Keywords: VUCA; Covid-19; health protocol; social transformation; Large-Scale Social Restrictions
INTRODUCTION
New Year celebrations at the end of 2019 were marked by various activities not much different from those of previous years. Not long after the festivities, optimism began to fade as the outbreak that hit China quickly spread to several neighboring countries in Asia and to Europe within a few weeks, infecting 198 countries by March 25 and growing to 213 countries by June 20, 2020. Beyond the numbers, COVID-19 has had a major impact on life (Sismondo, 2020; Paniati, 2020) and brought dramatic changes to civilization (Van Dijk, 2020), visible in the many political, economic, sporting, educational, and other socio-cultural activities that were postponed or even canceled.
The globalization of COVID-19 showed that globalization brings not only science and technology (Castells, 2010), culture (Fukuyama, 1999), and politics (Ohmae, 1995), but also disease outbreaks: the Coronavirus caused disruptions in many sectors of activity in almost all countries of the world. While the global spread of the outbreak remained unpredictable, the number of Indonesians who had contracted Covid-19 by September 29, 2020, exceeded 270,000. Apart from the general public, many state and government officials tested positive, beginning with the Minister of Transportation on March 14, 2020, and the Mayor of Bogor after returning from abroad. Many senior officials of government institutions, provinces, regencies, and villages were also infected, and some of them passed away.
Although the number of people infected by COVID-19 in Indonesia was rising, policy makers in the first period of the outbreak made uncoordinated responses. On March 9, 2020, the President appointed the Secretary of the Directorate General of Disease Prevention and Control of the Ministry of Health (echelon 2) as the government spokesman on Covid-19 in Indonesia, replacing the Minister of Health and other state officials who had often made confusing statements. On March 13, a Task Force for handling Covid-19 was formed. The government was also hesitant in implementing the health protocol that WHO had urged at the end of January. In contrast, non-governmental institutions tried to take responsibility for facing the outbreak earlier: on March 13-14, Universitas Indonesia, followed by Universitas Gadjah Mada, Institut Pertanian Bogor, Universitas Airlangga, and other universities, announced distance learning to replace classroom activities. On March 15, the Ministry of Education and Culture began instructing schools and colleges to carry out distance learning, which was responded to by closing schools and sending students home from Islamic boarding schools (pesantren). On March 17, the Task Force issued a health protocol with three main elements: washing hands, wearing a mask, and maintaining social distance.
The unpredictability of the global outbreak, uncoordinated strategic communication by policy makers, and experimentation by non-governmental institutions became three key aspects of the COVID-19 situation in Indonesia that need further study. This article tries to reveal how the pandemic drives a social transformation constituted by Volatility, Uncertainty, Complexity, and Ambiguity, abbreviated as VUCA.
The VUCA concept was first introduced by the United States (US) military in the late 1980s, after the end of the Cold War (Bennett & Lemoine, 2014; Codreanu, 2016; van Tulder, Jankowska & Verbeke, 2020). Threat challenges arise from various directions and with various scenarios that are sometimes mathematically unpredictable, but their emergence and devastating effects must be overcome. This means that at any time the US military must be able to face growing challenges from asymmetrical opponents such as non-state militias and others that are organized "fluidly"; sometimes the enemy it faces is "virtual" (Casey, 2018). This topic, important in US military academies, is relevant to the shocks in human civilization, which has undergone great disruption (Fukuyama, 1999), and to the digital era that forms the backbone of daily activities (Castells, 2010). Indeed, Dominique Boullier (2017) argues that society is shaped by big data that spreads through social media. The pandemic situation offers clear examples of VUCA, as discussed below.
METHOD
To map and analyze VUCA during the Covid-19 turmoil, the researchers conducted a mixed-methods study of various secondary data published by the National Disaster Management Agency (BNPB) and the Task Force for the Acceleration of Handling Covid-19, several regulations reported in the mass media, and big data spread on social media, complemented by a short quantitative survey.
Data from social media were collected using workbenchdata.com, a website used by data journalists to track public opinion on social media. In this study, the researchers decided to collect Twitter data. Workbench, as a third party, facilitates access to Twitter by linking the researchers' Twitter account to request permission to collect data. Twitter is more open than Facebook for several reasons: it brings people together through hashtags related to various issues and allows them to engage in broader debate; it makes it easier for users to follow one another without having to add to a friends list; and it is today the only social medium that offers broad access to big data on conversations, unlike Facebook and Instagram.
Technically, data were collected through the query corona OR korona lang:in, meaning that all tweets mentioning these words in Indonesian were collected. Workbench has a limitation in that a maximum of 100 thousand rows can be collected. To stay within this limit, the researchers scheduled data collection every 15 minutes. Data were collected from March 1-13. This timeframe is important because on March 2 the Indonesian government announced the first case of Covid-19, so it is the right window for tracking the earliest information circulating on Twitter about Covid-19 in Indonesia. The collected data were processed by manually categorizing the tweets into several categories.
Figure 1. Twitter Data Trend
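The categorization described above was done manually. Purely as an illustration, an automated first pass could assign categories by keyword matching; the category names and keywords below are hypothetical and are not the authors' actual codebook.

```python
# Hypothetical codebook: each category maps to illustrative Indonesian keywords.
CATEGORIES = {
    "health advice": ["cuci tangan", "masker", "hand sanitizer"],
    "criticism of government": ["pemerintah", "kebijakan"],
    "humor": ["wkwk", "lucu"],
}

def categorize(tweet):
    """Assign a tweet to the first category whose keyword it contains;
    tweets matching no keyword fall into 'other'."""
    text = tweet.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "other"
```

In practice such keyword rules would only pre-sort the corpus; ambiguous tweets would still require the manual coding the study relied on.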
A quick quantitative survey was carried out with a convenience sampling technique using Google Forms, distributed through WhatsApp group networks from September 22 to 24, reaching respondents across almost all of Indonesia, including Greater Jakarta (Jabodetabek), Surabaya, Bandung, Padang, Medan, Makassar, Bali, Aceh, Pontianak, Palangkaraya, Timika, and Wakatobi, with an age range of 15-77 years and 969 responses. To measure a person's level of concern about the dangers of the Covid-19 pandemic, a question was asked on a Likert scale modified into a 7-point ratio scale, from 1 ("strongly disagree") to 7 ("strongly agree"). Two further statements asked respondents to recall their feelings 3-4 months ago and 5-6 months ago. The analysis compares mean scores.
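The mean-score comparison can be expressed in a few lines; the response values below are made up for illustration and are not the survey's actual data.

```python
def mean_score(responses):
    """Mean of 1-7 ratio-scale responses, as used to compare concern
    levels across the three recall periods."""
    return sum(responses) / len(responses)

# Hypothetical responses for the three recall periods (illustrative only).
now        = [6, 7, 5, 6, 7]
months_3_4 = [5, 5, 6, 4, 5]
months_5_6 = [7, 7, 6, 7, 6]

# Ordering the periods chronologically gives the trend in mean concern.
trend = [mean_score(months_5_6), mean_score(months_3_4), mean_score(now)]
```
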
RESULTS AND DISCUSSION
Besides affecting human health, even to the point of death, disease outbreaks in the millennial era have had a huge impact on various aspects of life. With the world connected in a large global network (Ohmae, 1990), events that develop in one country cannot be separated from events in others; in particular, information technology and increasingly influential social media (Castells, 2010) amplify various uncertainties amid the hesitation to plan activities for the coming year. Clara González-Sanguino et al. (2020) concluded in their study that the alarm generated by Covid-19 has turned into a crisis with real consequences throughout the world, caused by failures to comply with health protocols. Many Indonesians, particularly young people, behave like their reference groups (Abdul Aziz, 2020), especially Americans who do not comply with the health protocol. In several big cities, such as Jakarta and its surroundings, the very high number of deaths related to Covid-19 made burial grounds critical, so burial solutions emerged, such as providing a special area in certain public cemeteries or carrying out a stacked burial system. There were several unique situations in the effort to stop the Corona pandemic. As with the spread of HIV/AIDS (Ariadne, 2017), young people who were potentially affected had been socialized to avoid free sex and unauthorized needle use, yet the danger of the pandemic did not concern them. Despite the direction of social transformation towards a society that cares about health protocols, the dynamics of transmission made the process of social transformation from pre-pandemic life very complicated.
Volatility
The disease carried by Covid-19 does not consider social status, religious tradition, fitness level, or age. The statement by Tom Hanks and his wife on March 13, 2020, that they had been infected by Covid-19 in Australia while shooting a film took the world by storm. Several world leaders also tested positive: Bolivian President Jeanine Anez on July 9; British Prime Minister Boris Johnson on April 5; Brazilian President Jair Bolsonaro on July 7; Prince Charles on March 25, 2020; an advisor to the Iranian Minister of Foreign Affairs, who later died on March 5, 2020; and the President of Honduras on Tuesday, June 23. Remarkably, after being criticized by many countries, the President of the USA officially withdrew from the WHO on Monday, July 6, 2020, and stated that America would stop its routine assistance to the organization under the United Nations. Trump reasoned that the WHO had failed in dealing with the pandemic and accused the UN health agency of being a "puppet" of China.
Volatility affects not only individual health and safety, but also employment status and livelihoods. As stated by the Coordinating Minister for the Economy, the number of layoffs (PHK), which increased the unemployment rate, and the poverty rate both rose drastically by mid-August. Layoffs exceeded 2.1 million workers, and the poverty rate increased from 9.41 to 9.78 percent in just six months after the pandemic broke out. In addition, 34,100 migrant workers returned to their homeland. Including informal-sector workers who lost their jobs, the total could be more than 3.5 million people (Kompas.com, 10/08/2020, 06:54 WIB). Such data, of course, raise the next problem: social problems are on the rise.
Uncertainty
Various things related to Covid-19 and how to deal with it are full of uncertainty. The origin of the germ that spread throughout the world, bringing great changes to human life, cannot be ascertained, nor can its trajectory. Various theories, each with its own rationalization, emerged and were believed by various groups. The initial speculation, believed by the world community, concerned the source of the virus as a mutant of microorganisms derived from wild animals consumed by the inhabitants of Wuhan, whether bats or pangolins. Then a conspiracy theory emerged arguing that the city of Wuhan, which has sophisticated biological laboratories, was developing biological weapons and that, due to negligence, the developed virus escaped. A counter conspiracy theory then stated that American soldiers attending the military Olympics in Wuhan in November 2019 deliberately released the virus to hamper China's extraordinary economic and political growth. Speculation continued to grow until countries around the world no longer needed to worry about where the virus originated.
On January 30, 2020, WHO declared the highest level of global emergency for the Covid-19 outbreak. At that time there were only 82 cases of the virus outside China, with 10 cases in Europe and none in Latin America or Africa. Since then, the WHO Director has appeared on various television media and social media networks to explain the dangers of Covid-19 and convey health protocols. Countries around the world paid attention to WHO's call in their own ways, though several, such as the United States and Germany, refused some of its ideas. Indonesia chose the Large-Scale Social Restrictions (PSBB) policy instead of regional quarantine, implementing the health protocols compiled by the Covid-19 Handling Task Force. However, this term was renamed more than ten times, each time with its own reasons and consequences.
The issue of uncertainty started with the diminishing public credibility of WHO and then spread to various matters. In the first four months of the pandemic, the organization's guidance, which became a reference for countries around the world, helped increase public confidence that the outbreak could soon be resolved. Towards the end of April, however, various speculations against the world health institution dispelled that confidence, especially rumors that WHO officials were running a vaccine business and were involved in the emergence and spread of Covid-19. Speculation about the emergence of the virus also had many variants: gene mutations from animals consumed in China such as bats, or a biological weapon developed by China, or by the United States and Israel.
The confusion of information reached its most crucial stage with the question of how to treat those affected and which drugs prevent the disease. The Presidents of Brazil and of the US stated on April 18 that hydroxychloroquine could be taken to prevent Corona. High-ranking officials in Indonesia also promoted this malaria drug as a preventive for Corona. Several ideas on preventing the disease, such as the use of eucalyptus oil, spread through social media. Social media, which has grown beyond anyone's control, contributed significantly to this confusion, in line with the role of digital technology as the container of contemporary human life, known as the information explosion (Dwivedi et al., 2018; Umeozor, 2019).
DKI Jakarta was the first province to respond to the Covid-19 pandemic in Indonesia. However, what the Governor did in early March 2020 was considered by some an attempt to find a political stage for the 2024 Presidential Election, and he was labeled as over-reacting to a virus then believed not to have reached Indonesia. Thus, at the end of March, many people still followed the carelessness of state officials, both ministers and members of parliament, who made jokes belittling the plague. In general, public perceptions were divided in two: some believed the Governor's statement, and some believed the claims that Covid-19 had not entered the country.
What people shared in the first week of March 2020 reflected heavy unawareness of the danger of the Corona virus. The largest share of the 300 most-shared tweets related to health advice, although this number was almost the same as for content on Covid-19 information, criticism of the central government, religion, and conditions in other countries. This indicates that the information circulating among Twitter users at the start of the Indonesian outbreak related to these five issues.
Meanwhile, other issues that appeared in the tweets related to humor, poor drug distribution, issues related to China, hoaxes and data security, and comparisons with other medical issues.
Source: researcher

Based on the area graph above, content criticizing central government policies preceded other types of content and was the most shared before the announcement of Covid-19. This indicates public pressure and concern over the government's attitude even before the first case was announced. In the following days, health advice and information content was widely shared. One day after the announcement of the first case in Indonesia, Twitter carried much content about the difficulty of access to health services and PPE, followed by speculation about the condition of Covid-19 in China, the Indonesian government's ambiguous attitude towards Chinese tourists, conditions in other countries, and the spread of patients' personal identities. On the following day, Twitter was enlivened with content comparing other health issues such as dengue fever, influenza, smokers' health, and others. On March 7, responses from religious leaders became a concern, while content from the President's Twitter account was widely shared on March 9, 2020.
Health protocol content, such as tips on avoiding Covid-19 and using hand sanitizer and masks, was the most shared on Twitter, about twice as much as content criticizing the central government's policies and statements in response. Three other issues, namely case updates, PPE distribution, and other countries' policy responses, were shared in similar amounts, while other types of content were shared far less. Content from the President's Twitter account also made the top 10, albeit in the lowest position, indicating that the government's attitudes and statements were the center of attention.

In the second week of June, the ban on entering and leaving regions was abolished through a government policy easing the PSBB, which could be read to mean that the health protocol applied at the beginning of the Eid homecoming season, from the second week of May to the first week of June, had been excessive. As a result, the level of public awareness of health protocols decreased. Earlier, on April 9, the Ministry of Transportation had issued Regulation of the Minister of Transportation Number 18 of 2020 concerning Transportation Control in the Context of Preventing the Spread of Corona Virus Disease 2019, signed by the Minister of Transportation Ad Interim. In its implementation, however, it created confusion in the community over whether online motorcycle taxis could carry passengers as they had before. This regulation was followed by DKI Governor Regulation No. 33/2020, enforced on April 10, 2020, which allowed two-wheeled vehicles to carry passengers only if the driver and passenger had the same home address. This confused the roughly 1 million online motorcycle taxi drivers in Greater Jakarta (Jabodetabek) operating in DKI and its surroundings, as well as the many workers and residents who had previously used online motorcycle taxi services in Jakarta.
A few days later, on April 14, 2020, the government through the Ministry of Transportation (Kemenhub) allowed online motorcycle taxis to again transport passengers in areas applying the PSBB. The Ministry of Transportation's policy was seen as contradicting the Ministry of Health's policy in Article 15 of Minister of Health Regulation Number 9 of 2020 concerning PSBB Guidelines for the Acceleration of Handling COVID-19, which states that online motorcycle taxis may only operate to transport goods, not people. Allowing all parties, especially online motorcycle taxi drivers, to carry passengers clearly violates the essence of physical distancing.
The Ministry of Transportation also prohibited homecoming activities from April 24 to May 31, 2020. This rule applied to all land transportation modes, with strict sanctions for violators: vehicles attempting to leave a PSBB area such as Jabodetabek would be forced to turn back, or drivers would face criminal sanctions and a fine of IDR 100 million. But it did not last long; the regulation was then changed, and all modes of transportation could resume operations on May 7, 2020, albeit under limited criteria. Likewise, the ban on going home (entering and leaving an area) ended on June 7 with the issuance of Circular Letter (SE) No. 7 of 2020 by the Task Force for the Acceleration of Handling Covid-19, replacing Circular Letter No. 5.
Complexity
Efforts to prevent and handle a pandemic are confronted with a variety of strategic and practical interests from various elements in the country: development programs from the government, business interests of entrepreneurs, and political interests of political parties and their leaders; weak coordination between institutions; and, more importantly, a level of public awareness of and compliance with health protocols that is poorly disciplined. Indonesia's economic growth, which contracted to 2.97% in the first quarter (y-o-y against 2019) with the prospect of worsening, prompted the President of the Republic of Indonesia on June 20, 2020 to disband the Task Force for the Acceleration of Handling Covid-19 and form a Task Force for Handling Covid-19 under the Coordinating Minister for Economic Affairs through Presidential Regulation Number 82 of 2020 concerning the Committee for Handling Covid-19 and National Economic Recovery. Placing the handling of the pandemic under the Coordinating Minister for the Economy looks rational considering that in the second quarter the Indonesian economy experienced its worst contraction of 5.93% (y-o-y). Therefore, although on several occasions the President said that "public health" is the priority, in practice economic and business activities appear to be the main activities, with health protocols playing a supporting role. The increase in the number of unemployed by 3.7 million people, as well as the rise in the percentage of people below the poverty line, has become a frightening specter which, if not stopped, could grow larger and have more dire impacts. Furthermore, the great attention to the economy is not only about protecting people's livelihoods but also reflects the interests of elites in government and political parties that own businesses, as well as the buzzers who are the clienteles of these figures.
Concern that the Covid-19 pandemic would turn into a recession and continue into an economic crisis caused panic among high-ranking officials, reflected in the Coordinating Minister for Economic Affairs' response to local policy, for instance after the Governor of DKI announced on September 10 that he would "pull the emergency brake". The concerns of these figures were matched by the panic of elite groups on social media condemning the policy, which resulted in a negative market response and a fall in the Composite Stock Price Index (IHSG). The fall in the IHSG was read as a signal of imminent economic recession. After one day of decline, the index rose again on Monday, which raised a question: is it true that the market contracted due to the implementation of the Jakarta PSBB? An economic condition that is highly sensitive to public discourse is what Robert J. Shiller (2019) calls narrative economics. Furthermore, the statement on pulling the emergency brake, prompted by the increase in Covid-19 cases in the country's capital, was responded to loudly on social media, indicating the remnants of the political battles of the 2017 DKI Jakarta gubernatorial election and the 2019 presidential election. Efforts to accelerate the handling of Covid-19 are often colored by emotions lingering from the political arena. The Head of BNPB attempted to confirm that, during the formulation of this PSBB policy, the Governor did not use the phrase "PSBB Total" for DKI Jakarta; after an investigation, it turned out that the term "PSBB Total" appeared in the mass media. Recently, the mass media have often been inadequate in reporting because they compete on speed with social media. This weakness makes the media prone to using terms that are not necessarily precise and accurate reflections of the facts.
(The Vuca Era Creates Covid-19 Pandemic in Indonesia Being Complicated; Ricardi S Adnan, Fadlan Khaerul Anam, and Radhiatmoko Radhiatmoko)
On the other hand, issues that have already gone viral on social media are exacerbated by the relatively low level of digital literacy in society.
Another very basic problem is the state's institutional unpreparedness in handling the pandemic. Weak information flow and coordination between institutions caused more than 100 doctors and medical personnel to die as a result of shortages of personal protective equipment. The procurement of medical equipment, which must be carried out quickly, was hampered by various bureaucratic procedures that the parties concerned did not complete deftly. Likewise, data and information that often differed between institutions resulted in inaction. Medical personnel have conveyed their exhaustion in dealing with the pandemic since its first two months, having long left their families to fight for their patients' health. Yet the incentives for medical personnel promised by the government could only be disbursed three months later, and even then in installments.
People have also grown weary since the PSBB was relaxed in June. Tourist spots are flooded with visitors venting the boredom of staying at home and seeking recreation. Since then, public compliance with health protocols has begun to loosen, along with loosening control and law enforcement by the authorities against offenders. It cannot be denied that the decline in public compliance is due to confusion over the government's various policies, many of which are difficult to understand, as well as the behavior of local elites who do not set good examples, such as holding their children's wedding parties with Dangdut music concerts, or political figures who invite the masses to be actively involved in the Pilkada (regional elections).
During the pandemic, hoaxes have also spread, especially on social media, while government policies have tended to limit reporting on the threat of Covid-19 at home and abroad. Social media has in many ways replaced the role of the mass media, but it has also become an important means of creating uproar. Besides exacerbating differences in political views over policy, social media has become a promotional channel for businesspeople seeking to exploit the pandemic for economic gain.
Among the rumors that undermine health protocols is the claim that hospitals readily declare deceased patients to have been infected with Covid-19. Hospitals are suspected of seeking profit, allegedly receiving government assistance of 200 million rupiah for treating a Covid-19 patient and 350 million rupiah if the patient dies. Public distrust increases when a family member dies and the family, believing that the person was not affected by Covid-19, forcibly retrieves the body to bury it normally according to local tradition, as has happened in many places such as Manado, Makassar, Surabaya, Madura, Batam, Bekasi, and Lombok. Data held by families is circulated on social media to confront hospital statements. This fact shows that distrust in Indonesian society remains high, consistent with the research of Fahmi et al. (2019) on the level of individual and community trust in health.
Efforts to prevent the spread of the pandemic and to handle it were subsequently confronted with a dilemmatic policy: the government was able to prohibit some 3,000 village head elections (pilkades) yet still ran the simultaneous regional elections in December 2020. Although criticized by many parties, the government's political decision stood, and many violations of health protocols occurred during the registration period and at the beginning of the campaigns of regional head candidates.
Ambiguity
Efforts to deal with the spread of Covid-19 face several stark dichotomies: prioritizing health or maintaining the sustainability of economic activity; staying consistent with health protocols or prioritizing the politics that has become the breath of development; prioritizing the interests of citizens in a region or the national interest. Countries that imposed lockdowns, such as China, South Korea, New Zealand, and Italy, were able to quickly reduce the spread of Covid-19. However, this successful approach was not feasible for the Indonesian government: given the limited state budget, it was impossible to guarantee the availability of food and other necessities during a lockdown. More than a few policies taken by provincial governments were disregarded by the central government, and likewise, policies taken by regional governments were often viewed poorly by the provincial and central governments.
The Jakarta Governor's policy in early April 2020, which prohibited two-wheeled motorized vehicles from carrying passengers, limited office working hours, and closed shops and entertainment venues, had a huge economic impact, so many people demanded that they be reopened immediately. The reduction in the operational capacity of TransJakarta and the MRT left vehicles packed with passengers, defeating one of the main principles of the health protocol, "maintaining distance". PT KAI's policy of reducing operating hours and the number of commuter line carriages produced long queues, which in turn caused employees' productivity at work to decrease drastically. Provincial governments faced the same dilemma as the number of positive cases of this deadly virus rose: enforce strict discipline in adhering to health protocols, or allow people to work for food.
An interesting regional case of ambiguity occurred in East Java, namely the dispute between the Governor of East Java and the Mayor of Surabaya over the surge in positive Covid-19 cases in Surabaya and the efforts to overcome it. The Surabaya government sought to limit vehicles entering from outside the city; the Governor's response, however, was the opposite, expecting coordination with the province regarding such restrictions. The disagreement between these governments was reflected in several cases, such as at the PT HM Sampoerna cigarette factory, where dozens of new positive cases were reported. In parallel, the curve of positive corona cases increased significantly in Surabaya, making the area a black zone.
Social Transformation
New habits have emerged among human beings since Covid-19 spread across the world; people are changing their habits and attitudes, and in social science this is called social transformation (Castle, 2003; Maton, 2000). The social structure affects the environment and vice versa. Giuseppe Feola (2015) stated that changes in society lead to changes in the environment, as is suspected to be the cause of the emergence of the coronavirus in China, and changes that occur in the environment in turn bring changes back to society. As Cass (2018) notes, infrastructures (road, rail, electricity, gas, water, broadband) are typically conceptualized as 'large', extensive, and somewhat durable systems that are interdependent with societal transformation. Global environmental change thus plays a large part in creating social transformation by changing the social structure, and social networks contribute to social adaptation and transformation (Barnes, 2017). Regarding the pandemic, at least in this short study, there was growing worry in the public mind about the pandemic situation in the third week of September 2020, as shown in Figure 5 below.
Furthermore, Feola (2015) argued that "society finally needs to adapt ... with the most important thing being the adjustment of institutional aspects so that transformation can run smoothly". This opinion does not seem to fit the pandemic conditions in Indonesia. In the current VUCA era, institutions, elite figures, and the public still do not appear aligned toward the social transformation that is believed to be the solution: living peacefully by following health protocols so that the spread of COVID-19 can be suppressed. The expectations of many parties, as conveyed by the President in May, are still far from being met: the trend of Covid-19 cases continues to rise and has not declined as it has in other countries.
After seven months of the pandemic spreading in Indonesia, the community is getting bored and the apparatus is getting tired (Tempo.co.id, Selasa, 13 Oktober 2020 17:22 WIB; Kompas TV, Selasa, 13 Oktober 2020 17:22 WIB; Liputan 6, 03 Des 2020, 10:00 WIB), and the economic burden on the people is getting heavier, so that compliance with health protocols has begun to loosen. Worley (2020) argued that the oversized impact of the coronavirus pandemic on communities of color suggests that too much of our leadership, organization design, and change research has slanted discussions away from uncomfortable realities. This happened in Indonesia: public concern about the outbreak faded, and there has been an explosion in the number of people contracting the disease, more than 1,000 cases almost every day since mid-January 2021. On the other hand, it was clear that the government was hesitant and nervous, and in the face of this outbreak its various policies proved ineffective.
CONCLUSION
The Covid-19 pandemic in Indonesia has run into the consequences of VUCA, making the situation increasingly volatile, uncertain, complex, and ambiguous. There are no easy public policy choices to guide people through a social transformation toward caring about health protocols. De Roo (2017) calls this the "world of becoming", a form of environmental change caused by the high complexity in society. The social environment is not conducive to helping people live in the new normal. The suggestions of Nathan Bennett and G. James Lemoine (2014) to increase agility, to manage large volumes of good information, to restructure institutions, and to develop experiments could not be implemented in this pandemic season. The complexity that has occurred provides no guarantee that a social transformation concerned with health protocols will become a new habit, as envisaged in the new normal concept. The recommendation of Aura Codreanu (2016) likewise could not be applied well. Social solidarity among the people and government officials facing Covid-19 was lacking, and these are real challenges to transforming society to live in peace without worrying about the threat of the coronavirus. However, optimism began to emerge when the vaccine became available in the second week of January 2021, albeit in very limited quantities and without a guarantee of one hundred percent effectiveness. Moreover, we believe that social concern, togetherness, and responsibility are the keys to arriving at a new normal. Further study is needed to synthesize the remaining difficulties, to create better solutions, and to develop a sharper theoretical framework.
ON THE EXISTENCE OF BUBBLE-TYPE SOLUTIONS OF NONLINEAR
Considered in this paper is a class of singular boundary value problems, arising in hydrodynamics and nonlinear field theory, when centrally symmetric bubble-type solutions are sought: (p(t)u′)′ = c(t)p(t)f(u), u′(0) = 0, u(+∞) = L > 0 on the half-line [0, +∞), where p(0) = 0. We are interested in strictly increasing solutions of this problem on [0, ∞) having just one zero in (0, +∞) and a finite limit at zero, which is of great importance in applications and in pure and applied mathematics. Sufficient conditions for the existence of such solutions are obtained by applying critical point theory and by using a shooting argument [1, 2] to better analyze the properties of certain solutions associated with the singular differential equation. To the authors' knowledge, this is the first time the above problem has been treated when f satisfies a non-Lipschitz condition. Recent results in the literature are generalized and significantly improved.
Introduction
The singular problem we investigate in this paper appears when the Cahn–Hilliard theory is applied to study the behavior of nonhomogeneous fluids (fluid–fluid, fluid–vapor, fluid–gas, etc.; see, e.g., [3,5] and the references therein). If ρ is the density of the medium and µ(ρ) the chemical potential of a nonhomogeneous fluid, and the motion of the fluid is absent, the state of the fluid in R^N is described by Eq. (1), where γ and µ0 are suitable constants. This equation can describe the formation of microscopic bubbles in a nonhomogeneous fluid, in particular vapor inside a liquid. With this purpose, we add boundary conditions for the bubbles to Eq. (1). By central symmetry, for the smoothness of solutions of (1) at the origin it is necessary that ρ′(0) = 0. (2) Since the bubble is surrounded by an external liquid with density ρ_l, the following condition holds at infinity: lim_{r→∞} ρ(r) = ρ_l > 0. (3)
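The displayed formulas for (1)–(3) were lost in extraction. Based on the surrounding text (which quotes conditions (2) and (3)) and the Cahn–Hilliard / density-profile literature cited in [3,5], a plausible reading is the sketch below, not the authors' verbatim display:

```latex
% Hypothetical reconstruction of (1)-(3); (2) and (3) are quoted in the text,
% while the form of (1) is inferred from the cited Cahn-Hilliard literature.
\begin{align}
\gamma\,\Delta\rho &= \mu(\rho) - \mu_0 \quad \text{in } \mathbb{R}^N, \tag{1}\\
\rho'(0) &= 0, \tag{2}\\
\lim_{r\to\infty}\rho(r) &= \rho_l > 0. \tag{3}
\end{align}
```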
From (3) it follows that µ0 = µ(ρ_l). Whenever a strictly increasing solution of problem (1)–(3) exists with ρ(0) = ρ_v, where 0 < ρ_v < ρ_l, then ρ_v is the density of the gas at the center of the bubble and the solution ρ determines an increasing mass density profile [12]. In the case of plane or spherical bubbles, Eq. (1) takes the form (4), where N = 2 or N = 3, respectively, which is known as the density profile equation [3,6].
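Displays (4)–(6) are likewise missing from the extracted text. In the density-profile literature [3,6,12], the radially symmetric reduction of (1) and the bubble boundary conditions are commonly written as follows; this reconstruction is an informed guess rather than the authors' exact statement:

```latex
% Plausible reconstruction of (4)-(6): the density profile equation
% (N = 2 for plane, N = 3 for spherical bubbles), its divergence form,
% and the boundary conditions for a bubble-type solution.
\begin{align}
\rho'' + \frac{N-1}{r}\,\rho' &= 4\lambda^2 (\rho+1)\,\rho\,(\rho-\xi), \tag{4}\\
\bigl(r^{N-1}\rho'\bigr)' &= 4\lambda^2\, r^{N-1}(\rho+1)\,\rho\,(\rho-\xi),
  \qquad 0 < r < \infty, \tag{5}\\
\rho'(0) &= 0, \qquad \lim_{r\to\infty}\rho(r) = \xi > 0. \tag{6}
\end{align}
```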
A more general situation arises when the constant coefficient 4λ² is allowed to depend on the variable r, and r^{N−1} and the nonlinear term are generalized by p(r) and f(ρ), respectively. Note that the nonlinear boundary value problem (5), (6) has at least the solution ρ(r) ≡ ξ > 0. We are interested in solutions which are strictly increasing and have exactly one zero in (0, ∞). If such solutions exist, many important physical properties of the bubble depend on them (in particular, the gas density inside the bubble, the bubble radius, and the surface tension). It is also interesting to remark that boundary value problems of the same kind arise in nonlinear field theory [4]. Therefore, it is an interesting and challenging problem to study the more general system which we discuss in this paper. We investigate in this paper a generalization of problem (5), (6), which we refer to as a second-order singular boundary value problem (BVP for short) on the half-line: Eq. (7) with u′(0) = 0, u(+∞) = L > 0, (8) where p, c and f are given continuous functions satisfying some assumptions and p(0) = 0. We consider the existence of a strictly increasing solution of problem (7), (8) having just one zero in (0, ∞) and belonging to C¹([0, ∞)) ∩ C²((0, ∞)). When c(t) ≡ 1, the singular BVP (7), (8) has been investigated in [15,16] and [17] by means of differential and integral inequalities and the upper and lower functions approach, respectively. We mention here that if c(t) ≢ 1, some arguments in [15]–[17] are unavailable. Problem (7), (8) can be transformed into a problem concerning the existence of a strictly decreasing and positive solution on the positive half-line, which is of significant importance in many disciplines of science such as engineering as well as in pure and applied mathematics. For p(t) = t^k, k ∈ N or k ∈ (1, ∞), such a problem was solved by a shooting argument combined with variational methods in [1] and [2], respectively.
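Restoring the derivative symbols that were dropped in extraction (the abstract and the physical condition (2) both indicate a Neumann condition at the origin), problem (7), (8) reads:

```latex
% BVP (7)-(8), with primes restored; cf. the abstract and condition (2).
\begin{align}
\bigl(p(t)\,u'(t)\bigr)' &= c(t)\,p(t)\,f\bigl(u(t)\bigr),
  \qquad t \in (0, \infty), \tag{7}\\
u'(0) &= 0, \qquad u(+\infty) = L > 0. \tag{8}
\end{align}
```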
It is worth pointing out that in this paper, if p(t) reduces to t^k, we can extend the range to k ∈ (0, ∞). As for BVP (5), (6), an analytical–numerical investigation and numerical simulations of the problem can be found in [12] and [3,8], respectively. We emphasize that if BVP (7), (8) reduces to BVP (5), (6), some of the sufficient conditions obtained in this paper are also necessary. It should be mentioned here that critical point theory is a powerful tool for dealing with boundary value problems for differential equations on bounded and unbounded domains; see for instance [11,13,14]. In particular, the existence of homoclinic solutions of differential equations has been extensively and intensively studied; see [7,9,18,19] and the references listed therein for information on this subject. Note that the strictly increasing solutions of BVP (7), (8) having just one zero in (0, ∞) and a finite limit at zero can also be called homoclinic solutions [15]–[17].
When f satisfies a non-Lipschitz condition, as far as the authors know, there is no research on the existence of monotone solutions of BVP (7), (8) or similar problems. In the present paper we are interested in the case that f satisfies a non-Lipschitz condition and, motivated mainly by the papers [2,15,16], we consider the more general problem (7), (8) by applying critical point theory and by using a shooting argument [1,2] to better analyze the properties of certain solutions associated with the singular differential equation. Recent results in the literature are generalized and significantly improved. We now present the basic assumptions needed to obtain the main results of this paper. In addition, we need the following hypothesis (H6) on the function p to state the first result, where c2 and F are given by (H1) and (9), respectively.
Remark 1.1. We will show in the Appendix that, under condition (H4), the explicit condition (11) is sufficient for (H6). There are many functions satisfying (H4)–(H6). We can now state our first main result. Theorem 1.1. Problem (7), (8) possesses at least one strictly increasing solution u with just one zero and u(0) ∈ [L0, 0).
Remark 1.2.
In the particular case that c(t) ≡ 1, assume that f ∈ Lip([L0, L]) and that (H3), (H4) and (11) hold. In [16], Rachůnková et al. obtained the existence of escape solutions of BVP (7), (8), which can be used to find the strictly increasing solution having just one zero in (0, ∞) (also called a homoclinic solution) of BVP (7), (8). Note that homoclinic solutions were obtained under similar conditions in [15] by means of differential and integral inequalities, and under stronger conditions in [17] by the upper and lower functions approach. However, in Theorem 1.1 we do not require f to satisfy a Lipschitz condition. Moreover, as we will show in the Appendix, if the function p satisfies (11), then (H6) holds, but the converse is not true. In fact, if p(t) = e^t − 1 with e^{L−L0} < 1/F0 + 1, then it is easy to check that (H4)–(H6) hold but (11) does not. Therefore, we generalize and improve the results in [15]–[17] in some sense.
In addition, if we replace c2/c1 < 1 + F(L)/F(L0) by f′(0) < 0, then we have the following theorem. Theorem 1.2. Problem (7), (8) possesses at least one strictly increasing solution u with just one zero and u(0) ∈ [L0, 0). Remark 1.3. Problem (7), (8) can be transformed into problems concerning the existence of a strictly decreasing and positive solution, which have been considered in [1] and [2] with p(t) = t^k for k ∈ N and k ∈ (1, ∞), respectively. However, in this paper, if p(t) reduces to t^k, we do not need any requirement on k except that k ∈ (0, ∞), thanks to an original decomposition technique used to better estimate functions in the new function space which we construct in Section 2. From this point of view, we improve and generalize the results in [1,2]. Remark 1.4. When BVP (7), (8) reduces to problem (5), (6), a simple calculation shows that the conditions of Theorems 1.1 and 1.2 are satisfied if and only if 0 < ξ < 1. On the other hand, according to [12] (Proposition 4), 0 < ξ < 1 is also a necessary condition for the existence of at least one strictly increasing solution having exactly one zero in (0, ∞) for problem (5), (6). We now give the main idea of this paper. Similar to [15,16], consider the auxiliary equation (12), where f̃ is a suitable bounded modification of f. It is obvious that if BVP (12), (8) has a strictly increasing solution u having just one zero with u(0) ∈ [L0, 0), then u is also a solution of BVP (7), (8) with the required properties. Therefore, we only need to consider problem (12), (8) in the rest of the paper. Firstly, we consider the case that f satisfies a Lipschitz condition. Motivated mainly by the papers [15,16], we discuss the initial value problem (IVP for short) (12) with the initial value condition (14) by means of the contraction mapping theorem and differential and integral inequalities.
On the other hand, motivated by [2], using a variational method and estimating the values of the variational functional (24) at critical points, we carry out a study of the existence and properties of solutions of BVP (12) with the boundary value condition (15). It then suffices to invoke the shooting argument of [1] to obtain the existence results for BVP (12), (8), and thus for BVP (7), (8), in the case that f satisfies a Lipschitz condition. Let us describe the main idea of the proof. Set I = (L0, 0) and let I_i (i = 1, 2) be the subsets of I consisting of all B such that the solution of IVP (12), (14) corresponding to B is of type (i), i = 1, 2; see Proposition 4.1 for the precise definitions of type (i), i = 1, 2, 3. We then prove that I1 and I2 are disjoint, nonempty open sets, from which we conclude that there exist elements B ∈ I which belong neither to I1 nor to I2. We conclude the proof by showing that such an element B yields a solution of BVP (12), (8) with the required properties.
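As a purely illustrative companion to this shooting argument, the sketch below numerically classifies solutions of the model density profile case p(t) = t^k, c(t) ≡ 4λ², f(u) = (u+1)u(u−ξ) (so L0 = −1, L = ξ) and bisects on the initial value B = u(0): trajectories that clearly overshoot L are treated as escape solutions, all others as non-escape, and the bubble-type solution sits on the boundary between the two sets. The parameter values, the truncation of [0, ∞) to [ε, T], the series start-up near the t = 0 singularity, and the overshoot test are all assumptions made for this demo, not part of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model data (assumed for illustration): p(t) = t^k, c(t) = 4*lam^2,
# f(u) = (u+1)*u*(u-xi), so the equilibria are L0 = -1, 0 and L = xi.
lam, k, xi = 1.0, 2.0, 0.5
EPS, T = 1e-2, 30.0          # truncation of the half-line [0, +infinity)

def f(u):
    # c(t)*f(u) with the constant c folded in
    return 4.0 * lam**2 * (u + 1.0) * u * (u - xi)

def rhs(t, y):
    # (t^k u')' = t^k c f(u)  <=>  u'' + (k/t) u' = c f(u)
    u, v = y
    return [v, f(u) - (k / t) * v]

def escape_event(t, y):
    # solution has clearly overshot L = xi: declare it an escape solution
    return y[0] - (xi + 0.2)
escape_event.terminal = True

def classify(B):
    """Integrate the IVP u(0)=B, u'(0)=0 and classify the trajectory.
    A two-term series start-up at t = EPS avoids the 1/t singularity:
    u ~ B + f(B) t^2 / (2(k+1)),  u' ~ f(B) t / (k+1)."""
    u0 = B + f(B) * EPS**2 / (2.0 * (k + 1.0))
    v0 = f(B) * EPS / (k + 1.0)
    sol = solve_ivp(rhs, [EPS, T], [u0, v0], rtol=1e-9, atol=1e-12,
                    events=escape_event)
    return "escape" if sol.t_events[0].size else "nonescape"

def shoot(a=-0.99999, b=-0.01, iters=30):
    """Bisection on B in (L0, 0): a must give escape, b non-escape.
    Returns an approximation of u(0) for the bubble-type solution."""
    assert classify(a) == "escape" and classify(b) == "nonescape"
    for _ in range(iters):
        m = 0.5 * (a + b)
        if classify(m) == "escape":
            a = m
        else:
            b = m
    return 0.5 * (a + b)
```

Calling `shoot()` returns a value in (−1, 0) approximating the initial density at the bubble center for these illustrative parameters; the monotonicity of the classification in B, which bisection implicitly relies on, is exactly what the paper's analysis of the sets I1, I2 makes rigorous.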
Finally, we study BVP (12), (8) under a non-Lipschitz condition. Motivated by [10], we first construct a sequence of Lipschitz functions f̃_n which gives a nice approximation of the continuous function f̃ on (−∞, +∞) as n → ∞. Then we consider problem (12), (8) with f̃ replaced by f̃_n. Using the results obtained in Section 4, we obtain a sequence of functions u_n, strictly increasing on (0, ∞), each with just one zero and u_n(0) ∈ (L0, 0). We prove that, as n → ∞, the limit of u_n exists and is a solution of BVP (12), (8) with the required properties.
Part of the difficulty in treating the non-Lipschitz case is caused by the fact that, in order to use the results obtained in Section 4, we need the properties of f̃_n to be similar to those of f̃ on (−∞, +∞). Moreover, the limit of the functions u_n should also be a solution of BVP (12), (8) satisfying the required properties.
The remainder of this paper is organized as follows. In Section 2, we develop a Hilbert space and exhibit a variational functional for BVP (12), (15); some inequalities and properties are proven which form the basis for the subsequent use of critical point theory. In Section 3, in the case that f ∈ Lip([L0, L]), some basic properties of solutions of IVP (12), (14) and BVP (12), (15) are discussed, and in Section 4 several existence criteria for strictly increasing solutions of BVP (12), (8) having just one zero, with initial values in (L0, 0), are obtained under the Lipschitz condition. In Section 5, the case that f satisfies a non-Lipschitz condition is discussed and the proofs of Theorems 1.1 and 1.2 are given.
Throughout this paper, we denote by C a positive constant that may change from line to line.
2. Variational structure for BVP (12) and (15)
We shall make use of a variational setting, where solutions of BVP (12), (15) are in correspondence with critical points of a functional. The main idea of this section comes from [2], in which the Sobolev space with weight t^k, k > 1, has been considered. As we pointed out in the Introduction, if p(t) reduces to t^k, we can extend the range to k ∈ (0, ∞).
Given T ∈ (0, ∞), we introduce a function space H(0, T ) consisting of all functions u absolutely continuous on [0, T ] such that the quantity (16) is finite and u(T ) = 0. The right-hand side of (16) defines the square of a norm on this space.
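The display (16) did not survive extraction. By analogy with the weighted Sobolev spaces used in [2] for p(t) = t^k, the norm is plausibly of the weighted H¹ type sketched below (an assumption, not the authors' verbatim definition):

```latex
% Plausible form of (16) (the original display is missing): a weighted
% H^1-type norm with weight p(t), by analogy with [2] where p(t) = t^k.
\|u\|^2 = \int_0^T p(t)\,\bigl(u'(t)^2 + u(t)^2\bigr)\,dt .
```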
It is easy to verify that H(0, T ) is a reflexive and separable Banach space. In fact, let {u_n} be a Cauchy sequence in H(0, T ). Then, by the completeness of the underlying weighted spaces, u_n → u in H(0, T ) as n → +∞. The reflexivity and separability of H(0, T ) can be verified by standard arguments.
Here we define the inner product on H(0, T ) accordingly, and H(0, T ) is a Hilbert space with respect to this inner product.
We can now give some useful estimates.
holds, where α is given in (H5) and C > 0 is a constant depending on T .
Proof. For any u ∈ H(0, T ), noting that u(T ) = p(0) = 0 and 0 < α < 1, we have the above estimate. Therefore, by the Cauchy–Schwarz inequality we obtain (18). According to (H4) and (H5), there exists a constant C such that the required bound holds. Combining this with (18), we obtain (17) and the proof is complete.
Proof. For any u ∈ H(0, T ) and t ∈ [0, T ], the estimate follows by applying the Cauchy–Schwarz inequality and (17). The proof is complete.
Proof. For any u ∈ H(0, T ) and t ∈ [0, T ], by (H4) and (H5) we may write u as a product of two functions. Noticing that the second function on the right-hand side is bounded for t ∈ [0, T ], we then apply (17) to conclude. The proof is complete. These properties have as an immediate consequence the following proposition. Proof. Consider the function set C_p(0, T ) with the norm ∥ · ∥_p defined accordingly. Then (C_p(0, T ), ∥ · ∥_p) is a Banach space, by arguments similar to those used for H(0, T ). According to (19), the injection of H(0, T ) into C_p(0, T ) is continuous. By the Banach–Steinhaus theorem, {u_k} is bounded in H(0, T ) and, hence, in C_p(0, T ). Moreover, the sequence {pu_k} is equi-uniformly continuous since, for 0 ≤ t1 < t2 ≤ T , applying (17) and in view of (20), we have the required modulus of continuity. By the Arzelà–Ascoli theorem, {pu_k} is relatively compact in C([0, T ]), and thus, passing to a subsequence if necessary, we may assume that pu_k → u* in C([0, T ]). Hence, u_k → u*/p in C_p(0, T ). By the uniqueness of the weak limit in C_p(0, T ), every uniformly convergent subsequence of {u_k} converges uniformly on [0, T ] to u in C_p(0, T ), which means that pu_k → pu in C([0, T ]), and this completes the proof.
We are now in a position to establish a variational structure which enables us to reduce the existence of solutions of BVP (12), (15) to that of finding critical points of the corresponding functional defined on the space H(0, T ).
First of all, consider BVP (22). It is obvious that u is a solution of BVP (22) if and only if u + L is a solution of BVP (12), (15). Therefore, we seek a solution of BVP (12), (15) by solving BVP (22). Proposition 2.5. If (H1), (H4) and (H5) are satisfied, then (i) the functional ϕ given by (24) is continuously differentiable on H(0, T ) and, for any u, v ∈ H(0, T ), (25) holds; (ii) ϕ is weakly lower semicontinuous and coercive on H(0, T ); and (iii) every critical point of ϕ is a solution of BVP (22).
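The displays (24) and (25) are missing from the extracted text. A plausible reconstruction, consistent with the Euler–Lagrange structure of BVP (22) and with the later use of ⟨ϕ′(u), v⟩ in the proof, is the following sketch (the exact form in the original may differ):

```latex
% Hypothetical reconstruction of (24)-(25): the action functional for
% (p u')' = c p \tilde f(u + L), u(T) = 0, where
% \widetilde F(u) = \int_0^u \tilde f(s + L)\,ds as in (23).
\varphi(u) = \int_0^T \Bigl[ \tfrac{1}{2}\, p(t)\, u'(t)^2
      + c(t)\, p(t)\, \widetilde F\bigl(u(t)\bigr) \Bigr] dt ,
\qquad
\langle \varphi'(u), v \rangle = \int_0^T \Bigl[ p\, u' v'
      + c\, p\, \tilde f(u + L)\, v \Bigr] dt .
```

With this form, ⟨ϕ′(u), v⟩ = 0 for all v recovers (p u′)′ = c p f̃(u + L) after integration by parts, matching part (iii) of the proposition.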
Proof. The proof of (i) follows standard lines (see, e.g., [13] (Theorem 1.4)), using Propositions 2.2–2.4, so we omit it. For the proof of (ii): ϕ is a weakly lower semicontinuous functional on H(0, T ) as the sum of a convex continuous function [13] (Theorem 1.2) and a weakly continuous one [13] (Proposition 1.2). On the other hand, it follows from Proposition 2.2 that ϕ is coercive. For the proof of (iii), we first note that a critical point u of ϕ satisfies u(T ) = 0 and ⟨ϕ′(u), v⟩ = 0 for any v ∈ H(0, T ). Integrating (26) between t1 > 0 and t2 > t1 and using the boundedness of the functions c and f̃, we conclude that pu′ satisfies the Cauchy condition at t = 0 and t = T , so that p(t)u′(t) has a finite limit as t → 0+ or t → T −. We shall show that p(t)u′(t) → 0 as t → 0+. Multiplying (26) by u and integrating between 0 and T , and noting that p(t) is increasing on (0, ∞) and that c and f̃ are bounded, we obtain the required estimate with a constant C > 0, and the assertion follows. The proof is complete.
Let us conclude Section 2 with some remarks. Consider the functions F and F̃ defined by (9) and (23), respectively. Assumptions (H_2) and (H_3) yield the following results.
(iii) For each b > 0 and each ε > 0, there exists δ > 0 such that, for any B_1, B_2 with |B_1 − B_2| < δ, the corresponding solutions are uniformly close on [0, b]. Here u_i is the unique solution of IVP (12), (14) with B = B_i, i = 1, 2.
Proof. Note that f̃ is Lipschitz and bounded on (−∞, +∞), and that (H_1) implies the boundedness of c. The proof of (i) is similar to that of [16, Lemma 4]; the main tool is the contraction mapping theorem. The proof of (ii) follows the arguments of Steps 2 and 3 in [15, Lemma 3]. The proof of (iii) is similar to that of [16, Lemma 7]; the technical tool is the Gronwall inequality. The proof is complete.
We can prove as in the proof of (i) in Proposition 3.1 that IVP (12), (31) has a unique solution in [a, +∞). In particular, for C = 0, C ≥ L or C ≤ L 0 , the unique solution of IVP (12), (31) is u ≡ C.
The following result is similar to [16, Lemma 6], while the main idea of the proof is borrowed from [2, Proposition 11], which differs from that of [16, Lemma 6].
If u satisfies (ii), then u has a unique zero θ > 0. Multiplying (32) by u′ and integrating over [0, θ], we obtain the stated identity, and thus the bound (34). On the other hand, integrating (32) over [θ, t], we obtain the corresponding identity for t > θ. Therefore, letting t → ∞, we get u′(θ)² ≥ 2c_1 F(L) by (35). This together with (34) implies c_1 F(L) ≤ c_2 F(B), which is a contradiction when B is sufficiently close to 0. The proof is complete.
We are now in a position to consider BVP (12), (15). As we pointed out in Section 2, we need only consider BVP (22), and according to Proposition 2.5, in order to find a solution of BVP (22) it suffices to obtain a critical point of the functional ϕ given by (24).
In what follows we shall make use of a function w : [0, ∞) → ℝ defined by (36), where b̃ is given by (H_6). According to Remark 2.2, F̃(L_0 − L) < 0. Proof. Since ϕ is coercive and weakly lower semicontinuous by Proposition 2.5(ii), it follows from [13, Theorem 1.1] that ϕ attains its minimum at some point of H(0, T), say u. Hence, by Proposition 2.5(iii) and Remark 3.1, it suffices to show that the critical point u is a nonzero function, so that BVP (22) is solvable. In fact, for any T ≥ b̃ − L_0 + L, according to Remarks 2.1 and 2.2, and noting that F̃(L_0 − L) < 0, we compute by (H_6) the estimate (37), where w is given by (36). It is easy to see that (37) together with ϕ(0) = 0 implies that u is nonzero. The proof is complete. Let t_2 ∈ (t_0, T) be the smallest number in this interval with u(t_2) = u(t_0). We may assume that u(t_1) = min_{t∈[t_0,t_2]} u(t). Otherwise, we conclude again that ϕ(v) < ϕ(u), which leads to a contradiction. The proof is complete.
Proof. Note that (33), (13) and (H_1)–(H_4) imply that u is strictly increasing for t > 0 as long as u(t) ∈ (L_0, 0), while Remark 3.1 and (H_3) indicate that u cannot be constant on any interval of (0, ∞). These facts, together with Proposition 3.2, show the stated properties. The proof is complete. Let I_i (i = 1, 2) be the set of all B ∈ (L_0, 0) such that the corresponding solutions of IVP (12), (14) are of type (i) (i = 1, 2) in Proposition 4.1. It is obvious that I_1 and I_2 are disjoint. Then we have the following result, some ideas of whose proof are taken from [15, Theorems 14 and 20]. Proof. We divide the proof into two steps.
Step 1. Let B_0 ∈ I_1 and let u_0 be a solution of IVP (12), (14) with B = B_0, so that u_0 is of the first type in Proposition 4.1. By Proposition 3.1(iii), if B ∈ (L_0, 0) is sufficiently close to B_0, then the corresponding solution u of IVP (12), (14) must be of the first type as well.
Let B_0 ∈ I_2 and let u_0 be a solution of IVP (12), (14) with B = B_0, so that u_0 is of the second type in Proposition 4.1. In the case that u_0 attains a local maximum belonging to (0, L) at some point t̃ > 0 and is strictly increasing in (0, t̃), Proposition 3.1(iii) and (33) guarantee that if B is sufficiently close to B_0, the corresponding solution u of IVP (12), (14) also has its first local maximum in (0, L) at some point t̃_1 > 0 and is strictly increasing in (0, t̃_1).
Step 2. We are now in a position to consider the case that u_0 is strictly increasing in (0, ∞) with lim_{t→∞} u_0(t) = 0. Noting that c_2/c_1 < 1 + F(L)/F(L_0), we can choose c_0 > 0 sufficiently small such that the corresponding inequality holds. Since u_0 fulfils (32), and noting that f̃(u_0(t)) ≥ 0 and u_0′(t) ≥ 0 for t ∈ (0, ∞), we get the stated identity by integration over [0, t]. Letting t → ∞ and using the fact that u_0(t) → 0 and u_0′(t) → 0 as t → ∞, we can therefore find b > 0 such that (39) holds. Let δ > 0 and M = M(b, B_0, δ) be the constants from Proposition 3.1(ii). Choose ε ∈ (0, c_0/(2M)). Assume that B ∈ (L_0, 0) and u is a corresponding solution of IVP (12), (14). Using Proposition 3.1 and the continuity of F, we can find δ̃ ∈ (0, δ) such that if |B − B_0| < δ̃, then (40)–(42) hold. Suppose that u is not of the second type in Proposition 4.1. Then there exists θ > 0 which is the first zero (in fact the only zero) of u in (0, ∞), and there are two possibilities. If u is of the first type, there is b_0 > 0 such that u(b_0) = L and, by Remark 3.1, (43) holds. If u is of the third type, then (44) holds. We now rule out possibilities (43) and (44). Integrating (32) over [0, t] and using (39)–(42), we obtain an estimate for t > max{b_0, b}. In view of (38) and the monotonicity of F, this yields a further bound for t > max{b_0, b}. Due to (28) in Remark 2.1, we then have sup{u(t) : t > max{b, b_0}} < L, which contradicts both (43) and (44). The proof is complete.
Proof. The arguments are similar to those of Lemma 4.1, so we only sketch them briefly. It follows from Proposition 3.2 that, in the second type of Proposition 4.1, the case in which the solution u of IVP (12), (14) is strictly increasing in (0, ∞) with lim_{t→∞} u(t) = 0 is impossible. Therefore, the same arguments as in Step 1 of the proof of Proposition 4.2 show that I_1 and I_2 are nonempty. The rest of the proof is the same as that of Lemma 4.1, and the proof is complete.
Therefore, according to Lemma A and (H_2), we can construct a sequence of functions {f̃_n, n ≥ K} given by (45), and combining this with (H_3), we have the following properties of f̃_n.
We are now in a position to prove that f̃_n(L_0) = 0 for n sufficiently large. Suppose that f̃_n(L_0) ≠ 0; then f̃_n(L_0) < 0 by (H_3) and Lemma A(ii). Therefore, since |f̃_n(x)| ≤ K for x ∈ ℝ, for n ≥ (K + 1)/L_0 we obtain a contradiction. Our task is now to prove f̃_n(x) = 0 for x ∈ (−∞, L_0]. In fact, noting that f̃_n(L_0) = 0 and using arguments similar to those we used to obtain (49), we get that f̃_n is nonnegative there. Therefore, by Lemma A(ii), we have 0 ≤ f̃_n(x) ≤ f̃(x) = 0 for x ∈ (−∞, L_0], and the result follows.
This together with Lemma
The proof is complete. According to Propositions 5.1 and 5.2, we can consider, for n sufficiently large, the auxiliary problems. We are now in a position to prove our main results given in the Introduction. Proofs of Theorem 1.1 and Theorem 1.2. According to Lemma A(iv) and Proposition 5.1, for n sufficiently large there exists a constant b > 0 such that ∫_0^b p(t) dt > 1 + 2c_2 F̃_n(L_n), where c_2 and F̃_n are given by (H_1) and (46), respectively.
It is obvious that u_n is bounded uniformly in n. We now show that the sequence {u_n} is equicontinuous on any bounded interval. Note that (53) and (54) imply that u_n satisfies the corresponding integral equation. For any bounded interval [0, b] and 0 ≤ t_1 < t_2 ≤ b, by the boundedness of c and f̃_n and the monotonicity of p, we have u_n(t_2) − u_n(t_1) = ∫_{t_1}^{t_2} (1/p(s)) ∫_0^s c(τ)p(τ)f̃_n(u_n(τ)) dτ ds ≤ C(t_2 − t_1), where C depends on b. Therefore, taking a sequence {T_k}_{k≥1} such that T_k < T_{k+1} and T_k → ∞ as k → ∞, we conclude that the sequence {u_n} is equicontinuous and uniformly bounded on every interval [0, T_k]. Hence, it has a uniformly convergent subsequence on every [0, T_k].
So let {u^1_{n_i}} be a subsequence of {u_n} that converges on [0, T_1]. Consider this subsequence on [0, T_2] and select a further subsequence {u^2_{n_i}} of {u^1_{n_i}} that converges uniformly on [0, T_2]. Repeating this procedure for all k, we then take the diagonal sequence {u_{n_i}}, which consists of u^1_{n_1}, u^2_{n_2}, u^3_{n_3}, .... Since the diagonal sequence u^p_{n_p}, u^{p+1}_{n_{p+1}}, ... is a subsequence of {u^p_{n_i}} for any p ≥ 1, it converges uniformly on any bounded interval to a function u. Without loss of generality, we still denote {u_{n_i}} by {u_n}.
Finally, we need to show that lim_{t→∞} u(t) = L and u(t) ≠ L for t ∈ (0, ∞). In fact, lim_{t→∞} u(t) = L is obvious since lim_{t→∞} u_n(t) = L_n and u_n → u, L_n → L as n → ∞. Suppose that there exists t_1 ∈ (0, ∞) such that u(t_1) = L. There are two possibilities. If u′(t_1) ≤ 0, this contradicts the fact that u′(t) > 0 on any bounded interval of (0, ∞). If u′(t_1) > 0, the fact that u is strictly increasing in (0, ∞) implies u(t) > L for t > t_1, contradicting lim_{t→∞} u(t) = L. The proof is complete.
MEN 2A syndrome – Multiple endocrine neoplasia with autosomal dominant transmission
Highlights
• Every case is essential, because MEN 2A is reported in 500–1000 families worldwide.
• This case reflects the correct clinical steps in the avoidance of possible complications.
• It contributes to the existing limited literature reports.
• MTC can be confirmed before macroscopic changes through evaluation of calcitonin.
Introduction
Multiple endocrine neoplasias (MEN) are rare and complex autosomal dominant inherited syndromes caused by germline RET (rearranged during transfection) mutations and characterized by the association of tumors of two or more endocrine glands in the same patient. The condition was described by Erdheim in 1903 and is classified into MEN type 1 (Wermer syndrome) and MEN type 2, with three subtypes: MEN 2A (Sipple syndrome), MEN 2B (Schimke syndrome) and familial medullary thyroid carcinoma (FMTC) [1,2].
MEN 2A is an autosomal dominant disease caused by multiple mutations in the RET gene, located on chromosome 10q11.2, which leads to the damage of C cells derived from the neural crest. Hyperplasia of C cells appears early in life and can be considered as a precursor lesion for medullary thyroid carcinoma (MTC) [3].
The diagnosis can be confirmed before the appearance of macroscopic changes by an increased value of calcitonin, considered a tumor-specific marker for MTC (normal < 10 pg/mL) [5,6].
Imaging methods (ultrasound, CT, MRI) can be used to determine the extent of a tumor, and the possible existence of metastasis. Genetic confirmation is mandatory when the Sipple syndrome is suspected. The presence of RET gene mutation requires screening of first degree relatives [6,7].
The moment to perform prophylactic thyroidectomy for medullary thyroid carcinoma is a difficult decision due to clinical variability between different families with the same RET mutation. The treatment of choice is thyroidectomy with postoperative follow-up by assessment of calcitonin and CEA [8,9].
Thyroid cancer is the most common cause of death in patients with MEN 2A. Pheochromocytoma occurs in about 50% of patients with this pathology, and 15% of them develop parathyroid tumors (hyperplasia or adenoma) [3,4].
Pheochromocytoma is a catecholamine-secreting tumor characterized by adrenal hyperplasia, but clinical symptoms and signs of the disease (confirmed by biochemical tests and/or imaging methods) occur in only 50% of patients [10].
Pheochromocytomas in MEN 2 are almost always benign but tend to be bilateral in 50-80% of cases; the age of 35-45 years is most frequently affected, although children under 10 can also be impacted [10,11].
In up to 25% of cases pheochromocytoma is the first manifestation of the disease, whereas MTC is reported to be the onset manifestation in 40% of patients; in 35% of cases MTC and pheochromocytoma are diagnosed simultaneously [10].
The most sensitive test in the diagnosis of pheochromocytoma is the concentration of plasma and urinary metanephrines; CT, MRI and positron emission tomography (PET) are also indicated [12]. Surgical treatment (uni- or bilateral adrenalectomy) is performed depending on the extent of the disease at the time of diagnosis. Laparoscopy is the approach of choice in surgical treatment. Limitations only arise because of technical difficulties or tumor size.
Hyperparathyroidism rarely occurs in MEN 2A and manifests more frequently as hyperplasia than as parathyroid adenoma. Treatment consists of subtotal (3 1/2 gland) or total parathyroidectomy with a subcutaneous autograft of a portion of a gland [4].
We report the clinical case of a patient with MTC and bilateral pheochromocytoma, a familial form, with a first-degree relative (her mother) having had pheochromocytoma. Genetic testing was not performed due to the death of the first-degree relative.
Case report
We present the clinical case of patient M.S., 20 years old, with MEN 2A syndrome, which manifested as bilateral pheochromocytoma and medullary thyroid carcinoma.
In 2015, after a check-up, suspicion of an adrenal gland tumor was raised. The patient presented the following symptoms: hypertension with frequent crises, episodic headaches, palpitations, irritability, sweating, and pallor.
The diagnosis was confirmed by biochemical tests (increased plasma and urinary metanephrine levels) and MRI, which visualized an ovoid tumor (46 × 37 × 43 mm, with regular contour and homogeneous structure). According to the patient, her mother suffered from hypertension, episodic headache and palpitations during pregnancy, and a diagnosis of eclampsia was established. During delivery she had another hypertonic crisis and died within hours of the birth from profuse uterine bleeding. Necropsy revealed a left adrenal tumor, and histopathological examination confirmed multicentric, predominantly alveolar pheochromocytoma.
In 2015 our patient underwent left laparoscopic adrenalectomy. Histopathological examination revealed multicentric, predominantly alveolar pheochromocytoma.
During the years 2015-2018, after the intervention, her wellbeing improved, with hypertonic crises only once a month. In July 2018 more intense, frequent hypertensive crises appeared with characteristic signs.
Thus, on 11.07.2018 a CT of the abdomen was performed, which detected a tumor (27 × 32 mm) of the right adrenal gland (of benign origin according to the contrast enhancement pattern).
15.12.2018 PET-CT: increased metabolic activity of FDG (mean SUV = 11.5) at the level of a tumor of the right adrenal gland (36 × 29 mm), possibly pheochromocytoma; increased diffuse accumulation of radiopharmaceutical in the bone marrow, more evident in the thoracic segment of the spine and the pelvic bones (Fig. 1).
All laboratory tests for the years 2014-2020 are presented in Table 1.
In April 2019, the patient underwent right laparoscopic total adrenalectomy, performed by a head surgeon with thirty years of experience. Macroscopic examination revealed a multinodular solid adrenal tumor, the largest nodule measuring 41 × 32 mm. Histopathological examination described multicentric, predominantly alveolar pheochromocytoma (Fig. 2A, B).
In February 2020, the patient was hospitalized for scheduled total thyroidectomy, presenting with a permanent lump sensation in the throat, difficulty swallowing, a feeling of suffocation in the supine position, general weakness, and fatigue.
Total thyroidectomy is the recommended treatment for all medullary thyroid carcinoma in MEN 2A syndrome, confirmed by elevated calcitonin levels and imaging data.
CT of the thyroid gland with contrast, 21.11.2018: areas with nodular lesions in both thyroid lobes, measuring 5 mm on the right and 12 × 9 mm on the left.
Total thyroidectomy was performed; macroscopic examination revealed a multinodular thyroid gland measuring 8 × 3 cm (Fig. 4). This intervention was done by a head surgeon with twenty years of experience. Given that regional lymphadenopathy was detected neither preoperatively nor intraoperatively, and the thyroid gland presented only small nodules without invasion into the capsule, it was decided not to perform neck lymph node dissection.
Histopathological examination of the thyroid tissue revealed non-encapsulated medullary carcinoma with a trabecular pattern, amyloidosis of the tumor stroma, a low degree of pleomorphism and minimal mitotic activity, without lymphovascular invasion (LVI-0), with surgical resection margins negative for tumor (R0) and no invasion into the thyroid capsule: pT1aNxMx LVI-0 Pn-0 R0. Histochemical staining with Congo red for detecting amyloid deposits in the tumor stroma was positive (Fig. 5A, B).
The postoperative evolution of this patient is favorable. She is satisfied with the received treatment and is being supervised by an endocrinologist while undergoing hormone replacement therapy. Currently the patient receives 25 mg of cortisone in the morning and a quarter of that dose at noon, prednisolone tablets 2.5 mg in the morning under monitoring of the arterial pressure, and L-thyroxine tablets 50 mg.
Discussions
Sipple syndrome is the most common type of multiple endocrine neoplasia type 2 (MEN 2), accounting for 80% of cases. It is characterized by the presence of medullary thyroid carcinoma (MTC), uni- or bilateral pheochromocytoma (in over 50% of cases) and primary hyperparathyroidism resulting from parathyroid cell hyperplasia or adenoma (15-30% of cases) [3,4].
The case emphasizes the importance of the radical approach to MEN 2A syndrome from both a therapeutic and a surgical point of view. A strong collaboration between the physician and the endocrinologist is necessary for the evaluation of family members of patients with multiple endocrine neoplasia when their genetic testing is not possible.
Imaging check-up in combination with annual monitoring of calcitonin, chromogranin A, and metanephrines in a patient with MEN 2A syndrome is a practical way to supervise the case and make timely decisions for surgical intervention and to prevent complications.
Prior to surgery, the presence of a functional pheochromocytoma should be ruled out by appropriate biochemical analysis in all MEN 2A and MEN 2B patients. If a pheochromocytoma is detected, adrenalectomy should be performed before thyroidectomy or other surgery to avoid intraoperative catecholamine release. Laparoscopic adrenalectomy is the gold standard in the treatment of pheochromocytoma. Limitations only arise because of technical difficulties or tumor size. Long-term treatment with alpha and beta blockers should only be used in patients with unresectable tumors.
Pheochromocytoma occurs in about 50% of patients with MEN 2A; it is almost always benign but tends to be bilateral in 50-80% of cases and most frequently affects patients aged 35-45 years, although children under 10 can also be impacted. In up to 25% of cases pheochromocytoma is the first manifestation of the disease, whereas MTC is reported to be the onset manifestation in 40% of patients; in 35% of cases MTC and pheochromocytoma are diagnosed simultaneously [9].
In our case the pheochromocytoma was bilateral and benign, appearing almost at the same time as the medullary thyroid carcinoma. In cases where pheochromocytoma is the first manifestation of the disease, we must be alert to the development of MTC, which is an aggressive tumor with rapid metastasis.
Conclusions
The radical approach to MEN 2A syndrome is very important from both a therapeutic and surgical point of view. Imaging check-up in combination with annual monitoring of calcitonin, chromogranin A, and metanephrines in a patient with MEN 2A syndrome is a practical way to supervise the case and make timely decisions for surgical intervention and to prevent complications.
If a pheochromocytoma is detected, adrenalectomy should be performed before thyroidectomy or other surgery to avoid intraoperative catecholamine release.
Laparoscopy is the choice of approach in surgical treatment. Limitations only arise because of technical difficulties or tumor size.
Declaration of Competing Interest
No conflict of interest.
The work has been reported in line with the SCARE 2018 criteria [13].
Funding
There was no funding for the case report submitted.
Ethical approval
The case report is exempt from ethical approval in our institution.
Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. All authors discussed the results and commented on the manuscript.
Registration of research studies
The case report is not a research study.
Altmetrics Attention Scores for Randomized Controlled Trials in Total Joint Arthroplasty Are Reflective of High Scientific Quality: An Altmetrics-Based Methodological Quality and Bias Analysis
Introduction: The Altmetric Attention Score (AAS) has been associated with citation rates across medical and surgical disciplines. However, the factors that drive a high AAS remain poorly understood, and there remain multiple pitfalls to correlating these metrics alone with the quality of a study. The purpose of the current study was to determine the relationship between methodologic and study biases and the AAS in randomized controlled trials (RCTs) published in total joint arthroplasty journals. Methods: All RCTs from 2016 published in The Journal of Arthroplasty, The Bone and Joint Journal, The Journal of Bone and Joint Surgery, Clinical Orthopaedics and Related Research, The Journal of Knee Surgery, Hip International, and Acta Orthopaedica were extracted. Methodologic bias was graded with the JADAD scale, whereas study bias was graded with the Cochrane risk of bias tool for RCTs. Publication characteristics, social media attention (Facebook, Twitter, and Mendeley), AAS, citation rates, and bias were analyzed. Results: A total of 42 articles were identified. The mean (±SD) citations and AAS per RCT were 17.8 ± 16.5 (range, 0 to 78) and 8.0 ± 15.4 (range, 0 to 64), respectively. The mean JADAD score was 2.6 ± 0.94. No statistically significant differences were observed in the JADAD score or total number of study biases when compared across the seven journals (P = 0.57 and P = 0.27). Higher JADAD scores were significantly associated with higher AAS scores (β = 6.7, P = 0.006) but not with citation rate (P = 0.16). The mean number of study biases was 2.0 ± 0.93 (range, 0 to 4). A greater total number of study biases was significantly associated with lower AAS scores (β = −8.0, P < 0.001) but not with citation rate (P = 0.10). The AAS was a significant and positive predictor of citation rate (β = 0.43, P = 0.019). Conclusion: High methodologic quality and limited study bias markedly contribute to the AAS of RCTs in the total joint arthroplasty literature.
The AAS may be used as a proxy measure of scientific quality for RCTs, although readers should still critically appraise these articles before making changes to clinical practice.
The impact of research has traditionally been measured by citation rate. 1 Although a commonly used metric, article citation rate has inherent limitations such as failing to take into account scholarly impact transmitted through other outlets such as social media platforms, and it may also require years of citation accrual until the impact of an article is apparent. 2,3 Furthermore, citation rates do not necessarily correlate with quality. Given these limitations, Altmetric, a data science company, developed the Altmetric Attention Score (AAS) that can track and quantify the social media attention an individual article receives and subsequently its impact in real time. [4][5][6] These benefits have led to a large increase in the number of studies investigating the AAS and its relation to citation rate and impact across multiple fields ranging from cardiology 7 and neurology 8 to orthopaedic surgery. 9,10 In fact, many journals in orthopaedic surgery now display the AAS "donut" on each article's page such that it is easily accessible. Given the increasing visibility of Altmetrics and its role in understanding research in an age where social media platforms have become mediums through which research is disseminated, there is a need to better understand the factors that drive AAS.
Previous studies have focused on evaluating the relationships between AAS and citation rate, as well as article characteristics that are associated with higher AAS. 9 Although defining these relationships is important, the influence of external factors that may be associated with higher AAS, such as the methodologic quality or number of biases in a study, is less understood. The audience reached by social media, whether consisting of individuals in academics or the general public, may disproportionately increase the AAS of articles that discuss trending topics without regard for their methodology. In an analysis of dementia biomarker studies, MacKinnon et al 11 determined that neither the impact factor nor the citation rate was associated with methodologic quality, whereas this group was unable to analyze AAS because of incomplete data. Nonetheless, these relationships may vary by field and the tools used to determine methodological quality. For example, the JADAD scale that is frequently used in the orthopaedic surgery literature to appraise randomized controlled trials (RCTs) may assess components of methodologic quality that influence the AAS to a greater or lesser degree. 12 For RCTs specifically, this information would be particularly useful when interpreting the AAS and would help delineate whether the AAS of such articles could be used as a proxy for well-constructed scientific methodology or whether it is being influenced by other factors. This is imperative to understand because RCTs often generate findings that directly affect and change clinical practices. 13,14 Despite emerging research on AAS and the widespread use of AAS in orthopaedic surgery and total joint arthroplasty (TJA) journals, the relationship between AAS and methodologic quality or bias remains poorly understood.
Given the increasing emphasis on quality within orthopaedic surgery and findings that come from RCTs, it is imperative to determine whether such biases may contribute to the AAS and the attention that an article receives. Therefore, the purpose of the current study was to determine the relationship between methodologic quality and study biases and the AAS in RCTs published in TJA journals. The authors hypothesized that there would be no statistically significant relationship between the methodological quality or study biases of RCTs and the AAS within the TJA literature.
Journal and Article Selection
Institutional board approval was not required to perform the current study. A total of seven prominent journals that publish TJA literature were queried for all RCTs published in the year 2016. These journals included The Bone and Joint Journal, The Journal of Bone and Joint Surgery, Clinical Orthopaedics and Related Research, The Journal of Arthroplasty, The Journal of Knee Surgery, Acta Orthopaedica, and Hip International.
The year 2016 specifically was chosen as previous bibliometric studies have recommended a period of 3 to 4 years for analysis of citation rates and AAS to allow for an appropriate period of citation and AAS accumulation after publication. 9,15,16 Furthermore, these seven journals were chosen specifically given that these journals represented those with the highest impact factors in the year 2016 and routinely published TJA articles. 17,18 All RCTs were extracted from these journals, regardless of the subject and follow-up, because the primary aim of this study was to study the effect of methodologic and study bias on the AAS.
Methodologic Quality Assessment
The primary outcomes of interest were the AAS, the methodologic bias of each study as quantified through the JADAD scale, and the type and number of study biases in each article. The JADAD scale 19 consists of a five-point questionnaire used to critically evaluate the methodologic quality of RCTs. The following questions are used to assess each study: (1) Was the study described as random; (2) Was the randomization scheme described and appropriate; (3) Was the study described as double blind; (4) Was the method of double blinding appropriate; and (5) Was there a description of dropouts and withdrawals? The scale is graded from 0 to 5 (a score of 3 or greater indicates a high-quality study, whereas a score less than 3 is considered low quality).
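As an illustration, the five-item rubric above can be expressed as a small scoring function. This is a hypothetical sketch (the function name and three-valued inputs are our own, not part of the published scale); it also models the deductions the JADAD scale applies when a described randomization or blinding method is inappropriate.

```python
def jadad_score(randomized, randomization_appropriate,
                double_blind, blinding_appropriate,
                dropouts_described):
    """Compute a JADAD score (0-5) for a randomized controlled trial.

    The *_appropriate arguments are three-valued: True (method described
    and appropriate, +1), False (described but inappropriate, -1), or
    None (method not described, no change).
    """
    score = 0
    if randomized:
        score += 1
        if randomization_appropriate is True:
            score += 1
        elif randomization_appropriate is False:
            score -= 1
    if double_blind:
        score += 1
        if blinding_appropriate is True:
            score += 1
        elif blinding_appropriate is False:
            score -= 1
    if dropouts_described:
        score += 1
    return score
```

A trial scoring 3 or more under this rubric would be classed as high quality, matching the threshold used in the study.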
Study Bias Assessment
Study bias was assessed using the Cochrane risk of bias (RoB) tool for RCTs, which was developed by Cochrane to introduce consistency and transparency into the assessment of RCTs and has been subsequently validated. 20 The most recent version of the Cochrane RoB tool, 21 which was used in the current study, consists of six types of potential study biases, each classified in one of three ways: low RoB, high RoB, or unclear RoB. The six potential types of bias and the method by which they are assessed are shown in Table 1 and include (1) selection bias, (2) performance bias, (3) detection bias, (4) attrition bias, (5) reporting bias, and (6) other bias. For the current study, we considered high RoB to represent bias that was present in a study. If the RoB was low or unclear, the study was not documented as being influenced by that particular bias.
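Following the convention just described (only a "high" rating counts as a bias present in the study; "low" and "unclear" ratings do not), the per-study bias tally could be sketched as follows. The function and the short domain names are illustrative, not part of the Cochrane tool itself.

```python
ROB_DOMAINS = ("selection", "performance", "detection",
               "attrition", "reporting", "other")

def count_present_biases(ratings):
    """Count Cochrane RoB domains rated 'high' for one RCT.

    `ratings` maps a domain name to 'low', 'high', or 'unclear';
    missing domains are treated as 'unclear' and therefore not counted.
    """
    for domain in ratings:
        if domain not in ROB_DOMAINS:
            raise ValueError(f"unknown RoB domain: {domain!r}")
    return sum(ratings.get(d, "unclear") == "high" for d in ROB_DOMAINS)
```

Applying this to each of the 42 RCTs would yield the per-study totals (range 0 to 4 in this cohort) used in the regression analysis.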
Altmetric Attention Score
Altmetric provides analyses of activity on various social media platforms, including Twitter, Facebook, news outlets, online blogs, Mendeley, Wikipedia, and others. 22 The AAS is calculated through weighted scores of the social media attention that a given published article receives and is updated in real time. 23 The score is quantified through an automated algorithm created by Altmetric. Given the dynamic nature of the AAS, all scores for the included RCTs were collected over a span of two days using the Altmetric Bookmarklet. 24 The number of citations for each study was extracted from the Dimensions citation database, a platform affiliated with Altmetric. Dimensions reports the total number of times a work is cited and has been used in previous literature and deemed appropriate for the collection of article citations. 25
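Altmetric's exact algorithm is proprietary, but its documentation describes the score as a weighted count of mentions by source. The sketch below is illustrative only: the per-source weights are assumptions modeled on commonly cited Altmetric defaults, and the real algorithm also applies audience and reach modifiers that are omitted here.

```python
# Assumed per-mention weights -- illustrative only, not Altmetric's
# actual algorithm (which also weights by author reach and audience).
SOURCE_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "wikipedia": 3.0,
    "twitter": 1.0,
    "facebook": 0.25,
}

def rough_attention_score(mentions):
    """Approximate an attention score as a weighted sum of mention counts.

    `mentions` maps a source name to its mention count; unknown sources
    contribute nothing in this simplified sketch.
    """
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())
```

This illustrates why, for example, a single news story can outweigh several tweets in the final score.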
Secondary Outcomes
In addition to the primary outcomes, predetermined bibliometric and social media-related variables were extracted for each RCT in accordance with previously published altmetrics studies. 9 These variables included (1) the highest degree of the first author, (2) total number of authors, (3) geographic region of origin of the publication, (4) disclosure of any conflict of interest (the presence or absence of general self-reported conflict of interest), (5) number of academic institutions, (6) involved joint (hip, knee, or both), (7) study topic, (8) study design, (9) sample size, (10) number of referenced studies, (11) number of Twitter mentions, (12) number of Facebook mentions, (13) number of mentions by news outlets, (14) number of times referenced on Wikipedia, and (15) number of reads on Mendeley. These specific variables were chosen based on factors found to be associated with citation rate and the AAS in previous literature. 26
Statistical Analysis
Normality was determined with the Shapiro-Wilk test; subsequently, continuous variables were presented as means with SDs or ranges where appropriate, and categorical outcomes were presented as frequencies with percentages. One-way analysis of variance with Bonferroni corrections for multiple comparisons or chi-squared tests of association were performed to compare bibliometric and Altmetrics characteristics, as well as AAS and citation rates, among journals. Univariate analysis with Pearson correlation coefficients and linear regression analysis was performed to determine the influence of methodologic and study biases on the AAS and citation rates. Multivariate linear regression was used to determine predictors of the AAS and citation rates for all RCTs. All statistical analyses were performed with Stata version 16.1 (StataCorp, College Station, TX). Statistical significance was defined as P < 0.05. Characteristics of the included RCTs are presented in Table 2.
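The core of this pipeline (a normality check, a correlation, and a univariate regression of AAS on JADAD score) can be sketched as below. This is not the authors' code, which was written in Stata; the data here are mock values generated for illustration.

```python
# Illustrative sketch (not the authors' Stata code) of the analysis pipeline
# on MOCK data: Shapiro-Wilk normality check, Pearson correlation, and a
# univariate linear regression of AAS on JADAD score.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
jadad = rng.integers(1, 5, size=42).astype(float)    # mock JADAD scores (1-4)
aas = 6.7 * jadad + rng.normal(0.0, 10.0, size=42)   # mock AAS values

w_stat, w_p = stats.shapiro(aas)        # Shapiro-Wilk normality test
r, r_p = stats.pearsonr(jadad, aas)     # Pearson correlation coefficient
fit = stats.linregress(jadad, aas)      # univariate regression: AAS ~ JADAD
print(f"slope={fit.slope:.2f}, p={fit.pvalue:.3g}")
```

With mock data built around a positive slope, the fitted coefficient recovers a positive association, mirroring the direction of the reported JADAD-AAS result.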
Influence of Methodological Bias on Altmetrics Attention Score and Citation Rate
The mean JADAD score among all 42 RCTs was 2.6 ± 0.94. No statistically significant differences were observed in the JADAD score when compared across the seven journals (P = 0.57). Therefore, the journal variable was not controlled for in regression analysis. The linear regression model (Figure 1) was statistically significant (R² = 0.17, P < 0.001) and demonstrated that higher JADAD scores were significantly and positively associated with higher AAS scores (β = 6.7; 95% confidence interval [CI], 2.0 to 11.4; P = 0.006). No significant association was found between citation rate and methodologic bias (P = 0.16).
Influence of Study Bias on Altmetrics Attention Score and Citation Rate
The mean total number of study biases among all 42 RCTs was 2.0 ± 0.93 (range, 0 to 4). No statistically significant differences were observed in the mean number of total study biases when compared across the seven journals (P = 0.27). Therefore, the journal variable was not controlled for in regression analysis. The linear regression model (Figure 2) was statistically significant (R² = 0.24, P = 0.001) and demonstrated that a greater total number of study biases was significantly and negatively associated with higher AAS scores (β = −8.0; 95% CI, −12.6 to −3.5; P < 0.001). No significant association was found between total number of study biases and citation rate (P = 0.10).
The most frequent type of study bias found among the 42 RCTs was performance bias, with 39 (92.9%) studies demonstrating this bias. A high proportion of RCTs also demonstrated detection bias (n = 20, 47.6%) and attrition bias (n = 18, 43.9%), whereas only five studies (11.9%) demonstrated selection bias. No studies demonstrated reporting bias, although it was rated as an "unclear risk" in seven (16.7%) studies. Pearson correlation analysis demonstrated that performance bias had the strongest association with the AAS (r = −0.58, P = 0.001) and in regression was significantly and negatively correlated with the AAS (β = −34.4; 95% CI, −49.8 to −19.1; P < 0.001).
Predictors of Social Media Attention in Total Joint Arthroplasty Randomized Controlled Trials
All 11 bibliometric characteristics were tested for the magnitude of correlation with a study being mentioned on Twitter, Facebook, news outlets, blogs, Mendeley, and Wikipedia. No statistically significant associations were found between bibliometric characteristics and a RCT being mentioned on one of the aforementioned social media platforms.
Discussion
The main findings of the current study were (1) higher JADAD scores were markedly and positively associated with higher AAS, indicating that the AAS is reflective of excellent methodologic quality in RCTs in the TJA literature; (2) fewer study biases were markedly and positively associated with higher AAS in RCTs in the TJA literature, suggesting that the AAS is reflective of findings that are not influenced by bias; and (3) AAS was markedly associated with citation rates for RCTs published in journals that routinely publish TJA research.

Figure 1 Linear regression model demonstrating the relationship between the JADAD methodological quality score and the AAS for RCTs in seven total joint arthroplasty journals. AAS = Altmetric Attention Score, CI = confidence interval, RCT = randomized controlled trial

The JADAD score was markedly associated with the AAS, with higher JADAD scores being markedly and positively predictive of higher AAS. This finding suggests that more sound methodologic quality leads to higher AAS and social media attention for RCTs pertaining to TJA. Methodologic quality assessments of RCTs are imperative to avoid the incorporation of low-quality or potentially misleading recommendations into clinical practice. 27 A large empirical study based on F1000Prime data recently suggested that Altmetrics was related to the quality of articles as evaluated by the postpublication peer-review system of F1000Prime assessments. 28 Because Altmetrics continues to gain popularity among funding bodies and researchers owing to its wide research impact, ranging from social media to the production of policy documents, the current study substantiates the legitimacy of these roles by demonstrating that it reflects studies with reliable methodologic quality. TJA surgeons, researchers, and other individuals who choose to explore research based on Altmetrics can be confident in the findings presented by these RCTs because the AAS represents studies with high methodologic quality. However, we still recommend the critical appraisal of RCTs regardless of presumed quality before adoption of particular findings that may change clinical practice.
RCTs pertaining to hip and knee arthroplasty that were not subjected to inherent study biases such as attrition and performance bias were found to accrue more interest on social media platforms and had higher AAS. Interestingly, performance bias was the most frequent type of study bias, with the results of 92.9% of RCTs being influenced by this bias. Study biases may also negatively influence the results and should be considered in the interpretation of study results. [29][30][31] This is especially true when concerning RCTs because these often generate practice-altering findings. 13,14,32 Interestingly, articles with low AAS tended to have a greater number of total study biases, although the average number of total biases per RCT was low. Given that Altmetrics represents high methodologic quality and low study biases, we recommend the use of Altmetrics as both a screening tool for high-quality articles and as a measure of scientific quality for RCTs in the TJA literature.

Figure 2 Linear regression model demonstrating the relationship between the total number of study biases for RCTs and the AAS for RCTs in seven total joint arthroplasty journals. AAS = Altmetric Attention Score, CI = confidence interval, RCT = randomized controlled trial

Linear regression model demonstrating the relationship between the AAS and the citation rate for RCTs in seven total joint arthroplasty journals. AAS = Altmetric Attention Score, CI = confidence interval, RCT = randomized controlled trial

Citation rate was markedly and positively influenced by the AAS, demonstrating that RCTs in the TJA literature that had high AAS also had greater academic impact. This finding is in accordance with previous Altmetrics-based studies in the literature. Kunze et al 9 investigated the relationship between the AAS and citation rate of articles from five different orthopaedic sports medicine journals and found that a greater AAS significantly predicted a greater citation rate (β = 0.16; P < 0.001). The current study determined a stronger relationship between the AAS and citation rate (β = 0.43, P = 0.019), suggesting that for every one-point increase in the AAS of an RCT in the TJA literature, approximately 0.5 citations will be gained accordingly. This finding has notable implications for journals, authors, and funding bodies because it suggests that the promotion of high-quality RCTs is associated with greater academic impact and ultimately more article citations.
This study has several limitations. First, only RCTs were included, and the relationship between AAS and methodologic and study bias may not be generalizable to studies with lower levels of evidence. However, the study of RCTs was a specific purpose of the current study because they are designed to represent the highest quality evidence and often generate findings that change clinical practice. Second, we limited the current analysis to the use of only two quality appraisal tools: the JADAD scale and the Cochrane RoB tool for RCTs. Although these are both validated tools that are routinely used in the orthopaedic literature, relationships between quality and AAS may not be generalizable with the use of other tools. Third, the current analysis represents that of a single year of RCTs, although previous Altmetrics studies have demonstrated that analyses from a single year are appropriate. 9 Finally, Altmetrics does not currently disclose information related to self-promotion by authors and journals, and the current study could not control for this or for random article clicks and shares.
Conclusion
High methodologic quality and limited study bias markedly contribute to the AAS of RCTs in the TJA literature. The AAS may be used as a proxy measure of scientific quality for RCTs, although readers should still critically appraise these articles before making changes to clinical practice.
"year": 2020,
"sha1": "558ecb74aac12bceaba2d1e23f268b7ff7f3ede5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5435/jaaosglobal-d-20-00187",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "558ecb74aac12bceaba2d1e23f268b7ff7f3ede5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Cosmic Lens All-Sky Survey parent population - I. Sample selection and number counts
We present the selection of the Jodrell Bank Flat-spectrum (JBF) radio source sample, which is designed to reduce the uncertainties in the Cosmic Lens All-Sky Survey (CLASS) gravitational lensing statistics arising from the lack of knowledge about the parent population luminosity function. From observations at 4.86 GHz with the Very Large Array, we have selected a sample of 117 flat-spectrum radio sources with flux densities greater than 5 mJy. These sources were selected in a similar manner to the CLASS complete sample and are therefore representative of the parent population at low flux densities. The vast majority (~90 per cent) of the JBF sample are found to be compact on the arcsecond scales probed here and show little evidence of any extended radio jet emission. Using the JBF and CLASS complete samples we find the differential number counts slope of the parent population above and below the CLASS 30 mJy flux density limit to be −2.07 ± 0.02 and −1.96 ± 0.12, respectively.
(typically ∼ 1 arcsec) with high sensitivity instruments such as the Very Large Array (VLA).
The Cosmic Lens All-Sky Survey 2 (CLASS; S4.85 ≥ 30 mJy; Myers et al. 2003; Browne et al. 2003) forms the largest, statistically complete sample of radio-loud gravitational lens systems currently available. A complete sample of 11 685 flat-spectrum radio sources (the exact selection criteria for this parent population sample are given in Section 2) was observed with the VLA at 8.46 GHz in A configuration (resolution of ∼ 0.2 arcsec). Those sources which were found to have multiple components with Gaussian full width at half maximum (FWHM) ≤ 170 mas, flux density ratios ≤ 10:1 and separations ≥ 300 mas in the CLASS 8.46 GHz VLA images were followed up as potential gravitational lensing candidates. Further observations with optical telescopes and high resolution radio arrays confirmed the lensing hypothesis for 22 gravitational lens systems during the course of CLASS. Of these systems, 13 form a well-defined statistical sample of gravitational lenses from a parent population of 8958 flat-spectrum radio sources. This results in a CLASS lensing rate of 1:689. Further details of the CLASS gravitational lens systems, and the procedures used to discover them, can be found in Browne et al. (2003).
A thorough analysis of the CLASS gravitational lensing statistics found, for a flat universe with a classical cosmological constant (w = −1), ΩΛ = 0.69 +0.14/−0.27 at the 68 per cent confidence level (Chae et al. 2002; Chae 2003). This result, which is consistent with the findings from SN1a (e.g. Riess et al. 2004), large-scale structure (e.g. Cole et al. 2005) and cosmic microwave background (e.g. Spergel et al. 2006) data, provides further independent evidence for the cosmological concordance model. Furthermore, the CLASS gravitational lensing statistics have also been used to investigate the global properties of the lensing galaxy population. Chae et al. found the characteristic velocity dispersions for the early- and late-type galaxy populations to be σ*(e) = 198 +58/−37 km s−1 and σ*(l) = 117 +45/−31 km s−1 at the 95 per cent confidence level (see also Chae 2003; Davis, Huterer & Krauss 2003). The projected mean ellipticity for the early-type population, based on the relative numbers of quadruply and doubly imaged CLASS gravitational lens systems, was found to be f̄ < 0.83.
The analyses described above required the number density of the parent population as a function of flux density to be established. This is because the derived constraints on ΩM − ΩΛ depend on a knowledge of the lensing optical depth as a function of the background source redshift (e.g. Turner et al. 1984). Unfortunately, the flat-spectrum radio source luminosity function was not well known, and measuring the redshifts of the 11 685 sources in the CLASS complete sample was not practical. Therefore, subsamples of flat-spectrum radio sources, selected in a similar manner to the CLASS complete sample, were formed within progressively lower flux density bins. At high flux densities the parent population redshift information was taken from the Caltech-Jodrell Bank Flat-spectrum survey (CJF; S4.85 ≥ 350 mJy; Taylor et al. 1996). The complete CJF sample consists of 293 flat-spectrum radio sources, for which 261 redshifts have been obtained (Vermeulen & Taylor 1995; Vermeulen et al. 1996; Henstock et al. 1997; unpublished). A redshift survey of 69 sources from the JVAS sample by Falco, Kochanek & Muñoz (1998) has provided 55 redshifts in the intermediate flux density range, 200 to 250 mJy at 4.85 GHz (see also Muñoz et al. 2003). Redshift information for the parent population at the CLASS flux density limit was reported by Marlow et al. (2000), who measured 27 redshifts from a sample of 42 flat-spectrum radio sources with 4.85-GHz flux densities between 25 and 50 mJy. The mean redshift of each of these flat-spectrum radio source samples is z ∼ 1.25, suggesting little change in the mean redshift with flux density.
However, since gravitational lensing increases the apparent flux density of the background source, many lensed sources will come from a population of radio sources with flux densities below the CLASS flux density limit. Therefore, our knowledge of the flatspectrum radio source luminosity function must be extended below 25 mJy to a few mJy (based on the source magnifications calculated from lens galaxy mass modelling). We have therefore undertaken a study of the flat-spectrum radio source population at the mJy level; hereafter referred to as the Jodrell Bank Flat-spectrum (JBF) radio source survey. The aim of this study is to reduce the uncertainties in the CLASS gravitational lensing statistics arising from the parent population luminosity function.
Since this project began, Muñoz et al. (2003) have extended their work on the redshift distribution of flat-spectrum radio sources down to ∼ 3 mJy. They find the mean redshift of their sample of 33 flat-spectrum radio sources with ∼ 5 GHz flux densities between 3 and 20 mJy to be z ∼ 0.75 (42 per cent completeness). This mean redshift is significantly lower than the trend reported from the sub-samples of flat-spectrum radio sources selected from the CJF, JVAS and CLASS surveys. The implication of such a low mean redshift for the parent population at low flux densities on the CLASS lensing statistics is to push ΩΛ to ∼ 1 for a flat Universe, which is inconsistent with the concordance model. In a companion paper (McKean et al. in preparation), we will present the optical and near-infrared follow-up of a small sub-sample of JBF sources which will show that the mean redshift of the parent population is nearer z ∼ 1.2 at low flux densities. The focus of this paper, which is the first in a series of papers investigating the flat-spectrum radio source population at the mJy level, is to present the selection of the JBF sample and the number counts of the CLASS parent population.
In Section 2 we review the strict selection criteria of the CLASS complete and statistical samples. New 4.86 and 8.46 GHz observations from a VLA pseudo-survey that were used to select the JBF sample are presented in Section 3. In Section 4 we discuss the radio morphologies of the 117 flat-spectrum radio sources in the JBF sample. We also present our analysis of the CLASS parent population differential number counts and discuss the implications for the CLASS gravitational lensing statistics in Section 4. We end with a summary of our findings in Section 5.
THE CLASS COMPLETE AND STATISTICAL SAMPLES
To be truly representative of the CLASS parent population, the JBF sample needed to be selected in an identical manner to the flat-spectrum radio sources observed by CLASS. Therefore, we first present a brief review of the selection criteria for the CLASS complete and statistical samples before discussing the selection of the JBF sample. The well-defined CLASS complete sample was selected using the 1.4 GHz NVSS (National Radio Astronomy Observatory Very Large Array Sky Survey; Condon et al. 1998) and the 4.85 GHz GB6 (Green Bank 6 cm; Gregory et al. 1996) catalogues to find all flat-spectrum radio sources, defined by a two-point spectral index α^4.85_1.4 ≥ −0.5 (where S ∝ ν^α), within the survey area. The CLASS complete sample was selected by finding all sources with S4.85 ≥ 30 mJy from the GB6 catalogue in this area of sky. These sources were then cross-correlated with the NVSS catalogue (CATALOG39). All 1.4 GHz flux density within 70 arcsec of the GB6 position was summed and used to determine the two-point spectral index of each source. There are 11 685 flat-spectrum radio sources in the CLASS complete sample within a sky region of 4.96 sr. This sample was then observed with the VLA in A configuration at 8.46 GHz during CLASS. Those sources which were found to have a total 8.46 GHz flux density of S8.46 ≥ 20 mJy formed the CLASS statistical sample. The 20 mJy cut-off was applied to ensure that all sources with multiple components and flux density ratios less than 10:1 would be detected by the VLA. There are 8958 sources in the CLASS statistical sample. The difference in the number of sources in the complete and statistical samples is mainly due to the 20 mJy cut-off (2418 sources). Bandwidth smearing (217 sources), extended sources (81 sources) and failed observations (11 sources) account for the remainder. A full discussion of the selection of the CLASS complete and statistical samples, and the subsequent CLASS VLA 8.46 GHz observations, can be found in Myers et al. (2003).
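The two-point spectral index that drives this selection (with the S ∝ ν^α convention) can be computed as in the following minimal sketch; the example flux densities are hypothetical.

```python
# Two-point spectral index between two frequencies, using the S ∝ ν^α
# convention adopted by CLASS (flat-spectrum: α >= -0.5).
import math

def spectral_index(s_low, nu_low, s_high, nu_high):
    """Return α = log(S_high/S_low) / log(ν_high/ν_low)."""
    return math.log(s_high / s_low) / math.log(nu_high / nu_low)

# Hypothetical example: 60 mJy at 1.4 GHz and 40 mJy at 4.85 GHz.
alpha = spectral_index(60.0, 1.4, 40.0, 4.85)
print(round(alpha, 2))  # -0.33, which passes the α >= -0.5 flat-spectrum cut
```

Note that the summed NVSS flux density within 70 arcsec of the GB6 position plays the role of `s_low` here, so a nearby steep-spectrum companion can steepen the measured index of an otherwise flat-spectrum source.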
SAMPLE SELECTION
Due to the magnification of the background source by gravitational lensing, we needed to determine the number counts and redshift distribution of the parent population below the CLASS 30 mJy flux density limit at 4.85 GHz. Therefore, we selected a representative sample of faint flat-spectrum radio sources which is complete to 5 mJy. We now discuss the selection of the JBF sample.
The NVSS selected sample
GB6 could not be used as the primary source catalogue because the JBF sample would include flat-spectrum radio sources with ∼ 5 GHz flux densities down to 5 mJy (recall that the GB6 catalogue is flux-density limited to S4.85 ≥ 18 mJy). Ideally, we would carry out our own, deeper sky survey at ∼ 5 GHz to identify a flux-density limited sample of faint flat-spectrum radio sources. However, this process would be observationally expensive. Therefore, using the VLA at 4.86 GHz, we undertook a targeted pseudo-survey of a well-defined sample of radio sources selected from the NVSS catalogue (S1.4 ≥ 2.5 mJy) within a restricted region of the sky. From these 4.86 GHz pseudo-survey observations we established a sample of NVSS-selected radio sources which met the CLASS two-point spectral index criterion (α^4.86_1.4 ≥ −0.5) and had S4.86 ≥ 5 mJy. This process is slightly different to the one used for the selection of the CLASS complete sample (see Section 2). Therefore, we now discuss any possible bias which the 4.86 GHz pseudo-survey may have introduced.
The NVSS S1.4 ≥ 2.5 mJy limit was chosen to ensure that a sample of flat-spectrum radio sources with S4.86 ≥ 5 mJy was selected. However, this limit also imposed on the pseudo-survey a bias against faint and highly inverted flat-spectrum radio sources with α^4.86_1.4 ≥ 0.56 (e.g. for a 5 mJy source at 4.86 GHz). Assuming that the spectral index distribution of the flat-spectrum radio sources found by the pseudo-survey is the same as for the CLASS complete sample (see Figure 3 in Myers et al. 2003), we would expect 9.4 per cent of the sources to have α^4.86_1.4 ≥ 0.5. This does not mean that the pseudo-survey would not detect any of these inverted radio sources; as we will see in Section 4.1, 6 per cent have α^4.86_1.4 ≥ 0.5. It is only the few highly inverted radio sources (3.4 per cent) at the 5 mJy limit of the pseudo-survey which would be missed.
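The quoted bias threshold follows directly from the two survey flux density limits: a source at the 5 mJy 4.86 GHz limit is missed if its 1.4 GHz flux density falls below the 2.5 mJy NVSS limit, that is, if its spectral index exceeds the value computed here.

```python
# Quick check of the inverted-spectrum bias threshold: the steepest rising
# spectral index that still allows a 5 mJy (4.86 GHz) source to appear in
# the NVSS catalogue at its 2.5 mJy (1.4 GHz) limit.
import math

s_486_limit = 5.0   # mJy, pseudo-survey flux density limit at 4.86 GHz
s_14_limit = 2.5    # mJy, NVSS flux density limit at 1.4 GHz

alpha_max = math.log(s_486_limit / s_14_limit) / math.log(4.86 / 1.4)
print(round(alpha_max, 2))  # 0.56, matching the threshold quoted above
```

Brighter 4.86 GHz sources tolerate steeper inverted spectra before dropping out of NVSS, which is why the incompleteness is confined to the faint end of the pseudo-survey.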
The GB6 survey was conducted with the old 300 ft (91 m) Telescope at Green Bank which had a beam size of ∼ 3.5 arcmin, whereas our 4.86 GHz pseudo-survey observations were carried out using the VLA, with a beam size of only a few tens of arcseconds. This change in resolution results in two effects. First, the increase in resolution introduced the possibility of the pseudo-survey observations resolving several discrete sources that would have been identified as a single radio source by GB6. When this occurred, we summed the 4.86 GHz flux density of the separate sources to make a single 'radio' source (the details of this process are given in Section 3.3). Second, the higher resolution provided by our interferometric VLA observations could result in extended radio emission being partially or completely resolved out. However, since the aim of this project is to select a sample of flat-spectrum radio sources, which are typically compact, we expect this to have a negligible effect on our sample completeness.
The number of NVSS radio sources with S1.4 ≥ 2.5 mJy is approximately 44 sources deg−2. Therefore, to define a complete low flux density sample which was also straightforward to follow up at optical wavelengths, sources were selected from 16 circular fields with radii ranging from 0.3 to 1 degrees within the region of sky 8 h ≤ α ≤ 13 h and δ ∼ 0°. Where possible, fields were chosen to coincide with the Anglo-Australian Observatory 2dF Galaxy Redshift Survey (Folkes et al. 1999) in the hope that some of the flat-spectrum radio sources would have measured redshifts. There are 1299 sources in the complete S1.4 ≥ 2.5 mJy sample within a sky area of 29.3 deg² (≡ 8.93 × 10⁻³ sr).
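The quoted sky area conversion and source density are easily verified with a short calculation:

```python
# Quick check of the quoted survey numbers: 29.3 deg^2 in steradians, and
# the NVSS source density above 2.5 mJy implied by the 1299-source sample.
import math

area_deg2 = 29.3
area_sr = area_deg2 * (math.pi / 180.0) ** 2   # 1 deg^2 = (pi/180)^2 sr
density = 1299 / area_deg2                     # sources per square degree

print(f"{area_sr:.2e} sr")   # ~8.93e-03 sr, matching the value quoted above
print(round(density))        # ~44 sources per deg^2
```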
VLA 4.86 GHz pseudo-survey observations
The complete NVSS-selected S1.4 ≥ 2.5 mJy sample was observed at 4.86 GHz with the VLA in CnD configuration on 1999 March 02 (6 h) and 1999 March 05 (4 and 3 h), and in D configuration on 1999 May 21 (12 h). Each source was observed for 45 or 50 s, using a 10 s correlator integration time. The data were taken through two 50 MHz IFs, which were centred at 4.835 GHz and 4.885 GHz, respectively. 3C286 and 3C48 were used as the primary flux density calibrators and suitable phase reference calibrators, selected from the JVAS catalogue, were observed every 15 to 30 minutes. The typical beam size was ∼ 20 × 13 arcsec² with an rms map noise ∼ 300 µJy beam−1. A summary of the VLA 4.86 GHz pseudo-survey observations is given in Table 1.
The data were calibrated and edited in the standard way using the AIPS (Astronomical Image Processing Software) package. To ensure that the imaging of the data was carried out in an efficient and consistent manner, all 1299 pointings were mapped within the Caltech VLBI difference mapping package (DIFMAP; Shepherd 1997) using a modified version of the CLASS mapping script (Myers et al. 2003). The script automatically detected and cleaned surface brightness peaks above 1.5 mJy beam−1 which had a signal-to-noise ratio greater than 6 (typically 2.4 mJy beam−1), within a sky region of 2048 × 2048 arcsec² in size around the phase centre. Natural weighting was used throughout to maximize the overall signal-to-noise and elliptical Gaussian model components were fitted to the data.

Table 2. The JBF 4.86 GHz catalogue. The survey name of each flat-spectrum radio source is given in column 1. The J2000 right ascension and declination are listed in columns 2 and 3, respectively. For each source, the peak surface brightness (column 4) and the integrated flux density (column 5) from model fitting to the uv-data is reported. The radio morphology of each JBF source has been classified as either unresolved (U) or extended (E) in column 6. The particulars of the 4.86 GHz observation of each object are given in columns 7 to 10. The 1.4 GHz NVSS flux density within 70 arcsec of the JBF position (column 11) has been used to calculate the 1.4-4.86 GHz spectral index of each source in column 12.
The JBF sample
The pseudo-survey observations were carried out to emulate what was done for the GB6 survey using the old 300 ft (91 m) Telescope at Green Bank. However, the GB6 survey has a beam size of ∼ 3.5 arcmin, which is significantly larger than the pseudo-survey 20 × 13 arcsec² beam size. This introduced the possibility of the pseudo-survey observations resolving discrete sources that would have otherwise been identified as a single radio source by GB6. This issue was also confronted during the selection of the CLASS complete sample where the NVSS beam size (45 arcsec) was ∼ 4 times smaller than the GB6 beam size. To overcome this relative beam size problem, Myers et al. (2003) added all the NVSS 1.4 GHz flux density within 70 arcsec of the GB6 position to determine the 1.4 GHz flux density of each 'source'. We have adopted the same strategy for the pseudo-survey. The 4.86 GHz radio emission from those pseudo-survey sources within 70 arcsec of each other was added together to make a single radio source and entered into the 4.86 GHz pseudo-survey catalogue. As the pointings for the 4.86 GHz pseudo-survey observations were taken from the NVSS catalogue, there was the possibility that a source was detected in more than one field. When this occurred, the data from the nearest pointing were used. The 4.86 GHz catalogue was then cross-referenced with the NVSS catalogue. As with the selection of the CLASS complete sample, the total 1.4 GHz flux density within 70 arcsec of the 4.86 GHz position was added and used to determine the two-point spectral index of each source.
The pseudo-survey catalogue contains 736 sources detected at 4.86 GHz with the VLA. Of these sources, 418 are in the flux density limited sample of S4.86 ≥ 5 mJy. This results in a source density above 5 mJy of about 14 sources deg−2. For the pseudo-survey, this equates to one source every 10³–10⁴ beam areas. For a radio source population whose differential number counts are described by a power law with an index of 2 (see Section 4.2), we would expect confusing sources (i.e. those at a density of 1 per 20 beam areas) to contribute about 0.1 mJy to the flux density of a 5 mJy source. This is well within the observational uncertainties of the pseudo-survey flux densities. Therefore, source confusion will have a negligible effect on the pseudo-survey catalogue at the 5 mJy flux density limit. The total number of flat-spectrum radio sources defined by the CLASS two-point spectral index criteria within the S4.86 ≥ 5 mJy flux density limited sample is 117 sources. It is these 117 flat-spectrum radio sources which form the JBF sample. A summary of the number of sources observed, detected and found to be flat-spectrum is given in Table 1.

Table 3. The JBF 8.46 GHz VLA data. Columns 1 to 5 are the same as in Table 2. The rms noise of each map is given in column 6. The 1.4–8.46 and 4.86–8.46 GHz spectral indices of each source are given in columns 7 and 8, respectively.

Phase referencing was carried out with suitable JVAS sources. The typical beam size was ∼ 0.7 × 0.2 arcsec² and the rms map noise was ∼ 180 µJy beam−1. The data were reduced using AIPS. Mapping and self-calibration were carried out within DIFMAP. Natural weighting was used and elliptical Gaussian model components were fitted to the data.
All 59 sources were detected and have compact structures (Gaussian FWHM ≤ 170 mas). The positions, flux densities and spectral indices for each source are given in Table 3. Only one source was found to have multiple components. JBF.041 has two compact components (Gaussian FWHM of 60 and 120 mas) separated by 1.47 arcsec. Independently of this work, JBF.041 was identified as a gravitational lens candidate from the PMN survey (Parkes-Massachusetts Institute of Technology-National Radio Astronomy Observatory; Griffith & Wright 1993). Extensive radio and optical observations by Winn et al. (2002) have shown PMN J1632−0033 (JBF.041) to be a gravitational lens system, with three lensed images of a z = 3.42 quasar (see also Winn, Rusin & Kochanek 2003, 2004).
Radio morphologies and extended emission
We have investigated the morphological properties of the JBF sample by classifying each flat-spectrum radio source as either unresolved (U) or extended (E) in Table 2. Unresolved radio sources are those consisting of a single radio component (within a 70 arcsec search radius from the brightness peak) with a model Gaussian FWHM which is smaller than the observed beam size of the VLA (also given in Table 2). The remainder are considered extended.
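The classification rule described above amounts to the following sketch (column 6 of Table 2); the function name and arguments are illustrative, with the thresholds as stated in the text.

```python
# Sketch of the morphology classification used for the JBF catalogue:
# a source is "unresolved" (U) if it consists of a single component (within
# the 70 arcsec search radius) whose fitted Gaussian FWHM is smaller than
# the observed beam; otherwise it is "extended" (E).
def classify_morphology(n_components, fwhm_arcsec, beam_arcsec):
    if n_components == 1 and fwhm_arcsec < beam_arcsec:
        return "U"
    return "E"

print(classify_morphology(1, 8.0, 20.0))   # U: single, sub-beam component
print(classify_morphology(2, 8.0, 20.0))   # E: multiple components
print(classify_morphology(1, 25.0, 20.0))  # E: component larger than beam
```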
Our analysis of the 4.86 GHz VLA model fitting data finds 85 per cent of the JBF sample to have unresolved structures. Evidence for extended emission is found in 15 per cent of the radio sources. The large fraction of unresolved point sources in the JBF sample is not unexpected: the high selection frequency, coupled with the tight constraint on the source spectral index, should have produced a sample of core-dominated radio sources. In Fig. 1 we show the spectral index distribution of the complete JBF sample (solid line). The α^4.86_1.4 ≥ −0.5 spectral index cut can be clearly seen in the distribution. Of the full JBF sample, 32 per cent have a rising radio spectrum between 1.4 and 4.86 GHz (i.e. α^4.86_1.4 ≥ 0) and only 6 per cent are highly inverted (i.e. α^4.86_1.4 ≥ 0.5). The total sample of 117 flat-spectrum radio sources has a mean spectral index of −0.09 with an RMS of 0.31 and a median spectral index of −0.15. We also show in Fig. 1, with the broken line, the spectral index distribution of those sources which are considered extended. The broken line effectively divides each spectral index bin into the contribution from unresolved and extended radio sources. It is apparent that the extended radio sources tend to have on average steeper radio spectra (mean spectral index is −0.22 with an RMS of 0.25; median spectral index is −0.23) when compared to the unresolved population (mean spectral index is −0.07 with an RMS of 0.32; median spectral index is −0.10). The steeper spectra are likely caused by the presence of jet emission in the extended sources, or due to contamination from another independent (steep spectrum) radio source within 70 arcsec of the brightness peak.
We have searched for any evidence of extended jet emission in the JBF sample by inspecting the radio maps of those sources observed during the course of the 1.4 GHz FIRST survey (Faint Images of the Radio Sky at Twenty centimeters; Becker, White & Helfand 1995; beam size ∼5 arcsec). We found that only 33 of the 117 JBF sources have FIRST radio maps available due to the limited sky coverage of the FIRST survey. The mean spectral index of these 33 JBF sources is −0.11, with 18 per cent (6 sources) defined as extended in Table 2. Therefore, the 33 sources appear to form a representative sub-sample of the JBF catalogue (cf. the mean spectral index and extended source fraction of the full JBF sample given above). The 33 sources which make up the FIRST sub-sample are JBF.013 to JBF.031 and JBF.104 to JBF.117. We define sources as unresolved at 1.4 GHz if they consist of a single radio component with a deconvolved FWHM of less than 4 arcsec within 30 arcsec of the JBF position in the FIRST radio maps. Note that during the selection process of the JBF sample we used a search radius of 70 arcsec in order to remain consistent with the selection process used by CLASS. Here, we only consider radio emission within 30 arcsec of the JBF position because we are now looking for evidence for jet emission associated with each JBF source. Using the above criteria we find that only 3 JBF sources (JBF.025, JBF.026 and JBF.031) show signs of extension in the FIRST radio maps. These 3 sources were also identified as extended by the 4.86 GHz pseudo-survey observations. The 3 other extended sources from the 4.86 GHz pseudo-survey imaging (JBF.020, JBF.108 and JBF.117) had compact structures in the FIRST maps, but were found to have another independent radio source between 30 and 70 arcsec from the JBF position. The FIRST images of JBF.025, JBF.026 and JBF.031 are given in Fig. 2 and a brief description of each source is given below.
JBF.025 appears as a single extended radio source with a FIRST 1.4 GHz flux density of 7.6 mJy and a deconvolved FWHM of 4.67 arcsec. The radio structure appears unremarkable with a slight extension to the north. There is another FIRST radio source ∼45 arcsec toward the east.
JBF.026 shows clear extended structure elongated toward the south-west. The 1.4 GHz flux density measured by the FIRST survey is 12.1 mJy and the deconvolved FWHM is 7.97 arcsec.

JBF.031 has the most interesting radio structure of the three extended JBF sources. JBF.031 consists of three radio components extending in a north-south direction, separated by 27.5 arcsec. The most southern component, JBF.031a, has the largest 1.4 GHz flux density of the three radio components (12.3 mJy) and is the most compact (deconvolved FWHM of 1.28 arcsec). Also, JBF.031a is the only radio component to be detected at 8.46 GHz during the pseudo-survey observations (see Table 3). The spectral index of JBF.031a between 1.4 (FIRST) and 8.46 GHz (JBF) is flat/rising (α^8.46_1.4 = +0.13 ± 0.06). Therefore, we associate JBF.031a with the radio core of JBF.031. The remaining two components to the north, JBF.031b and JBF.031c, have 1.4 GHz flux densities of 3.5 and 5.1 mJy and deconvolved sizes of 4.57 and 4.96 arcsec, respectively. Both JBF.031b and JBF.031c have structures consistent with a radio jet.
Assuming that the FIRST-subsample is representative of the whole JBF sample, we find that only 9 per cent of the JBF sample show evidence for extended jet emission, with the vast majority being unresolved and compact. Of course, further 1.4 GHz imaging of the remaining 84 JBF sources not observed by FIRST could confirm this result. In general, we find that the JBF sample is composed of compact radio sources with little or no evidence of extended jet emission on the arcsecond scales probed here.
Radio source number counts
The differential number counts of the CLASS parent population have been determined by combining the JBF and CLASS complete samples. We excluded from our analysis the number counts data from the JBF sample at S ≥ 25 mJy because i) the small number of JBF sources with flux densities above 25 mJy led to large uncertainties in the number counts per flux density bin (60 to 100 per cent), and ii) the CLASS complete sample provides excellent number counts information over the 30 mJy to ∼1 Jy flux density range. Fig. 3 shows the differential number counts of flat-spectrum radio sources as a function of flux density. The JBF number counts follow on smoothly from those obtained with the CLASS complete sample. Using a least-squares fitting technique, we find the differential number counts of flat-spectrum radio sources with S_4.85 ≥ 5 mJy are described by the power law,

n(S) = (6.91 ± 0.42) (S_4.85 / 100 mJy)^(−2.06±0.01) mJy^−1 sr^−1. (1)

The reduced χ^2 of the fit is 1.31 and the number of degrees of freedom (ndf) is 21. Clearly, this power-law fit has been heavily weighted by the CLASS complete sample data, which has very small uncertainties in the number of sources per flux density bin. As the CLASS gravitational lensing statistics will be particularly sensitive to any change in the differential number counts slope, η, where n(S) ∝ S^η, below 30 mJy, two separate power laws have been fitted to the parent population data above and below the CLASS 30 mJy flux density limit. We find from the resulting least-squares fits,

n(S) = (7.97 ± 2.23) (S_4.85 / 100 mJy)^(−1.96±0.12) mJy^−1 sr^−1, (2)

for 5 ≤ S < 30 mJy (reduced χ^2 = 0.31; ndf = 5) and,

n(S) ∝ (S_4.85)^(−2.07±0.02), (3)

for S ≥ 30 mJy (reduced χ^2 = 1.73; ndf = 14). The large uncertainty in the slope below 30 mJy is due to the small number of sources in the JBF sample.
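The log-log least-squares procedure used for these fits can be sketched as follows. The data points here are synthetic, drawn from a known power law, not the actual JBF/CLASS number counts.

```python
import math

# Least-squares power-law fit n(S) = k * (S / 100 mJy)**eta, done as a
# straight-line fit in log-log space: slope = eta, intercept = log10(k).

def fit_power_law(s_mjy, n_counts, s0=100.0):
    """Return (k, eta) from an ordinary least-squares fit of log n vs log(S/s0)."""
    xs = [math.log10(s / s0) for s in s_mjy]
    ys = [math.log10(n) for n in n_counts]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    eta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
           / sum((x - xbar) ** 2 for x in xs))
    k = 10.0 ** (ybar - eta * xbar)
    return k, eta

s = [5.0, 10.0, 30.0, 100.0, 300.0, 1000.0]    # flux-density bins (mJy)
n = [7.0 * (x / 100.0) ** -2.06 for x in s]    # noiseless synthetic counts
k, eta = fit_power_law(s, n)
print(round(k, 2), round(eta, 2))              # recovers 7.0 -2.06 exactly
```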
The differential number counts slope below 30 mJy presented here is slightly different to the result reported by Chae et al. (2002) (η = −1.97 ± 0.14). The small change in η below 30 mJy is due to a recent update of the NVSS catalogue in 2004, which led to an increase in the number of flat-spectrum radio sources within the JBF sample. This change in η has a negligible effect on the CLASS gravitational lensing statistics, with ΩΛ unchanged from the result published by Chae et al. (2002).
Fraction of radio sources with flat radio spectra
In Fig. 4 the percentage of radio sources with flat radio spectra (α^4.85_1.4 ≥ −0.5) as a function of flux density is presented. Those data above 30 mJy come from the combination of the NVSS and GB6 catalogues, and those data below 30 mJy are taken from the 4.86 GHz pseudo-survey. There is a clear change in the spectral composition of the radio source population with flux density. At high flux densities (> 1 Jy), the radio source population is dominated by the powerful flat-spectrum quasars. As the quasar population declines with flux density (e.g. Falco et al. 1998; Marlow et al. 2000; Muñoz et al. 2003), so does the fraction of sources with flat radio spectra. From ∼10 to 100 mJy, the fraction remains constant with about 1 in 4 radio sources having flat spectra. Also, those data from the pseudo-survey appear to closely match the results from NVSS and GB6 at the transition point around 30 mJy, although the uncertainties in the fraction of sources with flat spectra from the pseudo-survey are quite large. Interestingly, there is a hint of an increase in the fraction of radio sources with flat radio spectra below 10 mJy, to about 1 in 3 radio sources. A possible explanation for this increase is that the pseudo-survey observations partially or completely resolved out extended steep-spectrum radio sources which would have otherwise been detected by the ∼3.5 arcmin beam of the GB6 survey. Although this does not affect the number of compact flat-spectrum radio sources found by the VLA pseudo-survey, it could result in an increase in the fraction of radio sources identified with flat spectra at the survey limit (∼5 mJy). Alternatively, the fraction of radio sources with flat radio spectra may be genuinely increasing. However, a much larger survey of the mJy level radio source population using a radio array/telescope with a greater sensitivity to extended emission will need to be carried out to confirm this intriguing result.

Figure 2. FIRST 1.4 GHz radio maps (Becker et al. 1995) of the extended radio sources from the JBF sample. (left) JBF.025 shows a slight extension to the north and another (possibly independent) radio source 45 arcsec to the east. The contour levels are (−3, 3, 6, 12, 24, 48)×170 µJy beam^−1. (centre) JBF.026 shows extension toward the south-west. The contour levels are (−3, 3, 6, 12, 24)×157 µJy beam^−1. (right) JBF.031 consists of three radio components extending to the north; a core (A) and two jet components (B and C). The contour levels are (−3, 3, 6, 12, 24, 48)×142 µJy beam^−1. The grey-scales in each map are in units of mJy beam^−1.
CONCLUSIONS
The selection of the JBF sample from a 4.86 GHz VLA pseudo-survey has been presented. We find the vast majority of the 117 flat-spectrum radio sources within JBF to be compact and unresolved over the arcsecond scales probed here. Using the JBF and CLASS complete samples, we have determined the differential number counts slope of the CLASS parent population above and below 30 mJy to be −2.07 ± 0.02 and −1.96 ± 0.12, respectively. The parent population number counts information presented here forms a vital part of the CLASS gravitational lensing statistics.
However, these number counts must be coupled with complete redshift information for the JBF sample because the lensing optical depth is strongly dependent on the redshift of the background source (e.g. Turner et al. 1984). The analysis of the CLASS gravitational lensing statistics performed by Chae et al. (2002) assumed that the mean redshift of the flat-spectrum radio source population below 25 mJy was z̄ = 1.27, i.e. the same as for brighter samples of flat-spectrum radio sources (e.g. Marlow et al. 2000). If the true mean redshift of the flat-spectrum radio source population below 25 mJy differs from 1.27 by ±0.1, this would result in a change of ∓0.06 in the value of ΩΛ obtained from the CLASS gravitational lensing statistics (see Figure 10 of Chae 2003). As such, it is crucial we establish the redshift distribution of faint flat-spectrum radio sources below the CLASS flux-density limit. In a companion paper to this one (McKean et al. in preparation), we will present the optical/infrared followup of a small subsample of JBF sources with flux densities between 5 and 15 mJy. Our preliminary results, based on a combination of redshifts obtained from spectroscopy and photometry, suggest that the mean redshift of the JBF selected subsample is z̄ ∼ 1.2. Therefore, we expect little change in the value of ΩΛ once the redshift information for the parent population below 25 mJy is incorporated into the CLASS gravitational lensing statistics analysis.

Figure 4. The percentage of radio sources with flat radio spectra at 4.85 GHz as a function of flux density. The data above 30 mJy (red crosses) have been calculated using the NVSS and GB6 catalogues. The data below 30 mJy (green circles) have been taken from the VLA pseudo-survey.
Toward Standardizing a Lexicon of Infectious Disease Modeling Terms.
Disease modeling is increasingly being used to evaluate the effect of health intervention strategies, particularly for infectious diseases. However, the utility and application of such models are hampered by the inconsistent use of infectious disease modeling terms between and within disciplines. We sought to standardize the lexicon of infectious disease modeling terms and develop a glossary of terms commonly used in describing models' assumptions, parameters, variables, and outcomes. We combined a comprehensive literature review of relevant terms with an online forum discussion in a virtual community of practice, mod4PH (Modeling for Public Health). Using a convergent discussion process and consensus amongst the members of mod4PH, a glossary of terms was developed as an online resource. We anticipate that the glossary will improve inter- and intradisciplinary communication and will result in a greater uptake and understanding of disease modeling outcomes in health policy decision-making. We highlight the role of the mod4PH community of practice and the methodologies used in this endeavor to link theory, policy, and practice in the public health domain.
approaches used to analyze a model, and outdated definitions that do not reflect current knowledge. Ambiguities caused by inconsistent use of terms in models can have enormous impact on model design, usefulness for public health, and understanding and comparability of model outcomes. Standardization of the terms used in modeling could improve this process and its applications. In recent years, considerable efforts have been made to develop unity of thought about modeling approaches in health sciences, including the particular case of infectious diseases (5). For example, there have been efforts to improve the use of disease modeling terminology through the creation of topic specific glossaries (6,7). While such glossaries define commonly used terms and provide an understanding of the vocabulary and methods used in the disease modeling literature (3,4,6,7), they do not aim at standardizing a common lexicon based on infectious disease terminology. Furthermore, the existing modeling glossaries do not take into account the spectrum of definitions of terms used by various health disciplines such as public health, epidemiology, and clinical medicine.
To address this gap, we developed a multidisciplinary dialog between the members of a recently established virtual "Community of Practice, " called mod4PH (Modeling for Public Health) (8), whose goal is to enhance the understanding of modeling, and its applications for public health and clinical decision-making related to infectious disease prevention and control. The mod4PH group, a LinkedIn forum, consists of individuals from different disciplines including disease modeling, public health, infectious disease epidemiology, and policy analysis. The objective of the current study in the mod4PH group was to produce a "glossary of terms, " which describes the standard use and definition of infectious disease modeling terms. This glossary and the guidelines for the use of terms were developed based on literature review, online discussion among mod4PH participants, and subsequent consensus. Here, we discuss the process by which the glossary was created and provide a summary of the online forum discussions surrounding the usage and definition of key terms in modeling infectious diseases.
MATERIALS AND METHODS
A comprehensive literature review of infectious disease terms used in modeling studies was combined with an online discussion in "mod4PH, " a predominately social media platform complemented by a face-to-face annual meeting. Considering the importance of a standard lexicon of terms in understanding modeling outcomes and their comparability, we discussed several key terms that are commonly used in disease models.
mod4PH Community of Practice
Sponsored by the National Collaborating Center for Infectious Diseases (NCCID), Canada, mod4PH represents a virtual community of practice that promotes the best procedures for research activities and collaborations and integrates resources and expertise for knowledge translation to improve the uptake of infectious disease modeling (8). The mod4PH community of practice currently hosts members from various disciplines and geographic locations and recruits individuals with the required expertise and experience relevant to its mandate.
The mod4PH forum was initially established with a number of participants from the face-to-face 2014 Pan-InfORM workshop (9). Following the inception of mod4PH, NCCID announced the forum on Twitter, electronic mail, and other media in order to reach out to modelers and medical and public health professionals to enhance the depth and breadth of expertise in the forum. At the time of this study, there were 77 mod4PH members of whom approximately 50% were modelers, and the remaining were from other relevant disciplines as indicated by information included in their LinkedIn profiles. The majority of mod4PH members are located in North America (Canada and the United States). Members from Western and Southern Europe, South Central Asia, South America, and Australia and New Zealand constitute approximately one-fifth of the forum.
Development of a Common Lexicon
The literature review was conducted for specific terms that are used in peer-reviewed published articles in a wide array of journals published in English. The search engines "PubMed, " "Google Scholar, " "Web of Science, " and "Scopus" were used to find terms used in both mathematical epidemiology and public health with an ambiguous and discrepant definition. We also consulted a previous review of terms in modeling influenza infection, which relied on a variety of sources including systematic reviews, peerreviewed published articles, books, advisory health reports, and websites of public health agencies and organizations (e.g., World Health Organization, U.S. Centers for Disease Prevention and Control, Public Health Agency of Canada, and European Center for Disease Control) (4).
We considered essential terms that are used often in epidemiological models of infectious diseases with two main criteria: (i) a term was defined differently between articles or (ii) two different terms were used interchangeably, with the threshold that one of the criteria is met in at least two peer-reviewed articles. Terms and definitions identified in the review of relevant studies were classified as "discussion topics" based on their definitions and usage. Each week, a discussion topic and the associated references were posted on the mod4PH forum (8), which remained active for the period of study between November 2015 and April 2016. The topics typically opened with a question or comment on inconsistent terms that would draw members of the forum with relevant expertise into the discussion. A total of 25 terms were discussed in 13 discussion sessions (10).
Forum Discussion
For the purpose of this study, a convergent discussion process was followed (11), which involved a technique that allowed participants to not only provide feedback on the discussion topic but also propose questions with the flexibility to probe and explore emerging issues in different contexts (e.g., epidemiological, clinical, public health, modeling, and specific disease or population). Convergence was achieved by asking probing questions that became progressively more detailed and specific in order to clarify the definitions and appropriate use of terms in models. The general nature of questions and ensuing discussions led the forum participants to highlight the relevance of each term to the conceptual modeling frameworks, identify the most commonly used lexicon with reference to published studies in different disciplines, and challenge, change, or confirm emerging interpretations to develop a glossary of terms that can be used to standardize the vocabulary in models. New topics were introduced each week and remained open for discussion for the duration of the project to: (i) increase understanding and clarity of modeling terms for their definition and use, (ii) challenge emerging issues in closely relevant concepts and terminologies of infectious diseases, and (iii) ensure that the study of terms was not prematurely closed.
MODELING TERMS, PARAMETERS, AND THEIR RELATIONS
Models of infectious disease dynamics are developed to reflect: (i) the biology of the infectious agent and (ii) the physiological processes and attributes of the disease at both the individual and population levels. These are defined by: (a) the time course of the stages of disease progression through the infection process, from exposure to recovery or death (which constitutes the area of clinical medicine) and (b) a time course of states of transmission potential from exposure to post infectiousness (which constitutes the area of public health and epidemiology) (12). The biology of the infectious agents and the pathophysiological processes include the disease statuses of the individuals, which determine the susceptibility of individuals to infection or transmissibility of the disease. The change of status described in models is often linked to the spread of disease characterized by the population-level phenomena of disease incidence or prevalence, as well as parameters affecting these phenomena such as the generation interval and serial interval. In the following sections, we report on the outcomes of discussions from the online mod4PH forum on the relevant modeling terms.
Compartmental Models
In most disease dynamic models, a compartmental structure is developed that divides the population into several classes of individuals according to their epidemiological statuses. These include: susceptible (S), exposed (E), infectious (I), and recovered (R), and their relationship describes a basic disease transmission dynamic model, referred to as the classical SEIR model (Figure 1) (13). Although other context and disease-specific compartments may be added, in this paper, we restrict our attention to this basic framework. Individuals may transition between these classes as a change of status occurs due to disease-related processes. These models may be used to compute quantities such as the basic reproduction number and the prevalence and incidence of disease in the population (13).
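The classical SEIR flow described above can be sketched numerically in a few lines of code. This is a minimal illustration, not taken from the paper; all parameter values and the population size are assumptions.

```python
# Forward-Euler sketch of the classical SEIR compartmental model.
# All rates and population sizes below are illustrative assumptions.

def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """Advance the S, E, I, R compartments by one time step.
    beta: transmission rate; sigma: 1/(latent period); gamma: 1/(infectious period)."""
    n = s + e + i + r
    new_infections = beta * s * i / n    # incidence: the S -> E flow per unit time
    ds = -new_infections
    de = new_infections - sigma * e      # E -> I at the end of the latent period
    di = sigma * e - gamma * i           # I -> R at the end of the infectious period
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

s, e, i, r = 9990.0, 0.0, 10.0, 0.0      # 10 initially infectious individuals
for _ in range(1000):                    # simulate 100 time units with dt = 0.1
    s, e, i, r = seir_step(s, e, i, r, beta=0.5, sigma=0.25, gamma=0.2, dt=0.1)
print(round(s + e + i + r))              # total population is conserved: 10000
```

Because the four flow terms cancel pairwise, the total population is conserved, which is a useful sanity check on any compartmental implementation.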
Susceptible, Infected, and Recovered
Susceptible refers to a non-infected individual (or population) who may become infected through contact with individuals or environmental organisms that can transmit the disease. Individuals may show varying degrees of susceptibility based on a number of host factors (e.g., immunity; see "Individual Immunity and Herd Immunity"). Successful transmission of the pathogen to a susceptible individual will result in the susceptible individual becoming infected, either clinically or subclinically. However, even in the absence of successful transmission, the susceptible individual is referred to as "exposed" (as used in epidemiology and other health-related disciplines). In contrast, in compartmental models of disease transmission, the exposed compartment is used to label individuals as "infected, " with the assumption of successful transmission. It is assumed that individuals in the exposed class are incapable of transmitting the disease, unlike those in the infectious compartment.
In disease dynamic models, infected individuals are often considered to be infectious, that is, they are capable of transmitting the disease. When this classification is made, it is important for the model to clearly state that these individuals are both "infected and infectious" since the assumption of infectivity is inconsistent with the epidemiological observation that an infected individual is not necessarily infectious.
For a disease in which the causative pathogen can be eliminated from the infected person or become dormant, the infectious stage is followed by a non-infectious stage. Models of disease transmission dynamics refer to this non-infectious stage as "recovered, " since the individuals in this stage can no longer transmit the disease. However, we note that, in the epidemiological and clinical contexts, a non-infectious individual is not necessarily pathogen-free.
Exposed and Latent
In most disease transmission dynamic models, the term "exposed" is used to refer to individuals who are infected but are not yet capable of transmitting the disease (14,15). However, this concept is inconsistent with the observation used in epidemiology referring to susceptible individuals who have been in contact with infectious individuals, but the success of transmission is not determined. When the exposure to the infectious individual occurs with successful pathogen transmission to a susceptible individual, this leads to colonization or infection. To improve a model that includes an exposed compartment, it is suggested to use the term "latent" to represent the class of individuals who have been infected following exposure to disease but are not yet capable of transmitting the disease.
The term "latent" describes an infected individual who cannot transmit the pathogen, regardless of the time for exposure to the pathogen. For a number of infectious diseases (e.g., influenza), the individual becomes latent immediately following the exposure to pathogen when transmission occurs (14,15). However, it is important to note that an individual may be latent at a different stage of disease. For example, individuals with tuberculosis can have active infection in which they shed bacteria or latent infection (possibly after an effective course of treatment) in which there are no clinical symptoms and no pathogen transmission, but the bacteria is still present in the body (16,17). We note that, in clinical infectious diseases, latency can occur before symptoms, without symptoms, or after symptoms.
Periods: Latent, Incubation, and Infectious
The latent period refers to the period of time between exposure to a disease with successful transmission and the onset of infectiousness. During the latent period, infected individuals do not transmit the disease. This period has commonly been referred to as the "exposed period" in infectious disease modeling; however, the exposed period may suggest a period of time during which the event of exposure to disease continually occurs, rather than a period of time following exposure. It is therefore suggested that the "latent period" should be used as the standard lexicon in models of disease transmission.
The incubation period is defined as the period of time between exposure to the disease (if transmission occurs) and the onset of clinical symptoms. For diseases in which the onset of infectiousness coincides with the appearance of symptoms, the latent and incubation periods are the same. However, in a number of diseases (e.g., influenza, measles, and varicella), the infected individual may become infectious before the onset of symptoms. For such diseases, the models may account for the incubation period by including the latent and presymptomatic periods (see "Presymptomatic and Asymptomatic").
The infectious period is defined as the time interval in which the infected individual is capable of transmitting the disease.
Since this period may overlap with the incubation period, it may be difficult to obtain accurate estimates of the infectious period.
Incidence and Prevalence
Disease incidence is defined by both epidemiologists and modelers as the number of new cases in a population generated within a certain time period. In SEIR models, it is often calculated as the product of the number of susceptible individuals, the number of infected individuals (who are considered infectious and can transmit the disease), and the transmission rate per contact per unit of time. In continuous time models, the incidence of a disease is measured at a single time point and typically represents the rate at which the infected population changes. However, in discrete time models (18), the incidence is considered over a time period, which may represent the average generation interval (see "Generation Interval and Serial Interval"). If the generation interval is sufficiently small, then the definitions of incidence in continuous and discrete time models converge. Due to the discrepancy in these definitions (4), it has been suggested that, in continuous models, referring to the measure as "instantaneous incidence" may be more appropriate than incidence.
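The convergence noted above, of incidence counted over a time window toward the instantaneous incidence as the window shrinks, can be illustrated numerically. The exponential cumulative-infection curve and the growth rate below are purely illustrative assumptions.

```python
import math

# Incidence counted over a window of length delta, divided by delta, approaches
# the instantaneous incidence (the derivative) as delta shrinks.
growth = 0.4                              # assumed early-epidemic growth rate

def cumulative_infections(t):
    """Illustrative cumulative-incidence curve (exponential early growth)."""
    return math.exp(growth * t)

t = 2.0
instantaneous = growth * cumulative_infections(t)   # derivative of the curve at t
for delta in (1.0, 0.1, 0.001):
    windowed = (cumulative_infections(t + delta) - cumulative_infections(t)) / delta
    print(round(windowed / instantaneous, 3))       # ratio tends to 1 as delta -> 0
```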
Disease prevalence is defined as the number of cases of a disease at a single time point in a population. In compartmental models, the prevalence is represented by the number of infectious individuals at any time.
The Basic Reproduction Number (R0)
The basic reproduction number is defined as the average number of secondary cases caused by a single infectious individual in a totally susceptible population (19). This quantity is often used in deterministic compartmental models to determine whether an epidemic (in a demographic-free model) or endemic (in a model with demographics) will occur (R0 > 1) or if the disease will go extinct before causing an epidemic or endemic (R0 < 1). The case of R0 = 1 is referred to as a disease endemic, in which each case generates on average one additional case (20).
While there is broad agreement on the definition of the reproduction number, recent studies show that the methodology and type of model used to calculate R0 may lead to different estimates (21,22). Furthermore, variability in these estimates has raised concern about whether R0 should be used to determine the occurrence of an epidemic or endemic (21). In the context of simple susceptible-infectious-recovered (SIR) models, R0 is often derived from the relationship between the transmission rate (β) and the recovery rate (γ, the reciprocal of the infectious period), namely R0 = β/γ (23). Early in an epidemic, while the population remains almost entirely susceptible, the prevalence I satisfies I′ = γR0I − γI, with solution I(t) = I0 exp[γ(R0 − 1)t], where I0 is the initial number of infections at the onset of the epidemic. This solution provides an approximation for the exponential growth of the prevalence at the early stages of an epidemic when R0 > 1. In order to illustrate the relationship between R0, the prevalence, and the disease incidence, we rewrite the prevalence equation in the form I′ = γR0I − γI and add to it the equation for recovery, R′ = γI, which gives the incidence as I′ + R′ = γR0I. This leads to the relation (24): incidence = γR0 × (prevalence), and therefore R0 = incidence/(γ × prevalence).
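Under the SIR assumptions above, a short simulation confirms the stated relation between incidence and prevalence, incidence ≈ γR0 × prevalence, which holds while S ≈ N. The parameter values here are illustrative, not fitted to any disease.

```python
# Numerical check of the early-epidemic relation incidence = gamma*R0*prevalence.
beta, gamma, n = 0.6, 0.2, 1_000_000.0
r0 = beta / gamma                                 # basic reproduction number: 3.0

s, i, dt = n - 1.0, 1.0, 0.01                     # one initial infection
for _ in range(500):                              # a short early-epidemic run
    new_infections = beta * s * i / n             # incidence per unit time
    s = s - new_infections * dt
    i = i + (new_infections - gamma * i) * dt

ratio = (beta * s * i / n) / i                    # incidence / prevalence
print(round(r0, 1), round(ratio / gamma, 3))      # 3.0 3.0 (ratio/gamma ~ R0 while S ~ N)
```

Later in the epidemic, as S falls appreciably below N, the ratio drops below γR0, which is why the relation is only an early-stage approximation.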
Presymptomatic and Asymptomatic
The term "presymptomatic" describes a stage of disease in which an infected individual is infectious and can transmit the disease without presenting with symptoms. Individuals in the presymptomatic stage will proceed to become symptomatic at some point of time in the future. In contrast, asymptomatic refers to a disease stage in which individuals are infectious but are not exhibiting disease symptoms.
For example, influenza has various stages, including an asymptomatic stage in which individuals are capable of transmitting the disease but may have no knowledge of being infectious as they are not symptomatic. If these individuals develop a symptomatic infection, then the time period before the onset of symptoms is referred to as presymptomatic. A schematic representation of disease stages is illustrated in Figure 1.
Generation Interval and Serial Interval
In modeling, the generation interval refers to the period of time between the onset of the infectious period in a primary case to the onset of the infectious period in a secondary case infected by the primary case (25). In epidemiology, the serial interval is defined as the period of time between the onset of symptoms in a primary case to the onset of symptoms in a secondary case infected by the primary case (26). The generation interval is not an observable period based on symptoms since an infected individual may become infectious and transmit the disease before symptoms appear. However, the serial interval is an observable period that is determined with the onset of symptoms. When the infectious period starts with the onset of symptoms (i.e., the latent and incubation periods have the same duration), the generation interval coincides with the serial interval.
The use of these terms in modeling is disease-specific and depends on the transmission characteristics of the pathogen. For example, in influenza, the onset of infectiousness and symptoms may differ in an infected person, and therefore, the duration of the serial interval and the generation interval may not be the same. Using these measures, as has been done in published studies (25,27), to estimate some of the key epidemiological parameters, such as the reproduction number, may result in different outcomes in terms of transmissibility of the disease as well as different recommendations for public health interventions. However, since models are often based on the infectious period and not the onset of symptoms, it may be more appropriate to use the generation interval for understanding the transmission dynamics and estimating related parameters such as the basic reproduction number (28).
Individual Immunity and Herd Immunity
Immunity refers to protection against a disease. It can be acquired naturally in an individual by experiencing the disease and recovering from infection or by other means such as active vaccination or passive transfer of maternal antibody across the placenta (29,30). Herd immunity refers to the protection level of a population as a result of individuals' immunity against infection. Immunity in individuals (involving innate and adaptive T and B cell immune responses) can provide a wide range of protection from full (which completely prevents the occurrence of infection temporarily or permanently) to partial (which may not prevent infection but may reduce its severity and mitigate outcomes). The degree of an individual's immunity determines their susceptibility to infection or reinfection. Immunity can be partial or full, as previously stated, based on host or pathogen characteristics, and can wane over time. Compartmental transmission dynamic models often include the protection effects of immunity by averaging the levels of immune protection in individuals. It is, however, important to note that this average may not correspond to the potential disease transmission or disease outcomes at the individual level. Studies in immuno-epidemiology models are addressing these issues (31,32).
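For a rough quantitative intuition, a minimal sketch of the classical herd-immunity threshold under homogeneous mixing follows. The formula 1 - 1/R0 is a standard textbook simplification added here for illustration (it is not given in the text) and ignores the partial and waning immunity effects discussed above:

```python
# Classical herd-immunity threshold: the fraction of the population that
# must be immune so that an average case infects fewer than one other
# person. Assumes full immunity and homogeneous mixing (a simplification).
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

print(round(herd_immunity_threshold(2.0), 2))   # 0.5
print(round(herd_immunity_threshold(15.0), 2))  # 0.93
```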
Attack Rate
The attack rate describes the proportion of the population that becomes infected over a specified period of time. For diseases for which not all infected individuals develop symptoms (e.g., influenza), it may be more informative to calculate the "clinical attack rate" (which measures the proportion of the population that develops disease symptoms as a result of an infection).
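A worked example with hypothetical numbers may make the distinction between the two rates concrete:

```python
# Hypothetical numbers: a population of 10,000 people, 1,200 of whom become
# infected over the study period; 800 of the infected develop symptoms.
population = 10_000
infected = 1_200
symptomatic = 800

attack_rate = infected / population              # proportion infected
clinical_attack_rate = symptomatic / population  # proportion with symptomatic disease

print(attack_rate)           # 0.12
print(clinical_attack_rate)  # 0.08
```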
Vaccine Efficacy and Effectiveness
In epidemiological and clinical studies, vaccine efficacy refers to the percentage reduction in the attack rate of the vaccinated cohort compared to the unvaccinated cohort as observed in a randomized controlled (field) trial. Vaccine effectiveness refers to the ability of a vaccine to prevent infection or related outcomes in the population in real-world conditions. The inclusion of vaccine-induced immunity in disease dynamic models is generally governed by the reduction of disease transmission to vaccinated individuals. This reduction is often based on a parameter that quantifies vaccine efficacy or effectiveness. While these two terms have different meanings and are measured using distinct methods (33), many models have used them interchangeably to imply the average protection level of individuals in the population. It is important to clearly distinguish between the two terms of efficacy and effectiveness, and models should consider vaccine effectiveness in the study of disease transmission dynamics in the population.
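The definition of vaccine efficacy given above (the percentage reduction in the attack rate of the vaccinated cohort compared to the unvaccinated cohort) can be written as a one-line calculation; the trial numbers here are hypothetical:

```python
# Hypothetical field-trial attack rates (not data from the text):
attack_rate_unvaccinated = 0.05  # ARU: 5% of unvaccinated participants infected
attack_rate_vaccinated = 0.01    # ARV: 1% of vaccinated participants infected

# Vaccine efficacy = percentage reduction in attack rate: (ARU - ARV) / ARU
efficacy = (attack_rate_unvaccinated - attack_rate_vaccinated) / attack_rate_unvaccinated

print(f"{efficacy:.0%}")  # 80%
```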
While neither vaccine efficacy nor effectiveness is measured with respect to time, the effect of a vaccine in individuals may change over time. This is reflected in the waning of vaccine-induced immunity at the individual level and, as a result, the decline of herd immunity in the population. While there are various ways of including waning immunity in a model, it is important to note that a decline of immunity at the individual level over time is not equivalent to a decrease in vaccine efficacy or effectiveness.
Discussion
Infectious disease modeling is an important epidemiological tool to inform strategies for disease control and prevention. Tracing its historical roots from the pioneering work of Daniel Bernoulli on smallpox in the 1760s (34) to the classical compartmental approach of Kermack and McKendrick in the 1920s (35,36), modeling has evolved to incorporate demographic, geographic, and individual level characteristics, in addition to the most current knowledge of epidemiology, immunology, vaccines and drugs, and other public health interventions. This evolution is typified by advanced modeling and computational technologies, including metapopulation (37,38), network (39), and agent-based simulation models (40,41). From an epidemiological perspective, there are two important considerations that underlie the use of modeling tools, including disease progression-related terms (i.e., asymptomatic, presymptomatic, and symptomatic) and disease transmission-related terms (i.e., latent, infectious, and noninfectious) (12). Given the complexity of the newer generation of models that comes with their flexibility, a standardized lexicon of terms can play an important role in understanding their outcomes and establishing effective communication within and between disciplines involved in infectious disease management.
While a number of studies exist that aim to clarify and define modeling terms and methodologies in the context of infectious diseases (3,4,6,7) to the best of our knowledge, this is the first attempt to develop a glossary of terms through the expertise of a virtual community of practice. This community allowed for an increased accessibility and participation by international participants, as well as easy and convenient access to the ongoing discussions. This initiative serves a larger goal envisioned for mod4PH, that is, to develop an international capacity and unified infrastructure that is capable of informing complex decision-making and improving health practices through the use of quality data, evidence, and scientific knowledge. The diversity of expertise in this community can help identify appropriate methods and modeling tools that can be used to assess the results and their comparability, share information on how models can best be used to inform policy, and enhance knowledge generation and translational activities in public and population health. Furthermore, this interdisciplinary nexus of constructive dialog can expand the training and research capacity beyond the traditional boundaries in academics and can enable consensus building and use of knowledge in a wider scientific community and international audience.
The diversity of the individuals in a community of practice does not come without its limitations. Various disciplines and backgrounds often use terms differently, and the lack of a common jargon-free language makes the process and intragroup discussions especially challenging. Furthermore, technical barriers to online forums (e.g., restrictions on the number of characters per post) may limit the extent of discussion that can take place in the platform. To improve communication between the mod4PH members, we implemented a convergent discussion process, enabling the discussion topics to remain active following their initiation in the online forum. This form of interdisciplinary engagement will ensure that, as our knowledge and understanding of infectious disease mechanisms enhances, discussion topics are improved and definitions of pertinent terms and their use in modeling efforts are adapted and informed.
The resulting glossary of terms from the current initiative of mod4PH is published as an online resource by the NCCID (10). As the usage and definition of the terms discussed here evolve, the online glossary will be updated to mirror the necessary changes and ensure that it remains a current reference for infectious disease modeling as an important tool that is increasingly applied to effectively respond to public health threats. This initiative is concordant with a key element of the 2016 "World Health Organization Research and Development Blueprint" for infectious diseases with epidemic potential (42), which calls for the international community to invest to improve its collective ability to respond to new threats and to prepare itself with a novel research and development paradigm to address future epidemics of newly emerging or re-emerging diseases.
While this study serves to bridge the gaps in modeling efforts and forge strong links between theory, policy, and practice, there remain many terms for ongoing discussion toward standardizing terminology, which are integral to the application of modeling outcomes to policy decision-making. We hope to use this study as a starting point to foster bidirectional communication between the involved disciplines, and improve the utility of modeling as an invaluable tool in the fight against persistent and emerging infectious diseases.
Author's Note
The authors write for mod4PH (Modeling for Public Health).
Author Contributions
RM and SM conceived the study and wrote the first draft of the manuscript. AS, JA, JH, AH, DS, EG, MH-B, HI-K, and JL contributed to the discussion, materials, and reviewing the paper.
Thiopyrano[2,3-d]Thiazoles as New Efficient Scaffolds in Medicinal Chemistry
This review presents the up-to-date development of fused thiopyranothiazoles, which comprise one of the classes of thiazolidine derivatives. Thiazolidine and thiazolidinone-related compounds are among the most widely studied heterocycles from a medicinal chemistry perspective. From the chemical point of view, they are perfect heterodienes for the hetero-Diels–Alder reaction with a variety of dienophiles, yielding thiopyranothiazole scaffolds regio- and diastereoselectively. The annealing of the thiazole and thiopyran cycles into a condensed heterosystem is a precondition for the "centers conservative" creation of the ligand-target binding complex and can promote potential selectivity to biotargets. The review covers possible therapeutic applications of thiopyrano[2,3-d]thiazoles, such as anti-inflammatory, antibacterial, anticancer as well as antiparasitic activities. Thus, thiopyrano[2,3-d]thiazoles may be used as powerful tools in the development of biologically active agents and drug-like molecules.
Sci. Pharm. 2018, 86, x 2 of 24

Thiopyrano[2,3-d]thiazoles lack the above-mentioned Michael acceptor properties (Figure 1) [15][16][17]. The combination of thiazole and thiopyran cycles in a condensed heterosystem is a precondition for the creation of "centers conservative" of the ligand-target binding complex and promotes potential selectivity to biotargets. Considering these arguments, the directed search for new chemotherapeutic agents among thiopyrano[2,3-d]thiazole derivatives is a justified and promising direction in modern medicinal chemistry. In this review, we have tried to systematize the data on the chemistry and pharmacology of thiopyrano[2,3-d]thiazoles from the perspective of medicinal and pharmaceutical chemistry.

Hetero-Diels-Alder Reaction as a Key Approach for the Synthesis of Thiopyrano[2,3-d]Thiazole Derivatives

The most effective approach to the design of the thiopyrano[2,3-d]thiazole system is the hetero-Diels-Alder reaction. This approach was first described by I.D. Komaritsa [18] and N.A. Kassab et al. [19,20], who successfully used 5-arylidene-4-thioxo-2-thiazolidinones (5-arylideneisorhodanines) and 5-arylidene-2,4-thiazolidinedithiones (5-arylidenethiorhodanines) as heterodienes. These reagents contain an α,β-unsaturated thiocarbonyl fragment similar to 1-thio-1,3-butadiene, which accounts for their high reactivity in [4+2]-cycloaddition reactions (Figure 2). According to molecular orbital theory, the Diels-Alder reaction is based on the overlap of the diene's HOMO and the dienophile's LUMO. An important condition for this reaction is a strong dienophile with electron-acceptor properties, which decreases the energy difference between the diene's HOMO and the dienophile's LUMO. For these reasons, the reactions are highly regioselective and form products in accordance with molecular orbital theory.
Currently, the list of dienophiles for the synthesis of thiopyrano[2,3-d]thiazole derivatives has expanded significantly. Thus, the use of cinnamic acids [26] and their amides [27], aroylacrylic [28] and arylidene pyruvic [29] acids, as well as dimethyl acetylenedicarboxylate [30], propiolic acid and its ethyl ester [26], acrolein [31], 2-norbornene [15] and 5-norbornene-2,3-dicarboxylic acid imides [16] as dienophiles allowed the synthesis of new thiopyrano[2,3-d]thiazoles 8-15 as promising biologically active compounds based on the "thiazolidinone" matrix (Scheme 4). It should be noted that the presence of chiral centers in the thiopyrano[2,3-d]thiazole cycle gives rise to certain stereochemical features of the hetero-Diels-Alder reaction. This issue has become the subject of intense study in view of current trends in organic and medicinal chemistry. It was found that the above-mentioned [4+2]-cycloadditions are regio- and diastereoselective. The reaction of 5-arylideneisorhodanines with 2(5H)-furanone yields mixtures of endo/exo adducts 16,17 (Scheme 5). Considering the moderate diastereoselectivity of the process, the reaction can occur through endo or exo transition states, resulting in different positions of the protons at C-8 of the core heterocycle. Thus, the endo transition state leads to the anti configuration, while the exo geometry results in the syn configuration of H-8. The endo and exo adducts can be separated by column chromatography [32].
The Michael Reaction and Related Processes in the Synthesis of Thiopyrano[2,3-d]Thiazoles
The Michael reaction is one more effective approach to the synthesis of thiopyrano[2,3-d]thiazoles (Scheme 15). Thus, the interaction of arylmethylene malononitriles with 3-substituted isorhodanines in absolute ethanol in the presence of triethylamine gave 5-amino-2-oxo-7-phenyl-3,5,6,7-tetrahydro-2H-thiopyrano[2,3-d]thiazole-6-carbonitriles 53 [42]. When studying the peculiarities of the Michael addition of bicyclic 5-arylideneiso(thio)rhodanines 56 with malononitrile, the bis-thiopyrano[2,3-d]thiazole derivative 58 was obtained, which had earlier been synthesized in the reaction of 1,4-bis-(2,2′-dicyanovinyl)-benzene 57 with two equivalents of isorhodanine. In the second case, formation of derivative 58 occurred as a two-stage process including an initial Michael reaction with further cyclization of the intermediate by the attack of the cyano group by the mercapto group of the thiazole cycle (Scheme 17) [38]. Unexpectedly, Zhang and coauthors obtained the thiopyranoid scaffold 60 (Scheme 18) while exploring the divergent organocatalytic Michael-Michael-aldol cascade reaction of isorhodanine with α,β-unsaturated aldehydes. While the same conditions in the reactions of thiazolidinedione and rhodanine with enals led to spiro compounds, in the case of isorhodanine a Michael cyclization took place. Optimizing the reaction conditions, the authors used toluene as the medium and organic catalyst 59 at room temperature [44].
Synthesis of Polycondensed Thiopyrano[2,3-d]Thiazole Derivatives as Potentially Biological Active Compounds
The tandem and "domino" processes based on the [4+2]-cycloaddition reaction are a powerful and effective tool in the synthesis of thiopyrano[2,3-d]thiazole derivatives. This type of reaction allows the synthesis of structurally complex molecules with high selectivity, while the consumption of solvents, reagents, adsorbents and energy is significantly reduced compared with traditional multistage synthetic approaches. Moreover, most of the products of tandem and "domino" reactions have drug-like structures and may possess interesting pharmacological effects, which is an important point in the modern drug development process.
Domino Reactions as a Systematic Approach to the Synthesis of Fused Thiopyrano[2,3-d]Thiazoles
In addition to tandem reactions, domino reactions also play an important role in the synthesis of thiopyrano[2,3-d]thiazoles of complex structure. A domino reaction involves two or more transformations, which result in the formation of bonds (usually C-C bonds) and occur under the same reaction conditions without adding new reagents and/or catalysts. In this process the subsequent reactions take place as a consequence of the functionality formed in the previous step [52].
Biological Activity of Thiopyrano[2,3-d]Thiazole Derivatives
One of the efficient and frequently used directions in the search for new active compounds is based on the principle of annealing privileged structures into condensed systems. This approach involves the combination of different heterocyclic pharmacophores in one molecule and can be successfully illustrated by thiopyrano[2,3-d]thiazoles. Taking into account that thiopyrano[2,3-d]thiazole derivatives are cyclic isosteric mimetics of 5-ene-4-thiazolidinones without the typical Michael acceptor properties, the study of the possible biological activity of these compounds is of great interest.
Antituberculosis activity was also established for this class (compound 98, Figure 5), combined with low acute toxicity [17]. High anticancer activity was identified in the series of 3,7-dithia-5,14-diazapentacyclo[9.5.1.0 2,10 .0 4,8 .0 12,16 ]heptadecenes. The most active were the hit-compounds 114 and 115; compound 114 selectively inhibited the growth of the leukemia cell lines CCRF-CEM (log GI50 = −6.40) and SR (log GI50 = −6.06) [16]. 2-Oxo-7-phenyl-3,5,6,7-tetrahydro-2H-thiopyrano[2,3-d]thiazole-6-carbaldehyde 116 showed a high level of antimitotic activity against leukemia with mean GI50/TGI values of 1.26/25.22 µM [31].

One of the promising and quite new directions of thiazolidinone derivative investigations is the search for potent anti-parasitic agents, namely compounds exhibiting antitrypanosomal activity. Trypanosomiasis belongs to the so-called world's neglected diseases caused by Trypanosoma spp. [64]. Among the spiro thiopyrano[2,3-d]thiazole 117 derivatives, an active compound inhibiting the growth of Trypanosoma brucei brucei and Trypanosoma brucei gambiense (the causative agent of African trypanosomiasis) with IC50 values of 0.26 µM and 0.42 µM, respectively, was identified [48]. Of interest are the dual anti-leukemic (log GI50 = −5.16, −5.59) and trypanocidal effects observed for thiopyranothiazole 118, bearing a norbornane moiety, which may be used for establishing molecular modes of action for this class of compounds (Figure 8) [63].

Summarising all the above, fused thiopyranothiazoles can be used as a source of new antibacterial as well as antiviral agents. They also inhibited parasite growth. These results correlate with the established anticancer profiles of the thiopyranothiazoles. Moreover, such fused heterocycles can be investigated as potent non-steroidal anti-inflammatory agents. Some structure-activity relationships are outlined in Figure 9.
Conclusions
The efficient approaches to the synthesis of thiopyranothiazole scaffolds are outlined in this review. One of the most studied synthetic protocols for thiopyranothiazoles is the hetero-Diels-Alder [4+2]-cycloaddition, a rather fast and efficient method that provides good yields and stereoselectivity of the products. The tandem processes based on hetero-Diels-Alder and Michael reactions used for the synthesis of thiopyrano[2,3-d]thiazoles have also been discussed. In contrast to the well-described synthetic routes to thiopyranothiazoles, the biological activity of these derivatives has not been studied as extensively. Nevertheless, they are considered 5-ene-4-thiazolidinone synthetic biomimetics that retain the pharmacological profile without displaying Michael acceptor properties. Among the established biological activities of the thiopyrano[2,3-d]thiazole derivatives, the anti-inflammatory, antibacterial, anticancer and antiparasitic activities are the most prominent and need further in-depth studies. Considering all the above, the directed search for new drug-like molecules and possible chemotherapeutic agents among thiopyrano[2,3-d]thiazole derivatives is a justified and promising direction in medicinal chemistry. Moreover, annealing of the thiazolidine core into thiopyranothiazole analogs is used as one of the directions of molecular optimization to decrease toxicity and/or avoid Michael acceptor properties.
Author Contributions: R.L. conceived and designed the review; O.R. and A.L. analyzed the literature data; R.L. and A.K. wrote the paper. All authors read and approved the final manuscript.
How does the structure of data impact cell–cell similarity? Evaluating how structural properties influence the performance of proximity metrics in single cell RNA-seq data
Abstract Accurately identifying cell-populations is paramount to the quality of downstream analyses and overall interpretations of single-cell RNA-seq (scRNA-seq) datasets but remains a challenge. The quality of single-cell clustering depends on the proximity metric used to generate cell-to-cell distances. Accordingly, proximity metrics have been benchmarked for scRNA-seq clustering, typically with results averaged across datasets to identify a highest performing metric. However, the 'best-performing' metric varies between studies, with the performance differing significantly between datasets. This suggests that the unique structural properties of an scRNA-seq dataset, specific to the biological system under study, have a substantial impact on proximity metric performance. Previous benchmarking studies have omitted to factor the structural properties into their evaluations. To address this gap, we developed a framework for the in-depth evaluation of the performance of 17 proximity metrics with respect to core structural properties of scRNA-seq data, including sparsity, dimensionality, cell-population distribution and rarity. We find that clustering performance can be improved substantially by the selection of an appropriate proximity metric and neighbourhood size for the structural properties of a dataset, in addition to performing suitable pre-processing and dimensionality reduction. Furthermore, popular metrics such as Euclidean and Manhattan distance performed poorly in comparison to several lesser-applied metrics, suggesting that the default metric for many scRNA-seq methods should be re-evaluated. Our findings highlight the critical nature of tailoring scRNA-seq analysis pipelines to the dataset under study and provide practical guidance for researchers looking to optimize cell-similarity search for the structural properties of their own data.
Introduction
Single-cell RNA-sequencing (scRNA-seq) methods provide a means to investigate the heterogeneity of complex cell populations. High-resolution transcriptional profiles in scRNA-seq data can be used to discover signature genes and their expression that denotes specific cellular processes [1], states [2] and types [3]. Proximity metrics, such as Euclidean distance, are used to measure the cell-cell similarity of these transcriptional profiles, from which clustering algorithms attempt to identify subpopulations of cells within the dataset [4][5][6].
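As a minimal sketch of what a proximity metric computes, the pure-Python example below applies three common metrics to two hypothetical cells' expression vectors. Note that the correlation-based metric, unlike the true distances, treats a cell and a scaled copy of it as identical:

```python
import math

def euclidean(a, b):
    # True distance in gene-expression space
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def pearson_distance(a, b):
    # 1 - Pearson correlation: small when profiles co-vary, regardless of scale
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return 1 - cov / (sd_a * sd_b)

cell_a = [0, 3, 5, 0, 2]   # hypothetical counts for 5 genes
cell_b = [0, 6, 10, 0, 4]  # the same profile at twice the sequencing depth

print(round(euclidean(cell_a, cell_b), 3))          # 6.164
print(manhattan(cell_a, cell_b))                    # 10
print(round(abs(pearson_distance(cell_a, cell_b)), 3))  # 0.0 (perfectly correlated)
```

This scale-invariance is one reason metric choice interacts with pre-processing decisions such as normalization.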
Cluster analysis of scRNA-seq data is challenging because of the way scRNA-seq data is structured. A primary example is the high rate of dropouts resulting in sparse and noisy datasets [7]. When paired with the capacity to measure thousands of features per cell, this sparsity results in increasingly high-dimensional (HD) data spaces with unique properties and limitations [8]. Furthermore, common clustering algorithms for scRNA-seq perform best when there are discrete groups of cells present in the data [4]. While these discretely structured datasets do exist (e.g. terminally differentiated cell-types) [9][10][11], datasets of continuous structure are also common. Continuously structured datasets are composed of contiguous groupings of cells which experience multifaceted gradients of gene expression, encompassing dynamic processes such as embryonic development [12,13] and cell differentiation [14,15]. Heiser and Lau [16] identified that a dataset's structure is the primary determinant of dimensionality reduction (DR) performance, finding poorer preservation of structure in discrete datasets than in continuous ones. The assumption of discrete cell types in scRNA-seq clustering also poses challenges for identifying rare cell populations because rare cells may differ from more abundant, stable cell populations by only a small number of genes [17][18][19]. Despite their low abundance, rare cell populations are critically important because they often are highly specialized cell states or sub-types and therefore provide valuable insights into core processes such as differentiation, migration, metabolism and cancer [20][21][22][23]. It is also thought that the origin of a disease may be sourced to a subpopulation of cells or perhaps even a single cell. While this claim remains under debate, it emphasizes the importance of being able to confidently capture rare cell populations for clinical applications of scRNA-seq [24].
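The sparsity property discussed above is often summarized as the fraction of zero entries in the cells-by-genes count matrix; a toy example with a small hypothetical matrix:

```python
# Toy cells x genes count matrix (hypothetical values); zeros arise both
# from true absence of expression and from dropout events.
matrix = [
    [0, 5, 0, 0, 2],
    [1, 0, 0, 0, 0],
    [0, 0, 3, 0, 0],
]

total_entries = sum(len(row) for row in matrix)
zero_entries = sum(row.count(0) for row in matrix)
sparsity = zero_entries / total_entries

print(f"{sparsity:.2f}")  # 0.73
```

Real scRNA-seq matrices commonly reach far higher sparsity levels, which is one of the structural properties varied in the evaluation below.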
Similarities between cells based on gene expression are assessed using a proximity metric, and this step forms the basis for all clustering algorithms. However, while the performance of clustering methods has been evaluated extensively with respect to structural properties of data [4,5,[25][26][27][28][29][30][31][32][33][34][35], evaluations of which proximity metric to choose have remained limited, often producing varied recommendations and lacking key design considerations. For example, Skinnider et al. [36] recommend proportionality-based metrics, whilst Kim et al. [37] recommend correlation-based metrics, specifically Pearson. However, Sanchez-Taltavull et al. [38] recommend Bayesian correlation over Pearson. Despite the different findings of previous works with respect to specific proximity metrics, they are largely in agreement that metric performance is highly dataset-dependent [4,37,[39][40][41]. This conclusion remains unworkable, however, as the specific structural properties of the scRNA-seq datasets included in these evaluations are rarely addressed in detail or evaluated in a systematic manner.
Consequently, our study aims to address the important question of how the properties of scRNA-seq datasets influence the performance of proximity metrics (including true distance, correlation, proportionality, binary and dissimilarity measures) in scRNA-seq cell clustering. To the best of our knowledge, such an investigation has yet to be performed, and its absence may be a reason why previous attempts have been more limited and unable to yield actionable conclusions. Our study evaluates the impact of 17 different proximity metrics on clustering performance for datasets that are Continuous and Discrete. Levels of cell-rarity, sparsity and dimensionality are varied to reflect the variability of scRNA-seq data. Our findings demonstrate that there are clear differences in the performance of these metrics depending on the structure of the data. Therefore, accounting for structural properties of the dataset when planning and executing an analysis pipeline leads to substantial improvements in performance. We believe similar performance gains may be possible in other parts of the analysis pipeline that depend on a proximity metric, such as DR and trajectory inference. Consequently, we provide readers with practical guidelines for selecting a preferred proximity metric and neighbourhood size with respect to the structural properties of their own datasets. Furthermore, our evaluation framework is available as a python package, scProximitE, to allow users to evaluate the performance of proximity metrics for their own datasets and structural properties of interest.
scRNA-seq data collection
A representative dataset was constructed for the Discrete structure from the CellSIUS benchmarking dataset [32,42] and included cells from eight human cell lines. The Continuous structure category was represented by a subset of five erythrocyte differentiation cell types from the Fetal Liver Haematopoiesis dataset from Popescu et al. [43,44]. For each dataset, the provided cell-type annotations were used as the ground truth to evaluate clustering performance (see Supplementary-Primary Analysis).
Within the Continuous and Discrete datasets, a subclass was defined to reflect the balance of cell-type proportions. A dataset is Abundant if the majority of cell populations are present at a relatively high level, specifically, a proportion of ≥5% of the total cell number. The first subset, Discrete-Abundant, contains seven cell lines at proportions of low (5.4%) to high (32%) abundance, and one moderately rare population (2%), whilst in the Continuous-Abundant dataset, all five cell populations were present at high proportions (20%). In contrast, a dataset is Rare if the majority of cell populations are at proportions of <5%. The Discrete-Rare dataset comprises six rare cell populations (0.08-3.14%) and two highly abundant cell populations (40.15 and 50.21%). The Continuous-Rare dataset consists of three rare cell types present at proportions between 0.075 and 2.5%, and two highly abundant populations (42%, 55%) (see Supplementary-Primary Analysis, Supplementary Table S1, see Supplementary Data available online at https://academic.oup.com/bib).
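The Abundant/Rare subclass definition above can be expressed as a small helper. This is an illustrative sketch (the function name is ours, not from the paper); the 5% threshold and the "majority of populations" rule follow the definitions in the text:

```python
from collections import Counter

def classify_balance(labels, threshold=0.05):
    """Classify a dataset as 'Abundant' or 'Rare' from its cell-type labels.

    Following the subclass definitions in the text: a dataset is Abundant
    when the majority of its cell populations each make up >= `threshold`
    (5%) of all cells, and Rare otherwise.
    """
    counts = Counter(labels)
    n = len(labels)
    # Count populations at or above the abundance threshold.
    abundant = sum(1 for c in counts.values() if c / n >= threshold)
    return "Abundant" if abundant > len(counts) / 2 else "Rare"
```

For example, a dataset with populations at 40%, 40% and 20% is classified Abundant, whereas one at 90%, 4%, 3% and 3% is classified Rare.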
scRNA-seq data simulations
Simulated datasets are used to evaluate how structural properties influence proximity metric performance, including sparsity and cell-population imbalance. The simulated datasets, produced with PROSST (v1.2.0) [45], follow a topology of four differentiation trajectories diverging from a single origin state (detailed in Supplementary-Primary Analysis). This dataset in its original form represents the Continuous-Abundant Simulated dataset, whilst a subset containing only the origin state and the endmost population from each differentiation path represents the Discrete-Abundant Simulated dataset (Supplementary Figures S1 and S2, see Supplementary Data available online at https://academic.oup.com/bib).
To further explore the influence of imbalanced cell-type proportions on metric performance, two structural subclasses, Rare and Ultra-Rare, were created from the Continuous-Abundant and Discrete-Abundant simulated datasets. For the Rare datasets, multiple cell types are present at proportions p where 1% < p < 5%, while the Ultra-Rare datasets contain multiple cell types where p < 1% (see Supplementary Table S2, see Supplementary Data available online at https://academic.oup.com/bib). The final structural property of interest in the study is dataset sparsity. Starting at 46-50% sparsity, two additional levels, moderate (68-71%) and high (89-90%) sparsity, were produced for each of the six datasets by adding zeros using a Gaussian distribution (Supplementary Table S3, see Supplementary Data available online at https://academic.oup.com/bib).
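One plausible reading of "adding zeros using a Gaussian distribution" is to draw a per-cell dropout rate from a Gaussian centred on the extra sparsity required and zero out that fraction of each cell's nonzero counts. The sketch below (NumPy only; function name and exact procedure are our assumptions, not the paper's code) illustrates this idea:

```python
import numpy as np

def add_dropout(X, target_sparsity, sd=0.05, seed=0):
    """Raise the zero-fraction of a cells-by-genes count matrix toward
    `target_sparsity`.

    Per-cell dropout rates are drawn from a Gaussian centred on the extra
    sparsity needed (one plausible reading of the paper's 'adding zeros
    using a Gaussian distribution'; the exact published procedure may differ).
    """
    rng = np.random.default_rng(seed)
    X = X.copy().astype(float)
    current = float(np.mean(X == 0))
    # Fraction of the remaining nonzero entries that must be dropped.
    extra = max(0.0, (target_sparsity - current) / max(1e-12, 1 - current))
    rates = np.clip(rng.normal(extra, sd, size=X.shape[0]), 0, 1)
    for i, r in enumerate(rates):
        nz = np.flatnonzero(X[i] != 0)
        drop = rng.choice(nz, size=int(round(r * nz.size)), replace=False)
        X[i, drop] = 0
    return X
```

Applied to a dense matrix with a 0.7 target, the resulting zero-fraction lands near 70%, varying slightly from cell to cell.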
scRNA-seq data quality control and normalization
Raw count matrices were filtered to remove (i) cells with nonzero gene expression for <200 genes, (ii) cells with >10% of their total counts from mitochondrial genes and (iii) genes expressed in <10% of cells. The resulting cell and gene numbers for each dataset post-processing are in Supplementary Table S3 (simulations, see Supplementary Data available online at https://academic.oup.com/bib) and Supplementary Table S4 (CellSIUS and FSH, see Supplementary Data available online at https://academic.oup.com/bib). Gene expression measurements for each cell were normalized by total expression, multiplied by a scale factor of 10 000 and log_e-transformed with a pseudo-count of one. All data processing steps, including filtering, normalization and identification of highly variable genes, were performed using Scanpy (v1.8).

Proximity metrics

Hamming, Yule, Kulsinski and Jaccard Index are computed on binary vectors. To binarise the count matrices, 1 maps to genes with ≥1 count, and 0 maps to genes with zero expression. Several of the evaluated dissimilarities are derived from correlations: Pearson, Spearman, Kendall and Weighted-Rank. As scRNA-seq data is relative rather than absolute, two proportionality-based metrics were included: Bray-Curtis, a measure of compositional dissimilarity between two samples, and Phi, which was found to perform well in scRNA-seq clustering [36]. Cosine measures the cosine of the angle between two vectors in multidimensional space. In addition to commonly applied metrics, several recent scRNA-seq metrics were also included. Given the sparse nature of scRNA-seq data, we evaluated the Zero-Inflated Kendall correlation (ZI-Kendall), an adaptation of Kendall's tau for zero-inflated continuous data. Additionally, we evaluated Optimal Transport (OT) distance with entropic regularization [47].
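The normalization and binarization arithmetic described above can be sketched in NumPy (a minimal stand-in for Scanpy's `pp.normalize_total` and `pp.log1p`; function names are ours):

```python
import numpy as np

def normalize_counts(X, scale=10_000):
    """Total-count normalize each cell (row) to `scale`, then log1p-transform.

    Mirrors the described pipeline: per-cell counts are divided by the
    cell's total, multiplied by 10 000 and log_e(x + 1)-transformed.
    (Scanpy's pp.normalize_total followed by pp.log1p performs the same steps.)
    """
    totals = X.sum(axis=1, keepdims=True)
    return np.log1p(X / np.maximum(totals, 1) * scale)

def binarize(X):
    """Map counts >= 1 to 1 and zeros to 0, as required by the binary
    metrics (Hamming, Yule, Kulsinski, Jaccard)."""
    return (X >= 1).astype(int)
```

For a cell with total count 100, a gene with 10 counts becomes log(1 + 1000); zeros stay zero under both transforms, which is why binarization preserves the dropout pattern the binary metrics operate on.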
Performance evaluation framework
ScRNA-seq datasets representing the structural classes of interest were pre-processed and then used to calculate a distance matrix for each proximity metric. For each distance matrix, k-nearest-neighbour (KNN) graphs were then computed, where each cell is connected to its k closest cells as determined by the input distance matrix (see Supplementary-Primary Analysis). To account for varying degrees of local structure, KNN-graphs were constructed for each proximity metric at multiple neighbourhood sizes: 3, 10, 30 and 50. The resulting graphs are provided as input to the Scanpy implementation of the Leiden algorithm [48].
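The distance-matrix-to-KNN-graph step can be sketched as follows. This is a simplified illustration using SciPy's `pdist` (scProximitE's actual implementation may differ in detail, and `knn_graph` is our own name):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def knn_graph(X, k=10, metric="euclidean"):
    """Build a symmetric KNN adjacency matrix from a chosen proximity metric.

    Each cell is connected to its k closest cells under `metric`, the same
    construction that is fed to the Leiden algorithm in the framework.
    """
    D = squareform(pdist(X, metric=metric))  # full cell-by-cell distance matrix
    n = D.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        # argsort puts the cell itself (distance 0) first; skip it, take next k.
        nbrs = np.argsort(D[i])[1:k + 1]
        A[i, nbrs] = 1
    return np.maximum(A, A.T)  # symmetrize to an undirected graph
```

Swapping `metric` (e.g. "cosine", "braycurtis", "correlation", "jaccard" on binarized data) is all that changes between the 17 evaluated configurations; the downstream clustering is identical.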
The Leiden algorithm identifies clusters as groups of cells that are more densely connected to each other than to the cells outside of the group, based on the KNN-graph [48]. Leiden is an unsupervised method with a resolution parameter that can be tuned to influence the number of communities detected. To enable accurate benchmarking, the resolution parameter was adjusted automatically until the number of clusters in the ground-truth annotations was returned, or until 1000 iterations had been attempted. To account for initialisation bias, 10 random seed values were generated, and clustering was repeated with each seed for each KNN-graph (see Supplementary-Primary Analysis).
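The automatic resolution adjustment can be sketched as a bisection over the resolution parameter. The text does not specify the search strategy, so bisection is our assumption; `cluster_fn` is a hypothetical wrapper around a Leiden run that returns the number of clusters found:

```python
def tune_resolution(cluster_fn, target_k, lo=0.01, hi=5.0, max_iter=50):
    """Bisect the resolution parameter until `cluster_fn` returns the
    ground-truth number of clusters, or the iteration budget is spent.

    `cluster_fn(resolution) -> n_clusters` wraps one clustering run; for
    Leiden, higher resolution yields more communities, which makes the
    relationship (approximately) monotone and bisection applicable.
    """
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        k = cluster_fn(mid)
        if k == target_k:
            return mid
        if k < target_k:
            lo = mid  # too few clusters: raise resolution
        else:
            hi = mid  # too many clusters: lower resolution
    return (lo + hi) / 2
```

In practice Leiden's cluster count is only approximately monotone in resolution, so a real implementation would also cap total attempts (the paper uses 1000) and accept the closest match.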
The performance of the individual clustering outputs for each KNN-graph was compared with ground-truth annotations and quantified using the Pair Sets Index (PSI) [49] implemented with genieclust (v1.0.0) [50] (see Supplementary-Primary Analysis). We also considered Adjusted Rand Index (ARI) [51] and Adjusted Mutual Information (AMI) [52] (see Supplementary-Primary Analysis) but PSI was the method of choice because any incorrect clustering of rare and abundant cell-populations affects this score equally. PSI has also been shown to be less sensitive to other clustering parameters such as the number of clusters and degree of cluster overlap [49]. PSI is a cluster validation metric based on pair-set matching and adjusted for chance, with a range of 0-1, where 0 indicates random partitioning whilst 1.0 represents perfect labelling with respect to ground truth annotations. The mean PSI across the clustering outputs was used to evaluate the neighbourhood size, k. Lastly, a mean PSI value was computed across the four neighbourhood sizes to summarize a proximity metric's performance on a dataset.
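To make the PSI-versus-ARI comparison concrete, the chance-adjusted Rand index can be computed from a contingency table as below (our own implementation for illustration; it matches the standard definition used by e.g. scikit-learn's `adjusted_rand_score`). Because every term is built from pair counts within clusters, large clusters dominate the score, which is why misclustered rare populations barely move ARI:

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Chance-adjusted Rand index computed from a contingency table.

    ARI weights agreement by cluster size (all terms are pair counts),
    unlike PSI's pair-set matching, which treats rare and abundant
    populations equally.
    """
    t = np.unique(labels_true, return_inverse=True)[1]
    p = np.unique(labels_pred, return_inverse=True)[1]
    C = np.zeros((t.max() + 1, p.max() + 1), dtype=int)
    for i, j in zip(t, p):
        C[i, j] += 1
    index = sum(comb(int(n), 2) for n in C.ravel())
    a = sum(comb(int(n), 2) for n in C.sum(axis=1))  # pairs within true classes
    b = sum(comb(int(n), 2) for n in C.sum(axis=0))  # pairs within predicted clusters
    total = comb(len(t), 2)
    expected = a * b / total
    max_index = (a + b) / 2
    return (index - expected) / (max_index - expected)
</n```

PSI itself is available via the genieclust package used in the study.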
Results
We developed our evaluation framework to assess how metrics performed based on properties relevant to scRNA-seq data (Figures 1 and 2). Specifically, the 17 proximity metrics were evaluated for four major types of scRNA-seq data structure: Discrete-Abundant, Discrete-Rare, Continuous-Abundant and Continuous-Rare. In addition to these structural classes, we evaluated the influence of (i) dimensionality, (ii) cell-population rarity, (iii) sparsity and (iv) neighbourhood density.
Comparisons to ground-truth cell annotations were assessed using PSI, ARI and AMI evaluation methods. We found the clustering score was dominated by the performance on Abundant populations, with little influence from Rare populations. For example, ARI and AMI scored a clustering output as near perfect on the Discrete-Rare dataset (0.97 and 0.91, respectively) despite six of the eight cell types being incorrectly clustered (Figure 3). Almost equivalent scores (ARI = 0.98, AMI = 0.96) were achieved by a clustering output where six of the eight cell types were accurately identified, showing the inability of these metrics to effectively distinguish clustering quality on datasets with substantial cluster-size imbalances. In comparison, PSI scored the second clustering result substantially higher (0.85) than the first (0.31).
Clustering performance of proximity metrics is dependent on the intrinsic structure of scRNA-seq datasets
We find that the capacity of proximity metrics to correctly identify similarities between cells varies significantly depending on the intrinsic structure of scRNA-seq data (Figure 4). On average, proximity metrics achieved higher clustering performance for the Discrete data structures than the Continuous ones (by 0.4 PSI on average) (Figure 4). Within these structures, greater performance was observed for Abundant datasets than for Rare (average increase of 0.34 PSI) (Figure 4). The magnitude of differences in clustering performance was larger between dataset structures than between metrics evaluated within the same structure. For example, the standard deviation (SD) across all metrics within the Discrete-Abundant structure was only 0.097, while the SD for Euclidean distance across the four data structures was 0.27 (Figure 4). Similar trends are observed for simulated datasets and an additional four case-study datasets (Supplementary Figure S3, see Supplementary Data available online at https://academic.oup.com/bib).
DR reliably improves clustering performance of proximity metrics in discretely structured datasets, but not continuously structured datasets
To evaluate how DR affects the performance of proximity metrics, we reduced the dimensionality by selecting the 2000 (HVG2000) and 500 (HVG500) most highly variable genes and compared their performance to the complete, HD datasets. Metrics were considered invariant between any two levels of dimensionality if there was <0.05 change in PSI. In the Discrete structures, the largest gains from DR were observed for the true distance metrics (Supplementary Figure S5, see Supplementary Data available online at https://academic.oup.com/bib). This indicates DR is of particular benefit to true distance metrics commonly applied in scRNA-seq analysis for discretely structured data.
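The HVG selection used for DR can be sketched with a simple variance ranking (a minimal stand-in; Scanpy's `pp.highly_variable_genes` uses a dispersion-normalized variant of the same idea, and the function name here is ours):

```python
import numpy as np

def top_hvg(X, n_genes=2000):
    """Keep the `n_genes` columns (genes) with the highest variance
    across cells, returning the reduced matrix and the kept indices.

    A minimal sketch of highly-variable-gene selection; real pipelines
    typically normalize variance against mean expression first.
    """
    var = X.var(axis=0)
    keep = np.sort(np.argsort(var)[::-1][:n_genes])  # kept gene indices, in order
    return X[:, keep], keep
```

Running the same clustering on `top_hvg(X, 2000)[0]` and `top_hvg(X, 500)[0]` reproduces the HVG2000/HVG500 comparison described above.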
Despite substantial improvements in clustering performance due to DR, metrics in Discrete-Rare data structures have lower PSI values (<0.71) than Discrete-Abundant structures. Similarly, when evaluating the Continuous data structures, the metrics with the largest improvement due to DR had overall lower PSI values than the Discrete structure: PSI <0.67 for Continuous-Abundant, and <0.34 for Continuous-Rare (Figure 5A). Accordingly, the trends of poorer clustering performance with Continuous and/or Rare structure that are observed at HD largely remain after DR.
We next identify 'robust' metrics, characterized by a high level of performance and an invariant PSI across HD and HVG conditions. Such metrics may be an attractive option when performing DR is not feasible. We defined a high-performance metric as one with a PSI at HD within 0.05 of the maximum PSI achieved at either level of DR within the corresponding dataset. Although some metrics in the Continuous structures were invariant to DR (Supplementary Figure S6, see Supplementary Data available online at https://academic.oup.com/bib), none were classified as high performing, indicating that DR has a greater influence on datasets with continuous structure and therefore is likely to be a necessary step prior to clustering.
Given the limited number of metrics showing invariance to DR on continuously structured data, we explored whether the extent of reduction applied (HVG2000 versus HVG500) influenced metric performance. Variable performance between the two HVG conditions was observed in approximately half the proximity metrics in Continuous-Abundant data (8/17) and a quarter in Continuous-Rare (4/17) (Figure 5B). Euclidean and Manhattan were the only metrics with a notable reduction in performance at HVG500 relative to HVG2000 for both continuous datasets. This contrasted with several other metrics which showed stronger performance with increasing DR. In comparison, 16 of the 17 metrics in the Discrete datasets exhibited robust clustering performance between HVG2000 and HVG500, with the outlier being Kulsinski (Figure 5B). This suggests that in discretely structured data, equivalent information may be captured with 500 genes as with 2000 for most metrics, and that reduction beyond HVG2000 provides no additional benefit. Conversely, for continuously structured data there may be a narrower parameter range at which the benefits of DR are balanced against the loss of relevant structural information.
All proximity metrics are sensitive to increasing rarity of cell-populations
To investigate if metric performance is only impacted beyond a certain rarity threshold, we generated Abundant (all populations >5%), Rare (multiple populations at >1 to <5%) and Ultra-Rare (multiple populations at <1%) datasets from simulated Continuous and Discrete data structures with moderate sparsity. Performance was substantially reduced between the Abundant and Rare datasets for Discrete (mean change in PSI = 0.29, SD = 0.09) and Continuous (mean change in PSI = 0.24, SD = 0.17) structures, indicating that cell populations at proportions between 1 and 5% are already sufficiently rare to challenge proximity metrics (Figure 6). Between the discretely structured Rare and Ultra-Rare datasets, performance was further reduced by a mean of 0.23 (SD = 0.07) across all metrics, with a maximum PSI of 0.49. There was no significant difference in PSI from the Rare to Ultra-Rare datasets of Continuous structure (mean change in PSI = 0.04, SD = 0.03). This is unsurprising given that the metrics already displayed very poor performance for identifying Rare cell-types (≤0.41 PSI) (median PSI = 0.28, SD = 0.08). Notably, while Bray-Curtis and Cosine were among the top five performers for both Discrete and Continuous data structures based on PSI in Ultra-Rare datasets (Figure 6), all proximity metrics showed poorer performance with increasing rarity of cell-populations.
Our findings suggest that a threshold of 'rarity' (cell-population proportion) at which metric performance is suddenly impacted does not exist. Rather, we see a continuing decline in performance for cell populations of decreasing proportions relative to the total dataset. We show the metrics' capacity to capture structural information is particularly challenged in datasets comprised of cell populations representing continuous processes and datasets containing rare cell populations.
Most metrics have poorer performance as sparsity increases, but under-utilized metrics show greater robustness
Sparsity is one of the greatest challenges when working with scRNA-seq data, and hence it is important to evaluate performance against this structural property. We therefore evaluated our Abundant and Rare simulated scRNA-seq datasets at three sparsity levels: low (46-50%), moderate (68-71%) and high (89-90%) (Methods). We defined a metric as robust to sparsity if the change in PSI between sparsity conditions was ≤0.05, sensitive if the change was at or above the 75th percentile of changes for all metrics in that structural class, and moderately sensitive if between these thresholds (Supplementary Figure S8, see Supplementary Data available online at https://academic.oup.com/bib).
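The robust/sensitive/moderately-sensitive classification above translates directly into code. The sketch below (our own helper, using the thresholds stated in the text) takes each metric's PSI drop between two sparsity levels:

```python
import numpy as np

def classify_sparsity_sensitivity(delta_psi):
    """Label each metric's response to added sparsity.

    `delta_psi` maps metric name -> drop in PSI between two sparsity
    levels within one structural class. Per the text: robust if the change
    is <= 0.05; sensitive if it is at or above the 75th percentile of all
    changes in the class; moderately sensitive otherwise. (If a change
    satisfies both cut-offs, robust takes precedence here.)
    """
    cutoff = np.percentile(list(delta_psi.values()), 75)
    labels = {}
    for metric, d in delta_psi.items():
        if d <= 0.05:
            labels[metric] = "robust"
        elif d >= cutoff:
            labels[metric] = "sensitive"
        else:
            labels[metric] = "moderately sensitive"
    return labels
```

Note that the "sensitive" label is relative (a percentile within the structural class) while "robust" is absolute, so the same PSI drop can earn different labels in different classes.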
Similar to DR, proximity metrics are influenced by sparsity to a greater degree on continuously structured data than on discretely structured data. Encouragingly, a substantial number of proximity metrics demonstrated robust performance when sparsity was increased from low to moderate for the Discrete-Abundant (11/17) and Rare (7/17) datasets (Figure 7). Conversely, no metrics were identified as robust for Continuous-Abundant, and only Bray-Curtis and Pearson correlation in Continuous-Rare. Notably, these were also identified as robust metrics for the discretely structured datasets. Furthermore, Bray-Curtis, Cosine and Pearson correlation were consistently ranked among the top five metrics with the least sensitivity to sparsity for all structural conditions (Supplementary Figure S9, see Supplementary Data available online at https://academic.oup.com/bib). However, it should be noted that the maximum PSI for the Continuous-Rare dataset with moderate sparsity was only 0.41, indicating that the clustering performance of even the best-ranked metrics was poor for this structure.
Interestingly, performance of the true distance metrics (Euclidean, Manhattan, Chebyshev and Canberra) was more sensitive to sparsity than other proximity metrics (Figure 7). Our results suggest that Bray-Curtis, Cosine and Pearson correlation may be the preferred metrics when analysing datasets with moderate sparsity levels, versus the more common Euclidean and Manhattan distance.
Despite maintaining clustering performance at moderate sparsity, all 'robust' metrics drop substantially in performance when applied to high-sparsity data. Furthermore, at high sparsity, the performance for Abundant and Rare structures becomes equivalent in the Continuous dataset (maximum PSI 0.21) (Supplementary Figure S10, see Supplementary Data available online at https://academic.oup.com/bib). This indicates that insufficient information is present in highly sparse scRNA-seq data to enable the discrimination of contiguous cell-types, irrespective of cell-population abundance. The same trend is observed for the Discrete data, with the exception of Bray-Curtis, Cosine and Pearson correlation, which provide good clustering performance for Abundant data (≥0.8 PSI). Consequently, reduction of sparsity is a key factor in optimizing performance of proximity metrics on scRNA-seq data, with particular necessity for continuously structured data.
Dataset structure and sparsity are key factors in clustering parameter optimization
For clustering approaches based on KNN-graphs such as the Leiden algorithm, the neighbourhood size of the graph, k, affects the number and size of clusters identified. We investigated the impact of neighbourhood size by varying k (k = 3, 10, 30, 50, 100) and evaluating metric performance for each simulated data structure and sparsity condition. To identify metrics with the strongest performance across all neighbourhood sizes, we focused on metrics whose maximum PSI across all neighbourhood sizes was at or above the 75th percentile (Figure 8).
At low sparsity, proximity metrics achieved greater performance at small neighbourhood sizes (3, 10) in Rare datasets of both Discrete and Continuous structure, whilst performance on Abundant datasets was invariant (Figure 8). These trends are weaker at moderate sparsity, as performance becomes more metric-specific. However, at high sparsity, metrics show increased performance at larger neighbourhood sizes (30, 50, 100) in the Discrete datasets, although in Discrete-Rare, Cosine and Correlation continue to exhibit the greatest clustering performance at a neighbourhood size of 3. In the Continuous datasets, performance is consistently very poor regardless of neighbourhood size (<0.25 PSI). The inconsistent relationship between neighbourhood size and clustering performance at high sparsity further underlines the challenges associated with capturing structural information from highly sparse scRNA-seq datasets and reinforces the recommendation to reduce dataset sparsity.

Figure 7. Left: performance of proximity metrics identified as robust between low (50%) and moderate (70%) sparsity, given a threshold of ≤0.05 change in PSI. As no metrics met these criteria for the Continuous-Abundant dataset, the panel is blank. Right: performance of proximity metrics identified as sensitive between low and moderate sparsity, given a threshold of ≥75th percentile change in PSI. Points depict mean PSI of clustering performance from simulated datasets across neighbourhoods of k = (3, 10, 30, 50).
Summary and practical recommendations
Our findings have been summarized in a flowchart to provide practical guidance on how to select an appropriate metric (Figure 9). Overall, the diverse nature of the metrics evaluated was exemplified in their differing responses to the structural properties investigated. For example, Cosine is the highest-ranked metric for robustness to sparsity across all data structures (Figure 10A) but responded inconsistently to DR (Figure 10B). In contrast, Manhattan distance was robust to changes in dimensionality but is among the most sensitive metrics to even moderate sparsity.
When ranking metrics according to PSI at 30 neighbours only (the default value in Seurat), the top five ranked metrics remained the same for dimensionality, and the top four metrics for sparsity, albeit re-ordered (Supplementary Figure S11, see Supplementary Data available online at https://academic.oup.com/bib). This suggests that our results may be relevant even without parameter tuning. To further evaluate the reliability of these recommendations, our framework was re-run on a new representative dataset for each structural condition: Discrete-Abundant [53], Discrete-Rare [54], Continuous-Abundant [55] and Continuous-Rare [56] (Supplementary-Validation Case Studies, Supplementary Table S7, see Supplementary Data available online at https://academic.oup.com/bib). The top-performing proximity metrics and neighbourhood sizes for these new datasets consistently aligned with those recommended for datasets of those structural properties in Figure 9 (Supplementary Figure S12, see Supplementary Data available online at https://academic.oup.com/bib). Furthermore, our case-study analysis demonstrates the robustness of our recommendations to additional variables introduced with these new datasets: different species (human and mouse), multiple sequencing technologies (Drop-Seq, inDrops and 10x) and alternative pre-processing methods (scTransform [57]) (Supplementary-Validation Case Studies).
Discussion
Given the direct influence of cell clustering on downstream analysis of scRNA-seq data, evaluating the accuracy of clustering algorithms is an important research area. Previous studies have recognized the effect of proximity metric choice, when measuring cell-cell similarity, on clustering performance [36,37,39]. However, variable performance is reported for proximity metrics between datasets, making the recommendation of a specific metric impossible [39]. In response, we developed a framework to evaluate 17 proximity metrics with respect to core structural properties of scRNA-seq data, including sparsity, dimensionality, structure and rarity. Our findings demonstrate that greater care should be taken to select and fine-tune methods to suit the structural properties of the individual dataset. Consequently, we have provided practical guidance for researchers to optimize their cell-similarity search by investigating and acting on the structural properties of their own data.
Of the actions available, we identified reducing dataset sparsity as the most impactful factor for improving clustering performance (Figure 7), whilst DR via selection of highly variable genes also produced improvements in clustering performance for many metrics ( Figure 5). However, the variable results observed for continuously structured data indicate that the degree of DR must be tuned appropriately.
Selection of an appropriate neighbourhood size was essential for optimizing the performance of metrics to accommodate cell-balance properties (Figure 8). Notably, the greatest performance for Rare datasets was obtained with neighbourhood sizes 3 and 10, versus the default values of 20 and 30 in Scanpy and Seurat, respectively. This illustrates the importance of tuning parameters for a given dataset based on knowledge of the underlying system, rather than relying on default settings [58]. Similarly, the optimal parameters for DR methods have been shown to be a function of dataset-specific properties [16,[59][60][61], and we expect that this extends to other scRNA-seq methods.
We consistently identified cell-population structure to be one of the most influential properties, with substantially lower clustering performance for metrics in continuously structured datasets than discretely structured (Figure 4, Supplementary Figure S3, see Supplementary Data available online at https://academic.oup.com/bib). This has previously been identified as a shortcoming of clustering methods, and alternatives such as pseudo-time analysis [62] or soft clustering [63] have been proposed [4]. However, given that these recommended alternatives similarly rely on the calculation of cell-cell similarity, selection of an appropriate proximity metric is equally relevant. Additionally, performance was inferior in datasets with imbalanced cell-population proportions due to rare cell-types, as compared to the Abundant datasets (Figures 4 and 6). While we identified preferred dataset processing steps, proximity metrics and parameter values to improve performance on Rare datasets (Figure 9), we were unable to match the clustering performance of the Abundant datasets for either Discrete or Continuous structures.
It is worth highlighting that only by using a performance score which is independent of cluster size, such as the PSI, could the true extent of this effect from rare cell populations be revealed (Figure 3) [49]. It is likely that unsatisfactory clustering accuracy due to rare cell populations is similarly present in other comparative evaluations but masked when using evaluation scores such as ARI and AMI. For ARI and AMI, cluster evaluations are size-dependent, and thus the influence of misclassified rare cell populations on the overall score is greatly diminished [49,64,65]. Given that common approaches for data processing, normalization, feature selection and clustering were used during our study, these findings raise concerns regarding the current state of rare cell-type identification in scRNA-seq. An extension to our work would be to include specialized clustering methods developed for rare cell-populations, such as GiniClust [33], scAIDE [34] or CellSIUS [32]. However, if researchers are unaware of the presence of rare cell types in their data, they may not seek out such specialized methods. As such, there is a crucial need for greater integration of rare cell-type methods into popular scRNA-seq packages and standard analysis.
Euclidean distance is among the most commonly applied metrics in scRNA-seq. Despite this, when evaluated for robustness to sparsity and high dimensionality in our datasets, Euclidean and the other true distance metrics showed greater sensitivity relative to some lesser-known proximity metrics (Figure 7, Supplementary Figure S6, see Supplementary Data available online at https://academic.oup.com/bib). These results were not entirely unexpected, as true distance metrics can perform poorly as dimensionality and sparsity increase, leading to poorly defined nearest neighbours [66,67]. In line with this, we saw true distance metrics perform considerably better with the appropriate level of DR, at times even achieving maximum performance (Figure 5).
Our findings support previous studies which have similarly identified Euclidean as a poorly performing proximity metric in scRNA-seq [36,37,39]. In Kim et al. [37], correlation-based metrics outperformed Euclidean distance for clustering, which was attributed to the sensitivity of true distance metrics to scaling and normalization, whereas correlation-based metrics are invariant to these factors. Interestingly, Pearson and Kendall correlations, along with another scale-invariant metric, Cosine, were preferred metrics for the majority of structural conditions examined in our study. However, other scale-invariant metrics such as Spearman correlation did not show the same performance trends. Skinnider et al. [36] also found Euclidean performed poorly and suggested that, as scRNA-seq only yields relative gene expression rather than absolute, proportionality metrics such as Phi and Rho are more suitable [68]. Whilst Phi had moderate performance in our evaluation, it was outperformed by Pearson, Kendall and Cosine. However, another proportionality-based metric, Bray-Curtis, was a preferred metric for over half of the structural condition combinations evaluated.
Accordingly, in scenarios where cell-type annotations are unknown, users will have greater success identifying true cell groupings when using an alternative proximity metric that is suited to the structural properties of their dataset, as opposed to the default of Euclidean provided in most scRNA-seq analysis tools. Several clustering methods that use alternative metrics have already been shown to perform well for scRNA-seq data. For example, SC3 generates a consensus distance matrix derived from the Euclidean, Pearson and Spearman proximity metrics [69]. RaceID3 is a rare cell-type clustering method, which allows the user to select from a range of distance and correlation-based metrics [70]. Other methods have instead developed new metrics to measure cell-cell similarity, such as CIDER which recently proposed Inter-group Differential ExpRession (IDER) as a metric for their new clustering pipeline [31].
Our framework could be extended to include clustering methods beyond graph-based clustering. However, similar results were obtained for proximity metric clustering performance by Skinnider et al. [36] when they compared hierarchical and graph-based clustering, suggesting that our results may hold for other methods. As with clustering, many scRNA-seq DR methods rely on the calculation of cell-cell similarity with a proximity metric. To minimize the influence of additional proximity calculations on the downstream clustering result, we used a feature-selection approach when exploring this aspect of data structure. However, given the popularity of alternative DR methods in scRNA-seq pipelines, such as PCA [71], t-SNE [72] and UMAP [73], an interesting future direction would be to investigate approaches based on feature transformation. Furthermore, as these DR methods typically use Euclidean distance, the application of our framework to explore the influence of alternative proximity metrics on DR performance may prove insightful [74,75]. Whilst consistent results were achieved with two different processing pipelines in this study (Supplementary-Validation Case Studies), we expect proximity metric performance to be impacted to some extent by dataset processing. Therefore, future extensions to the framework design to study the influence of pre-processing could be explored.
Taken together, our findings demonstrate how the inherent structural properties of scRNA-seq data have a substantial influence on the performance of proximity metrics and, resultantly, cell-type clustering and subsequent identification. Given the complexity of scRNA-seq datasets, it is unlikely for a single metric to perform best in all situations. Instead, we have provided practical guidelines for the selection of proximity metrics likely to perform well with respect to specific properties of the dataset. Furthermore, we provide our framework in the form of a python package to allow users to evaluate proximity metrics for their own datasets. The relevance of this study extends beyond cell clustering, to the numerous scRNA-seq analysis methods which make use of cell-to-cell distances. The findings from our study are expected to contribute to improvements in novel metric development for HD, sparse data such as scRNA-seq.
Key Points
• We developed a framework to systematically evaluate the influence of scRNA-seq data structural properties on the clustering performance of proximity metrics.
• Clustering performance can be improved substantially by selection of an appropriate proximity metric and neighbourhood size for the structural properties of a given dataset.
• Clustering performance for many proximity metrics was improved by reducing dataset sparsity and/or dimensionality.
• Popular metrics such as Euclidean distance performed poorly relative to lesser-applied metrics including Cosine, Bray-Curtis and Pearson and Kendall correlations.
• Clustering accuracy with respect to rare cell populations is ineffectively evaluated by ARI and AMI due to their sensitivity to cluster size, and we recommend using size-independent metrics such as the Pair Sets Index for situations where bias based on cluster size is not useful. | 2022-09-25T06:18:04.127Z | 2022-09-23T00:00:00.000 | {
"year": 2022,
"sha1": "ac0b677e5ab3d7b43af2c44a6d859c224e2fabd6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1093/bib/bbac387",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfc2388e089e6627e04c082b3f5acbd52e1bed7c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244881446 | pes2o/s2orc | v3-fos-license | Testis developmental related gene 1 promotes non-small-cell lung cancer through the microRNA-214-5p/Krüppel-like factor 5 axis
ABSTRACT Non-small-cell lung cancer (NSCLC) is a frequent malignancy and has a high global incidence. Long noncoding RNAs (lncRNAs) are implicated in carcinogenesis and tumor progression. LncRNA testis developmental related gene 1 (TDRG1) plays a pivotal role in many cancers. This study researched the biological regulatory mechanisms of TDRG1 in NSCLC. Gene expression was assessed by reverse transcriptase quantitative polymerase chain reaction (RT–qPCR). Changes in the NSCLC cell phenotypes were examined using 5-ethynyl-2ʹ-deoxyuridine (EdU), cell counting kit-8 (CCK-8), wound healing, flow cytometry, and Transwell assays. The binding capacity between TDRG1, microRNA-214-5p (miR‑214-5p), and Krüppel-like factor 5 (KLF5) was tested using luciferase reporter and RNA immunoprecipitation (RIP) assays. In this study, we found that TDRG1 was upregulated in NSCLC samples. Functionally, TDRG1 depletion inhibited NSCLC cell growth, migration, and invasion and accelerated apoptosis. In addition, TDRG1 interacted with miR-214-5p, and miR-214-5p directly targeted KLF5. The suppressive effect of TDRG1 knockdown on NSCLC cellular processes was abolished by KLF5 overexpression. Overall, TDRG1 exerts carcinogenic effects in NSCLC by regulating the miR-214-5p/KLF5 axis.
Introduction
As a commonly diagnosed malignancy, non-small-cell lung cancer (NSCLC) has a high incidence [1]. NSCLC worsens considerably after metastasis by rapidly spreading to other body parts and organs, such as bone, liver and brain [2]. Some studies have suggested that NSCLC occurs more frequently in patients who have undergone heart or lung transplant surgery or those with a long smoking history, especially in advanced-age patients [3,4]. Despite great breakthroughs in NSCLC treatment, the therapeutic effect for advanced NSCLC patients remains unsatisfactory, and the five-year survival rate is merely 18% [5,6]. Therefore, developing novel therapeutic treatments is essential for prolonging NSCLC patients' lives and helping to alleviate their suffering from the effects of cancer and related treatments.
Long noncoding RNAs (lncRNAs), made up of over 200 nucleotides, are unable to be translated into proteins and are regarded as regulatory molecules [7]. Many lncRNAs have been identified to regulate tumorigenesis in cancers in recent years. For example, lncRNA epidermal growth factor receptor-antisense RNA 1 facilitates squamous cell carcinoma cell invasion and migration by sponging miR-145 [8]. The knockdown of lncRNA deleted in lymphocytic leukemia 1 plays an inhibitory role in renal cell carcinoma [9]. Accumulating evidence shows that lncRNAs act as important regulators in NSCLC progression. For example, Kinectin 1-antisense RNA 1 silencing inhibits NSCLC cell proliferation, increases apoptosis, and blocks tumor growth in nude mice [10]. HOXB cluster antisense RNA 3 exacerbates malignant phenotypes of NSCLC cells [11]. Furthermore, recent papers have demonstrated that lncRNA testis developmental related gene 1 (TDRG1) can be a carcinogenic molecule in cancers. TDRG1 enhances cervical cancer cell growth by upregulating mitogen-activated protein kinase 1 [12]. TDRG1 increases cell viability and migration in endometrial carcinoma [13]. Increasing evidence suggests that lncRNAs serve as competing endogenous RNAs (ceRNAs) to modulate the level of tumor-related genes by binding to microRNAs (miRNAs) [14,15]. Moreover, a study demonstrated that TDRG1 silencing inhibits the growth and metastatic ability of NSCLC cells by regulating the miR-873-5p/zinc finger e-box binding homeobox 1 axis [16].
In this study, we further sought to elucidate the molecular mechanisms of TDRG1 in NSCLC. Given its high expression in NSCLC, we hypothesized that TDRG1 may promote NSCLC progression by binding to miRNAs through the ceRNA pattern. We investigated the influences of TDRG1 on cell proliferation, invasion, migration, and apoptosis in NSCLC cells. In addition, the oncogenic mechanism of TDRG1 in NSCLC was also demonstrated. This study may provide new insights for the understanding of TDRG1 in NSCLC.
Tissue samples
NSCLC tissues (n = 40) and adjacent nontumor lung tissues (n = 40) were obtained from NSCLC patients undergoing surgery at the Affiliated Kunshan Hospital of Jiangsu University. The collected samples were frozen in liquid nitrogen. Neither radiotherapy nor chemotherapy was performed on the patients before the surgery. No patients had infectious diseases or histories of treatment aimed at NSCLC. Informed consent was obtained from all participants. The protocol was approved by the Ethics Committee of Affiliated Kunshan Hospital of Jiangsu University.
Cell transfection
TDRG1 was knocked down using specific short hairpin RNAs designated sh-TDRG1#1/2, with control shRNA (sh-NC) used as a negative control. For overexpression of miR-214-5p, miR-214-5p mimics and the control (NC mimics) were constructed. KLF5 was overexpressed using pcDNA3.1 carrying the complete KLF5 sequence (designated pcDNA3.1/KLF5), with empty pcDNA3.1 used as a control; TDRG1 was overexpressed in the same way, by inserting its full-length sequence into the pcDNA3.1 vector. Transfection was performed using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) for 48 h. All plasmids were commercially provided by GenePharma (Shanghai, China). Cells were seeded in 24-well plates at 2 × 10^5 cells/well, transfected with 40 nM shRNA vector or 0.2 μg overexpression vector following the instructions provided with the Lipofectamine 2000 reagent (Invitrogen) as described previously [17], and harvested at 48 h for further analysis.
Reverse transcriptase quantitative polymerase chain reaction (RT-qPCR)
Total RNA was extracted with TRIzol reagent (Invitrogen). Subsequently, an Omniscript RT Kit (Takara, Dalian, China) was used for reverse transcription. RT-qPCR was performed using SYBR Premix Ex Taq (Takara, Osaka, Japan) with a 7900HT Fast Real-Time System (ABI Company, USA). The 2^−ΔΔCt method was used to analyze the expression of TDRG1, miR-214-5p, and KLF5 [18]. U6 served as the normalization control for miR-214-5p expression, while glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served that role for TDRG1 and KLF5 expression. The sequences of the PCR primers are shown in Table 1.
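For illustration, the 2^−ΔΔCt calculation cited above works as follows; the Ct values below are hypothetical, chosen only to show the arithmetic, not taken from the paper.

```python
# Hypothetical worked example of the 2^(-ΔΔCt) method; Ct values are
# illustrative, not taken from the paper.
def fold_change(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Relative expression of a target gene in a sample vs a control
    condition, each normalized to a reference gene (e.g. GAPDH or U6)."""
    dct_sample = ct_target_s - ct_ref_s      # ΔCt in the sample
    dct_control = ct_target_c - ct_ref_c     # ΔCt in the control
    ddct = dct_sample - dct_control          # ΔΔCt
    return 2.0 ** (-ddct)

# TDRG1 crossing threshold 2 cycles earlier in tumor than in normal
# tissue (after GAPDH normalization) implies ~4-fold upregulation.
fold = fold_change(24.0, 18.0, 26.0, 18.0)
print(fold)  # 4.0
```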
Western blotting
Western blotting was performed using a standard and established protocol as previously published [19]. The proteins were collected from NSCLC cells and quantified using a bicinchoninic acid kit (Pierce, Appleton, USA). Subsequently, the protein samples were separated with 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and then transferred onto polyvinylidene difluoride membranes. After being blocked with 5% skim milk, the membranes were probed with primary antibodies (Abcam Inc., USA) labeled with fluorescein, followed by incubation with secondary antibodies (Abcam Inc.). An Odyssey infrared scanner (Li-Cor Bioscience, Lincoln, NE, USA) was used to detect the protein bands. The images of proteins were visualized with chemiluminescent reagent kits (Thermo Fisher Scientific, Waltham, MA, USA). Primary antibodies against the following proteins were used: cyclin A1 (ab13337); CDK2 (ab76146); Bcl-2 (ab32124); Bax (ab32503); GAPDH (ab9484); and KLF5 (ab137676).
Cell counting kit-8 (CCK-8) assay
As previously documented [21], the cells were plated in 96-well plates (5 × 10^3 cells/well) and incubated for 24, 48, and 72 h. At each time point, 10 μl of CCK-8 solution (Kumamoto, Japan) was added to each well for 4 h of incubation. A microplate reader (Multiskan MK3, Thermo Fisher Scientific) at 450 nm was used to detect the results. The experiments were conducted 3 times independently.
Wound healing and Transwell assays
In the cell migration and invasion assays, mitomycin was added to exclude the interference of cancer cell proliferation. The transfected cells were seeded in 6-well plates at 6 × 10^4 cells/well. A sterile 200 μl micropipette tip was used to make an artificial wound when cell confluence reached 95%. Then, the suspended cells were washed away with phosphate-buffered saline. Wound closure was photographed with a phase-contrast microscope (Olympus Corporation, Tokyo, Japan) at 0 and 24 h and quantified with ImageJ software [22]. For the invasion assay, Transwell chambers (Corning Inc., Corning, NY, USA) were precoated with Matrigel. NSCLC cells (5 × 10^4) in serum-free medium were added to the upper chamber. Then, 500 μl of DMEM containing 10% fetal bovine serum was added to the lower chamber. After 24 h, the cells were washed with phosphate-buffered saline, fixed with methanol (Sigma, St. Louis, MO, USA), and stained with 0.1% crystal violet. Cells were visualized with a light microscope (Olympus Corporation) [23].
Flow cytometry-based assay
An annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) double-labeling staining kit (BD Biosciences, San Jose, CA, USA) was used in this assay. The procedure was performed as previously described [24]. The cells (2 × 10^5/well) in 6-well plates were collected, washed twice with cold phosphate-buffered saline, and resuspended in 1× binding buffer. Subsequently, cells were stained with 10 μl of annexin V-FITC for 15 min and 5 µl of PI for 10 min in the dark at room temperature. Cells were examined using a FACSCanto II flow cytometer (BD Biosciences). Analysis of flow cytometry data was performed using FlowJo version X.10.0.7-1 (FlowJo, LLC).
Subcellular fractionation assay
To determine the localization of TDRG1 in NSCLC cells, NE-PER Nuclear and Cytoplasmic Extraction Reagents (Thermo Scientific, USA) were utilized to separate the nuclear and cytoplasmic fractions according to the manufacturer's protocol. Total RNAs were isolated with TRIzol (Invitrogen). Finally, the TDRG1 level was detected by RT-qPCR.
Luciferase reporter assay
The 3′-UTR sequence of KLF5 containing the binding site for miR-214-5p and the complete sequence of TDRG1 were cloned into pmirGLO vectors (Promega, Madison, WI, USA) to generate the KLF5-Wt and TDRG1-Wt vectors. The mutant sequences were constructed to generate the TDRG1-Mut and KLF5-Mut vectors. NC mimics or miR-214-5p mimics were cotransfected with these vectors into A549 and H1299 cells using Lipofectamine 2000 (Invitrogen). A luciferase detection kit (Promega) was applied to measure the luciferase activities after 48 h [25].
RNA immunoprecipitation (RIP) assay
RIP was performed using a Magna RNA-Binding Protein Immunoprecipitation Kit (EMD Millipore, Billerica, MA, USA) [26]. At 90% confluence, cells were centrifuged at 4°C for 5 min at 1,000 × g, washed with precooled phosphate-buffered saline and lysed with radioimmunoprecipitation assay lysis buffer. Subsequently, the lysates were incubated for 10 min at 4°C with human Ago2 antibody (ab186733; Abcam; 5 µg) conjugated to magnetic beads, with IgG antibody (ab172730; Abcam; 5 µg) used as the control group. Samples were treated with Proteinase K for 30 min at 55°C with gentle agitation. Immunoprecipitated RNA was isolated using TRIzol. Coprecipitated RNAs were purified, identified, and analyzed with RT-qPCR.
Statistical analysis
GraphPad Prism software 5.0 was utilized for statistical analysis. One-way analysis of variance or Student's t test was used to evaluate differences among groups. The results are shown as the mean ± standard deviation. Linear correlation analysis was performed using Spearman's correlation coefficient. A p value < 0.05 was considered statistically significant. All experiments were repeated at least three times.
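As a sketch of the two tests named above (synthetic numbers, not the study's data), scipy provides both directly:

```python
# Illustrative use of Student's t test and Spearman correlation with
# synthetic expression values (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tumor = rng.normal(2.0, 0.5, 40)    # e.g. relative TDRG1 levels, n = 40
normal = rng.normal(1.0, 0.5, 40)

t_stat, p_val = stats.ttest_ind(tumor, normal)
print(p_val < 0.05)                 # groups differ at the 0.05 level

# Spearman's rank correlation, e.g. TDRG1 vs KLF5 expression per sample
klf5 = 0.8 * tumor + rng.normal(0.0, 0.1, 40)
rho, p = stats.spearmanr(tumor, klf5)
print(rho > 0.5)                    # strong positive monotonic relation
```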
Results
This study aimed to determine the functional role of TDRG1 in NSCLC. We also investigated the molecular mechanisms underlying the functional role of TDRG1 in NSCLC. Its upregulation was confirmed in NSCLC samples collected in this study. As revealed by functional experiments, TDRG1 served as an oncogenic molecule to promote the proliferation, invasion, and migration of NSCLC cells. In terms of the mechanism, TDRG1 upregulated KLF5 expression by sponging miR-214-5p. Overall, TDRG1 exerts carcinogenic effects in NSCLC by regulating the miR-214-5p/ KLF5 axis.
TDRG1 is upregulated in NSCLC
Before investigating the role of TDRG1, its level in NSCLC was measured. RT-qPCR results showed that TDRG1 was significantly upregulated in NSCLC tissues compared to normal tissues (p = 0.000) (Figure 1(a)). We next sought to examine whether TDRG1 expression correlates with the clinicopathological parameters of NSCLC patients. The median value of TDRG1 expression was used as the cutoff to divide the patients into high (n = 18) and low (n = 22) expression groups. As shown in Table 2, a high TDRG1 level was significantly related to tumor-node-metastasis (TNM) stage (p = 0.013) and lymph node metastasis (p = 0.004). As shown in Figure 1(b), the TDRG1 level in NSCLC cells (A549, H1299, LC-2/ad, GLC-82 and H520) was significantly higher than that in the normal lung cell line MRC-5 (p = 0.000).
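The median-cutoff grouping described above can be sketched as follows (toy expression values, not patient data); note that with a strict comparison, values tied at the median fall into the low group, which is one plausible reason the reported groups are 18 and 22 rather than 20 and 20.

```python
# Hypothetical sketch of a median-cutoff split into high/low expression
# groups; the values are illustrative, not patient data.
import numpy as np

expr = np.array([1.2, 3.4, 0.8, 2.9, 5.1, 2.2, 4.0, 1.9])
cutoff = np.median(expr)
high = expr > cutoff            # strictly above the median -> "high"
low = ~high                     # ties at the median go to "low"
print(cutoff, int(high.sum()), int(low.sum()))
```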
Discussion
NSCLC has a high incidence of complications, and the postoperative survival rate of NSCLC patients is low [27]. LncRNAs have been confirmed in many studies as important participants in cancer progression [9,28]. LncRNA TDRG1 has also been reported to accelerate the development of many malignancies [12,13]. In this research, we discovered that TDRG1 was overexpressed in NSCLC tissues and cells. Moreover, TDRG1 depletion reduced cell proliferation, migration, and invasion and increased apoptosis. These findings confirmed the oncogenic role of TDRG1 in NSCLC.
Furthermore, studies on ceRNA networks have been widely reported in recent years. LncRNAs serve as molecular sponges for miRNAs to influence mRNA expression levels and thereby affect the process of cancers [29–32]. Additionally, TDRG1 participates in the progression of cancers as a ceRNA. For instance, TDRG1 competes with human fibroblast growth factor for sponging miR-873-5p to accelerate the development of gastric carcinoma [33]. As an oncogene, TDRG1 enhances the proliferation of cervical cancer cells by sponging miR-330-5p to upregulate an ETS domain-containing protein [34]. In this research, it was predicted that TDRG1 contains a binding site for miR-214-5p. MicroRNAs (miRNAs), a kind of small ncRNA, are widely reported as regulators in multiple biological processes [35]. Moreover, the role of miR-214-5p in many cancers has been elucidated. For example, miR-214-5p regulates collapsin response mediator proteins to inhibit cell proliferation in prostate cancer [36]. MiR-214-5p suppresses cell invasion and migration in hepatocellular carcinoma [37]. Here, it was confirmed that miR-214-5p was downregulated in NSCLC cells. Additionally, TDRG1 was proven to interact with miR-214-5p and to be negatively related to miR-214-5p. We concluded that TDRG1 acts as a sponge of miR-214-5p.
Furthermore, we identified that Krüppel-like factor 5 (KLF5) was targeted by miR-214-5p in NSCLC cells. KLF5 contributes to cervical cancer by upregulating expression of tumor necrosis factor receptor superfamily member 11a [38]. KLF5 exacerbates thyroid cancer by activating nuclear factor κB signaling [39]. Moreover, KLF5 was reported to be overexpressed in NSCLC and to play an oncogenic role [40,41]. Mounting evidence shows that miRNAs exert regulatory effects by regulating their target mRNAs in the progression of cancers, including NSCLC [41–43]. Moreover, it has been reported that miRNAs participate in tumor progression by targeting KLF5. MiR-145-5p facilitates gastric cancer by binding to the KLF5 3ʹ-UTR [44]. MiR-493-5p suppresses osteosarcoma cell proliferation by downregulating KLF5 [45]. Here, KLF5 was found to be upregulated in NSCLC tissues. We further confirmed that miR-214-5p directly targeted and negatively regulated KLF5. Additionally, TDRG1 upregulated KLF5 expression by sponging miR-214-5p. Rescue assays demonstrated that overexpressing KLF5 rescued the inhibitory effect of TDRG1 silencing on the cellular development of NSCLC.
Conclusion
In summary, this work validated the abnormal expression of TDRG1 in NSCLC tissues and cells and showed that TDRG1 functions as an oncogene in NSCLC to promote cell proliferation, migration, and invasion through the miR-214-5p/KLF5 axis. Therefore, our study suggested that TDRG1 may be a promising diagnostic biomarker and therapeutic target of NSCLC. In the future, we will conduct in vivo experiments to further confirm the role and mechanism of TDRG1 in NSCLC.
Limitation
The present study is not without limitations. First, the clinical sample size of the NSCLC patients should be increased to further verify the clinical significance of our findings. Second, the related signaling pathways targeted by the TDRG1/miR-214-5p/KLF5 axis remain unclear and require further investigation. | 2021-12-04T06:16:36.930Z | 2021-12-02T00:00:00.000 | {
"year": 2021,
"sha1": "3194dbf01fc1086512759edac97dafaa5cd069db",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21655979.2021.2012406?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93c75cb52c11bfd1a4e1cbc9683f945fae5e9f89",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241783828 | pes2o/s2orc | v3-fos-license | Competence, Competency, and Competencies: A Misunderstanding in Theory and Practice for Future Reference
The term competence has been widely applied in the field of human resource and management. This term is also used interchangeably with other terminologies, including competency and competencies. Despite their ubiquitous usage in scientific literature, it remains unclear whether all these terminologies pose the same meaning or should be differentiated in theory and practice. Therefore, an enhanced understanding of these terms is contingent upon a firm grasp of their history and importance. This paper consists of three parts: (1) the definition of terms related to competence, competency, and competencies; (2) the categories of competencies; and (3) a proposed diagram to differentiate between these terms. Several academic journals have served as references in obtaining a clear understanding of the differences between the terms competence, competency, and competencies.
Introduction
The competency-based approach has been utilized in organisational settings to measure the levels of employees' knowledge, skills, and abilities. Similar to several other terms, competence and competency also have different versions of meanings, ranging from general to specific. Sykes (1980) defined 'competency' and 'competence' as the ability to do a task, and these definitions are readily interchangeable with one another. In the management literature, the definitions for 'competence' and 'competency' are inconsistent. For instance, Burgoyne (1989) also defined competence as the ability to perform a task.
However, McClelland (1973); Spencer and Spencer (1993) proposed definitions with specific characteristics, such as motives, traits, and skills which are needed to become a superior performer in the organisation. This second definition is in line with the definition offered by Boyatzis (1982), whereby competence is defined as 'an underlying characteristic of a person, which results in effective and/or superior performance in a job.' More confusion can arise due to the different definitions found in dictionaries and from the management context, despite the fact that these terms are often interchanged in the plural form (Moore et al., 2002).
In addition, the academic context (Dirani et al., 2020;Oberländer et al., 2020) of these terms also differs from the management context (Atan & Mahmood, 2019), which has left it unclear whether to use lay meaning from the dictionary or to apply the management context. At times, inconsistency may occur in terms of which definition to use in similar contexts between the academic and the management side.
Confusion due to inconsistent meanings may affect the outcome of performance management assessment used by an employer to measure the current level of knowledge, skills, and abilities of an employee (Moore et al., 2002). It is also a possible cause of employee conflict in the organisation, which could reduce teamwork performance, since there is more than one person in a group for a particular department. Thus, this issue needs to be addressed to reduce the multiple definitions of competence, competency, and competencies. This paper will discuss the definition of each term, which could guide future scholars in finding the best way to understand each term for future studies which adopt the competency-based approach.
Literature Review
From the pioneering work of 'testing for competence rather than for intelligence', McClelland (1973) argued that traditional intelligence or aptitude tests and school grades are less accurate in predicting an individual's job and life performance. Instead, the underlying personal traits and enduring qualitative behaviours, known as 'competencies', could be used to predict both outcomes more accurately. Since then, many studies have been conducted across various fields of study (Arifin & Rasdi, 2017; Suhairom et al., 2014), in both local (Huei et al., 2019; Mohd Salleh et al., 2015) and international contexts (Frezza et al., 2018; Kakemam et al., 2020), and various definitions of competency have been proposed (Dubois & Rothwell, 2004; Evarts, 1987; Hager et al., 1994; Hoffmann, 1999; McClelland, 1973; Spencer & Spencer, 1993), as shown in Table 1. This paper will also discuss the most apt definitions for competence, competency, and competencies.
Table 1. Definition of competencies by authors in competency studies

- Aiman et al. (2017): A set of personal and job knowledge, skills, abilities, or attitudes for a specific task, job, or profession within a job performance scope.
- McClelland (1973): A set of traits towards effective or superior job performance.
- Boyatzis (1982, 2008): The relationship between an individual and superior job performance.
- Spencer & Spencer (1993): Ability and skills gained through training, and job and life experiences.
- Evarts (1987): Managers' underlying characteristics related to superior performance.
- Hager, Gonczi, and Athanasou (1994): The standard or quality as the outcome of an individual's performance.
- Hoffmann (1999): Underlying qualification and attributes of a person, observable behaviours, and standard on a person's performance.
- Dubois and Rothwell (2004): The combination of knowledge, thought patterns, skills, and characteristics that resulted in a successful performance.
- Cernusca and Dima (2007): A person's underlying criteria that lead to individual performance and career development.
Misinterpretation
The term 'competency' is a 'fuzzy' concept (Wong, 2020) which may lead to misinterpretation because the terms 'competence' and 'competency' can be used interchangeably without proper justifications. The first term, competency, is a person's knowledge, skills, and abilities or attitude. The second term, competence, refers to task-oriented behavioural approaches. Table 3 describes the differences between both terms from various perspectives.
The Issues of Definitions
Competence can be used to refer to areas of work in which the person is competent, the so-called 'areas of competence'. However, when the areas being referred to are the dimensions of behaviour lying behind competent performance, with a meaning that can be regarded as being 'person-related', Woodruffe (1991) recommended that the term 'competency' should be used instead. Similarly, Armstrong (1998) sought to differentiate between 'competence' and 'competency'. Armstrong's perspective was that 'competence' describes what people need in order to be able to perform a job well; the emphasis is on doing (perhaps in terms of achieving the desired output). 'Competency', in contrast, defines dimensions of behaviour lying behind competent performance. These are often referred to as behavioural competencies, because they are intended to describe how people behave when they carry out their jobs. The differences between these two terms can seem overly subtle and may be disregarded by some. Such a possibility would be unfortunate if realised in the context of performance assessment carried out within an organisation. As shown in Table 3, there are several differences between 'competency' and 'competence'. However, people tend to use these terms interchangeably when conducting their research. Zemke (1982) suggested that there is still no standard definition for these terms, since they would be based on the different objectives of each study. Along with 'competence' and 'competency', there is also the term 'competencies'. This term reflects the recognition of an employee who possesses the knowledge, skills, and abilities required by a specific profession. Coming from this perspective, the following characteristics of these key terms are suggested by Moore et al. (2002):

Competence - an area of work supported by an employee's knowledge, skills, and abilities.
Competency - the behaviour(s) supporting an area of work through knowledge, skills, abilities, and attitude.

Competencies - the attributes underpinning a behaviour.
The competence of an employee should be the main concern for a specific task, job, or profession as a reflection of their individual 'competence'. In this context, the results for whether an employee is able to perform a specific competency would be based on their actions against a prescribed standard of competency element and vice versa. When organisations in one country are acquired by organisations from a different country, the differences in terminology will cause greater confusion and conflict, for instance, between organizations and countries in Europe and Asia. This definition confuses behaviour (competency) with outcomes, or area of work (competence). In order to gain some clarification from this confusing situation, an analysis of the differences between countries in Europe and Asia was conducted using different models to offer better definitions of the terms competency, competence, and competencies.
This study proposes the following diagram to explain the position of each term. Figure 1 shows four main terms discussed in this paper, namely competent, competency, competence, and competencies. Competent refers to a condition where a person is able to meet the performance criteria set by the organisation. Competence means the ability to meet the performance criteria (knowledge, skills, abilities, attitude, and behaviours). Competency is a set of knowledge, skills, abilities, attitude, and behaviours. Competencies are different sets of knowledge, skills, or abilities that are transformed into several competency domains to represent a specific task or profession. It is clear that each term carries a distinct meaning. This diagram is in line with most of the definitions proposed by previous authors in competency studies (Boyatzis, 1982, 2008; Cernusca & Dima, 2007; Dubois & Rothwell, 2004; Evarts, 1987; Hager et al., 1994; Hoffmann, 1999; McClelland, 1973; Spencer & Spencer, 1993).
Conclusion
A number of confusions related to the area of study on competence, competency, and competencies have been highlighted in this paper. Apart from the articles written by McClelland (1973) and Moore et al. (2002), there is no consensus among scholars on how to address the misconceptions regarding the definitions of competence, competency, and competencies. In this paper, a new explanation for each term is addressed by proposing a pyramid diagram that can be used as a guide by other HR and management researchers and scholars concerning the future direction of competency-based assessments. A particular aspect of this diagram is to ensure that the right decision is made by HR departments and researchers when adopting a competency-based approach. It is anticipated that future
Plasma ACE2 and risk of death or cardiometabolic diseases: a case-cohort analysis
Background Angiotensin-converting enzyme 2 (ACE2) is an endogenous counter-regulator of the renin–angiotensin hormonal cascade. We assessed whether plasma ACE2 concentrations were associated with greater risk of death or cardiovascular disease events. Methods We used data from the Prospective Urban Rural Epidemiology (PURE) prospective study to conduct a case-cohort analysis within a subset of PURE participants (from 14 countries across five continents: Africa, Asia, Europe, North America, and South America). We measured plasma concentrations of ACE2 and assessed potential determinants of plasma ACE2 levels as well as the association of ACE2 with cardiovascular events. Findings We included 10 753 PURE participants in our study. Increased concentration of plasma ACE2 was associated with increased risk of total deaths (hazard ratio [HR] 1·35 per 1 SD increase [95% CI 1·29–1·43]) with similar increases in cardiovascular and non-cardiovascular deaths. Plasma ACE2 concentration was also associated with higher risk of incident heart failure (HR 1·27 per 1 SD increase [1·10–1·46]), myocardial infarction (HR 1·23 per 1 SD increase [1·13–1·33]), stroke (HR 1·21 per 1 SD increase [1·10–1·32]) and diabetes (HR 1·44 per 1 SD increase [1·36–1·52]). These findings were independent of age, sex, ancestry, and traditional cardiac risk factors. With the exception of incident heart failure events, the independent relationship of ACE2 with the clinical endpoints, including death, remained robust after adjustment for BNP. The highest-ranked determinants of ACE2 concentrations were sex, geographic ancestry, and body-mass index (BMI). When compared with clinical risk factors (smoking, diabetes, blood pressure, lipids, and BMI), ACE2 was the highest ranked predictor of death, and superseded several risk factors as a predictor of heart failure, stroke, and myocardial infarction. 
Interpretation Increased plasma ACE2 concentration was associated with increased risk of major cardiovascular events in a global study. Funding Canadian Institutes of Health Research, Heart & Stroke Foundation of Canada, and Bayer.
Introduction
The renin-angiotensin system is a hormonal cascade whose modulation has resulted in several effective cardiovascular disease therapeutics. Decades of research and clinical practice have focused on the pressor arm of renin-angiotensin system. Angiotensin-converting enzyme (ACE) cleaves angiotensin I to angiotensin II, which acts on the type 1 angiotensin II receptor. Recent evidence has also shed light on an important counterbalancing component of the renin-angiotensin system axis, through the action of ACE2. In brief, ACE2 cleaves angiotensin II into the heptapeptide angiotensin 1-7, which acts on the Mas receptor pathway, which is widely believed to exert protective effects, including vasodilation and inhibition of fibrosis. [1][2][3] There is a global effort to better understand ACE2, the receptor via which severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the betacoronavirus responsible for COVID-19, enters cells. [4][5][6] ACE2 is a known regulator of cardiac function and dysregulation of this system is worth further examination, particularly given that a considerable proportion of individuals who are admitted to hospital for COVID-19 exhibit signs of cardiac damage with subsequent poor prognosis. 7 Small clinical studies suggest that increased circulating ACE2 activity and concentration might serve as a marker of poor prognosis in individuals with various cardiovascular diseases, but no study has provided data in a large cohort drawn from the general population. [8][9][10][11][12] The Prospective Urban Rural Epidemiology (PURE) study provides an opportunity to examine the association between ACE2 levels with future cardiovascular disease events and deaths in a prospective global communitybased cohort. 
In this study, we aim to (1) understand the role of demographic and clinical characteristics as potential determinants of plasma ACE2 concentration; (2) describe the association of plasma ACE2 as a risk marker for cardiovascular disease and death; and (3) describe the relative importance of plasma ACE2 as a risk marker for cardiovascular disease events and deaths compared with established cardiovascular disease risk factors.
Study design and participants
PURE is a large prospective study of individuals in 27 lowincome, middle-income, and high-income countries (appendix p 2). Participant recruitment and selection is described in the appendix (p 2) and has been described in detail in previous papers. 13 A biobanking initiative was developed for a subset of PURE participants to assess genomic and proteomic markers of chronic disease risk. Blood samples from participants were shipped from 14 countries (ie, Argentina, Bangladesh, Brazil, Canada, Chile, Colombia, Iran, Pakistan, Philippines, South Africa, Sweden, Tanzania, United Arab Emirates, and Zimbabwe) to the Population Health Research Institute (Hamilton, ON, Canada) and stored at -165°C. Samples were considered eligible if they belonged to individuals from the major self-reported ethnicity in the residing country (eg, European ancestry in Sweden). Samples were deemed ineligible if they were unsuitable for analysis or were non-fasting.
Briefly, we took a random sample from the pool of 55 246 eligible participants. This random sample is known as the subcohort. Because it is a random sample of the pool of eligible participants, it will include some participants with incident events of interest. We then also include all individuals who have incident events of interest that were not selected as part of the subcohort for analyses. The final sample consists of participants who were members of the subcohort and those who had incident events outside the subcohort (appendix p 9). Our outcome events of interest included death, myocardial infarction, stroke, heart failure, and diabetes. This study design permits cost-effective, unbiased assessment of the exposure-outcome relationship of the original cohort from which it was sampled. Details on case-cohort sampling methods used for PURE and the participant flow diagram are in the appendix (p 7).
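The sampling scheme just described (a random subcohort plus all incident cases falling outside it) can be sketched in a few lines. The identifiers, counts, and seed below are hypothetical, not the PURE data.

```python
import random

def case_cohort_sample(eligible_ids, case_ids, subcohort_size, seed=0):
    """Random subcohort from all eligible participants, plus every
    incident case that fell outside the subcohort."""
    rng = random.Random(seed)
    subcohort = set(rng.sample(sorted(eligible_ids), subcohort_size))
    cases_outside = set(case_ids) - subcohort
    return subcohort, cases_outside

eligible = range(1, 1001)          # 1000 hypothetical eligible participants
cases = {5, 42, 300, 777, 901}     # hypothetical incident-event participants
sub, extra = case_cohort_sample(eligible, cases, subcohort_size=100)
# Final analysis set = subcohort members plus cases outside the subcohort.
analysis_set = sub | extra
```

Because the subcohort is a simple random sample, it may itself contain some cases; only the cases outside it are added, so no participant is counted twice.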
The study was approved by research ethics committees at each study centre and at Hamilton Health Sciences (Hamilton, ON, Canada). All study participants provided written informed consent.
Protein measurement
Plasma ACE2 concentration was measured using an immunoassay based on proximity extension assay technology (Olink PEA CVD-II panel; Uppsala, Sweden). 14 A 1·8 mL aliquot of plasma from each PURE participant was transported to the Clinical Research Laboratory and
Research in context
Evidence before this study We performed a systematic search of MEDLINE for relevant articles published between Jan 1, 2000 (the year of discovery of angiotensin-converting enzyme 2 [ACE2]), and May 12, 2020, restricted to the English language. Our search terms included "ACE2", "cardiovascular disease," "genome wide association study", and "Mendelian randomization". We searched published articles by title and abstract to identify relevant studies and additionally hand searched reference lists of eligible studies. We considered studies that assessed the relationship between plasma ACE2 concentration and cardiovascular disease. Although the search does not represent an exhaustive list of all available research, existing evidence from small clinical studies suggest there is an association of increased ACE2 in the plasma and poorer cardiovascular disease outcomes in those with preexisting disease. There are no robust data on the importance of plasma ACE2 in general populations.
Added value of this study
This study provides the largest epidemiological analysis of the circulating biomarker ACE2 in a general population. 10 753 people were analysed from a large global cohort using a nested case-cohort design. Our study population includes participants from 14 countries and seven distinct ancestral groups over 9·4 years of follow-up. We find that increased circulating ACE2 is strongly associated with increased risk of death, cardiovascular disease, and diabetes. Notably, circulating ACE2 is the highest-ranked predictor of death when compared among a set of clinical risk factors (smoking, diabetes, blood pressure, lipids, and body-mass index [BMI]) and supersedes other common risk factors as a predictor of myocardial infarction, stroke, and heart failure risk. With the exception of incident heart failure events, the independent relationship of ACE2 with clinical endpoints remained robust even after adjustment for brain natriuretic peptide (BNP). ACE2 levels are higher in men, older people, those with a smoking history, diabetes, higher BMI, higher blood pressure, and higher blood lipids. There are also wide variations in concentration across ancestral groups (with south Asians having the lowest levels of plasma ACE2 and east Asians having the highest levels in our sample). Plasma ACE2 is a heritable trait and our examination of common genetic variants through a genome-wide association study uncovered two loci at genome-wide significance. One locus was near the ACE2 gene and the other was near the HNF1A gene, which previous literature suggests induces higher cellular ACE2 expression levels in pancreatic islet cells. We also provide evidence that plasma ACE2 has important metabolic implications as evidenced by its relationship with BMI and association with incident diabetes.
Implications of all the available evidence
Plasma ACE2 is strongly associated with death, cardiovascular disease, and metabolic abnormalities in a multiancestral global cohort drawn from the general population. The relationship of this non-canonical marker of hormonal dysregulation with cardiovascular events and death, independent of traditional cardiac risk factors and BNP, suggests that understanding and modulating this arm of the renin-angiotensin system might lead to new approaches to reducing cardiovascular disease.
Biobank in Hamilton, ON, Canada. Data generated are expressed as relative quantification on the log2 scale of normalised protein expression (NPX) values. Although NPX values are relative quantification units, the OLink platform has been extensively validated and previous work shows strong relationships between measurements from the multiplex OLink panel and singleplex assays of the same markers with absolute units. 8,14 Individual samples were excluded on the basis of quality controls for immunoassay and detection, as well as degree of haemolysis (appendix p 19). NPX values were rank-based normal transformed for further analyses.
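A rank-based inverse normal transform of the kind applied to the NPX values can be sketched as follows. The rank offset c = 3/8 (Blom) is an assumption; the paper does not state which offset was used, and this sketch breaks ties by position rather than averaging them.

```python
from statistics import NormalDist

def rank_inverse_normal(values, c=3/8):
    """Map each value to a standard-normal quantile based on its rank."""
    n = len(values)
    # Assign ranks 1..n by sorted order (ties broken by position here;
    # production implementations usually average tied ranks).
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    nd = NormalDist()
    # Blom-style plotting positions, then the normal quantile function.
    return [nd.inv_cdf((r - c) / (n - 2 * c + 1)) for r in ranks]

npx = [1.2, 3.4, 2.2, 0.7, 5.1]
z = rank_inverse_normal(npx)
# The transform preserves the ordering of the raw values and centres
# the result near 0 with SD near 1.
```

The transform discards the raw scale entirely, which is one reason relative NPX units pose no problem for downstream regression analyses.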
Genotyping and genetic analysis
PURE participants suitable for proteomics analyses were genotyped on the Thermofisher Axiom Precision Medicine Research Array (appendix p 20). To assess the robustness of the clinical determinants of ACE2 emerging from cross-sectional analyses, we did two-sample Mendelian randomisation, a causal inference technique using genetic variants to approximate effects of an exposure (ie, clinical risk factors) on an outcome (ie, ACE2 levels). If an exposure is causally related to an outcome, genetic variants associated with the exposure should affect the outcome in a manner directionally consistent and proportional to the effect size on the exposure. 15,16 Well conducted Mendelian randomisation analyses yield estimates that are robust to residual confounding and reverse causation. 17 To account for differences in genetic architecture that exist between ancestral groups, genetic analyses were done within each group and meta-analysed to obtain a final estimate (appendix p 21). Ancestral groups with less than 1000 individuals per group were excluded from genetic analysis to ensure stability of individual ancestral estimates (appendix p 21). We applied this same Mendelian randomisation approach to assess whether anti-hypertensive therapies (ACE inhibitors, calcium channel blockers, and β blockers) influenced circulating ACE2 levels. More details on the genetic analysis are in the appendix (p 20).
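As a rough illustration of two-sample Mendelian randomisation, the inverse-variance-weighted (IVW) estimator combines per-variant Wald ratios (beta_outcome / beta_exposure). This is one standard MR estimator, not necessarily the exact method the paper used, and the summary statistics below are hypothetical.

```python
def ivw_mr(beta_exp, beta_out, se_out):
    """Inverse-variance-weighted MR estimate from per-variant summaries.

    Each variant contributes the Wald ratio beta_out/beta_exp, weighted
    by beta_exp**2 / se_out**2 (the standard fixed-effect IVW weight).
    """
    w = [be ** 2 / se ** 2 for be, se in zip(beta_exp, se_out)]
    ratios = [bo / be for bo, be in zip(beta_out, beta_exp)]
    est = sum(wi * ri for wi, ri in zip(w, ratios)) / sum(w)
    se = (1.0 / sum(w)) ** 0.5
    return est, se

# Hypothetical summary statistics for three exposure variants
# (e.g. BMI variants) against plasma ACE2:
est, se = ivw_mr(beta_exp=[0.10, 0.08, 0.12],
                 beta_out=[0.05, 0.03, 0.07],
                 se_out=[0.01, 0.01, 0.02])
```

If the exposure truly affects the outcome, each variant's ratio should point the same way, which is why directional concordance across variants is itself informative.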
Statistical analysis
Means and SDs for continuous variables and numbers with proportions for baseline characteristics are presented for the subcohort and for each event outside the subcohort. A descriptive analysis was done using an ordinary least-squares regression with plasma ACE2 concentration as the outcome in a cross-sectional analysis at baseline. Mutually adjusted effects of the following independent variables on ACE2 levels were presented: age (in years), sex (male vs female), diabetes (yes vs no), smoking (never smoker vs current or former smoker), body-mass index (BMI; kg/m²), systolic blood pressure (mm Hg), LDL cholesterol (mmol/L), and geographic ancestry (African, Arab, east Asian, European, Latin, Persian, or south Asian ancestry). Each predictor was ranked on the basis of the magnitude of its likelihood ratio χ² value comparing the full model to the reduced model without that predictor. 18 Association of anti-hypertensive therapies (ACE inhibitors, calcium channel blockers, β blockers, diuretics, and angiotensin receptor blockers) with plasma ACE2 was assessed in an ordinary least-squares regression where each medication was dummy coded. Effects were mutually adjusted and additionally adjusted for age, sex, BMI, smoking, diabetes, blood pressure, and geographic ancestry in individuals with hypertension.
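The likelihood-ratio ranking described above, comparing the full regression against a reduced model omitting one predictor at a time, can be sketched for Gaussian ordinary least squares, where the statistic reduces to n·ln(RSS_reduced / RSS_full). The data and predictor names below are hypothetical, and the pure-Python solver is only meant for tiny examples.

```python
import math

def _solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def _ols_rss(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = _solve(XtX, Xty)
    return sum((y[i] - sum(X[i][a] * beta[a] for a in range(p))) ** 2
               for i in range(n))

def rank_predictors(X, y, names):
    """Rank predictors by the LR statistic n*ln(RSS_reduced/RSS_full)."""
    n = len(X)
    rss_full = _ols_rss(X, y)
    chi2 = []
    for j, name in enumerate(names):
        Xr = [[v for k, v in enumerate(row) if k != j] for row in X]
        chi2.append((name, n * math.log(_ols_rss(Xr, y) / rss_full)))
    return sorted(chi2, key=lambda kv: -kv[1])

# Hypothetical data: y depends strongly on x1 and only weakly on x2.
x1 = [0, 1, 2, 3, 4, 5, 6, 7]
x2 = [1, 0, 1, 0, 1, 0, 1, 0]
y = [0.6, 1.8, 4.7, 5.9, 8.5, 10.1, 12.4, 14.0]
X = [[1.0, a, b] for a, b in zip(x1, x2)]
ranking = rank_predictors(X, y, ["intercept", "x1", "x2"])
```

Because the models are nested, each statistic is non-negative, and the strongest predictor (here x1) produces the largest drop in fit when removed.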
Modelling of cardiovascular events was done to account for oversampling because of the case-cohort design (appendix p 18). The following incident outcomes were analysed: total deaths, cardiovascular deaths, noncardiovascular deaths, myocardial infarction, stroke, heart failure, and incident diabetes. The association measure was presented as a hazard ratio (HR) per 1 SD unit increase in the marker, adjusted for the following: age, sex, smoking, BMI, systolic blood pressure, non-HDL cholesterol, and geographic ancestry. Each outcome was also adjusted for diabetes status; however, in the diabetes analysis, individuals with confirmed diabetes status were excluded. For death and cardiovascular disease outcomes, ACE2 was then compared with other commonly used risk factors (smoking status, diabetes status, systolic blood pressure, non-HDL cholesterol, and BMI) and ranked on the basis of magnitude of the Wald χ² value. Analyses were done with R, version 3.6.2 and SAS, version 9.3.
Role of the funding source
Funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the manuscript. The corresponding author and coauthors (SN, SY, MC, CR, SIB, SR, MP, and AW) had full access to the all the data in the study and had final responsibility for the decision to submit for publication.
Results
This study used samples collected from patients recruited between Jan 5, 2005, and Dec 31, 2006. From the pool of 55 246 eligible participants, we randomly selected 5693 patients, of whom 5084 were included as our subcohort after exclusions related to quality control, data quality, and missingness; 6373 individuals had at least one incident event of interest (including death, myocardial infarction, stroke, heart failure, and diabetes), of whom 5669 were included in analyses after exclusions (table 1; appendix pp 7-9). The final sample included 10 753 participants. Median follow-up was 9·42 years (IQR 8·74-10·48). Genetic analyses were limited to the following groups containing more than 1000 individuals: Latin (n=4058), European (n=3372), and Persian (n=1269).
In analyses of the determinants of plasma ACE2 levels, sex accounted for the most variation, followed by geographic ancestry, BMI, diabetes status, age, systolic blood pressure, smoking status, and LDL cholesterol (figure 1A). Men had higher plasma ACE2 levels than women (0·58 SD units [95% CI 0·54-0·61] higher in men), and concentrations varied widely by geographic ancestry. There was an estimated 0·69 SD unit (95% CI 0·56-0·82) difference between the ancestral group with the lowest plasma ACE2 levels (south Asians) and those with the highest plasma ACE2 levels (east Asians; appendix p 33). Higher BMI, older age, diabetes, higher blood pressure, higher LDL cholesterol, and smoking were all associated with increased levels of circulating ACE2. To investigate whether the associations between clinical risk factors and ACE2 levels were potentially causal, we did Mendelian randomisation analyses for the clinical risk factors identified in the previous analysis. Directionally concordant with the phenotypic associations, genetically higher BMI and greater risk of type 2 diabetes were associated with increased levels of plasma ACE2 (figure 1B). Conversely, although point estimates were similar to their phenotypic estimates, genetic predisposition to smoking, increased LDL cholesterol, and increased systolic blood pressure were not significantly associated with plasma ACE2 levels.
A cross-sectional analysis of common anti-hypertensive medications and their relationship with plasma ACE2 levels was done in a subset of the patients with hypertension (n=5216). We found no association between plasma ACE2 levels and use of ACE inhibitors, angiotensin-receptor blockers, β blockers, calcium channel blockers, or diuretics (appendix p 31). Results of our Mendelian randomisation-based approach using instrumentations of ACE inhibitors, β blockers, and calcium channel blockers were concordant with these null findings (appendix p 30).
ACE2 was compared with clinical risk factors (diabetes, BMI, smoking status, non-HDL cholesterol, and systolic blood pressure) in its relationship with each outcome. Models were adjusted for clinical risk factors, age, sex, and ancestry. Compared with these clinical risk factors, ACE2 was the highest-ranked predictor of total deaths (figure 2A; appendix p 34), cardiovascular deaths (figure 2B), and non-cardiovascular deaths (figure 2C), the third-highest ranked predictor of myocardial infarction (after smoking and diabetes, and similar to non-HDL cholesterol; figure 2D), and the third-highest ranked predictor of both stroke and heart failure (after systolic blood pressure and diabetes; figure 2E, F). As a complementary analysis, we did a minimally adjusted analysis of continuous modifiable risk factors (blood pressure, BMI, and non-HDL cholesterol; appendix p 24). In this analysis, ACE2 had the strongest association of these factors with cardiovascular deaths (HR 1·52 per SD [95% CI 1·38-1·68]). The second strongest risk factor was systolic blood pressure (HR 1·32 per SD [1·22-1·43]). In a resampling-based analysis, ACE2 consistently emerged among the top predictors of total deaths along with diabetes and smoking status (appendix p 39). We examined whether associations between ACE2 and events were consistent in different subpopulations by doing a subgroup analysis for each risk factor. Although male sex is associated with higher levels of ACE2 concentration, our subgroup analysis suggests there is no differential association of ACE2 with events by sex.
(Table 2: data are hazard ratios [95% CI] per 1 SD increase in ACE2, where 1 SD of ACE2=0·73 normalised protein expression units. Model 1 is the unadjusted Cox model. Model 2 is controlled for age, sex, and ancestry. Model 3 is controlled for age, sex, ancestry, systolic blood pressure, non-HDL cholesterol, smoking, and diabetes. ACE2=angiotensin-converting enzyme 2. BNP=brain natriuretic peptide. Patients with diabetes at baseline were excluded from the analysis of incident diabetes.)
We did an additional analysis comparing the association of ACE2 and brain natriuretic peptide (BNP) with deaths and cardiovascular events (table 2). When additionally adjusted for BNP, increased ACE2 remained associated with greater risk of death, myocardial infarction, stroke, and diabetes. The relationship of ACE2 with heart failure was directionally consistent but attenuated upon adjustment for BNP.
Discussion
We present the first large-scale epidemiological analysis of blood ACE2 levels as a marker of cardiovascular disease. Our study uses a community-based, prospective (median follow-up time 9·42 years [IQR 8·74-10·48]) design to clarify the importance of the counter-regulatory axis of the renin-angiotensin system in determining cardiovascular disease endpoints. We found higher levels of plasma ACE2 are associated with greater risk of death, cardiovascular and non-cardiovascular deaths, stroke, myocardial infarction, diabetes, and heart failure independent of age, sex, ancestry, and traditional cardiac risk factors. The results, including the relationship with all-cause deaths and all cardiovascular disease events, except for heart failure, remained robust even after adjustment for BNP. Blood ACE2 concentration, in comparison to established risk factors, was the highest ranked predictor of death, cardiovascular and non-cardiovascular deaths, and superseded many other common risk factors in explaining variation in stroke, myocardial infarction, and heart failure. We observed that male sex, higher blood pressure, smoking, higher BMI, and older age were all associated with higher levels of circulating ACE2 concentration. To our knowledge, our Mendelian randomisation analysis is the first to show a potential causal role for adiposity in determining ACE2 concentration.
Past work on plasma ACE2 assessed the prognostic implications of circulating ACE2 in patients with established cardiovascular disease. Many of the previous studies examining blood ACE2 in cardiovascular disease in humans examined ACE2 catalytic activity, particularly by means of a quenched fluorescent substrate assay. ACE2 concentration has previously been measured chiefly in urine samples; however, there has been increased use of assays measuring blood ACE2 concentration. The relationship between blood ACE2 concentration and ACE2 activity might require further study in different patient populations; however, a previous investigation showed a strong correlation between catalytic activity and concentration. 21 Furthermore, studies measuring ACE2 activity show concordant patterns to ours in terms of the factors determining higher circulating concentrations as well as how ACE2 relates to clinical events prospectively. Specifically, among individuals with obstructive coronary disease, atrial fibrillation, heart failure, and aortic stenosis, increased activity of ACE2 corresponded with an increased risk of impaired functional status and adverse cardiac events. [8][9][10][11][12] This implies that the increase of circulating ACE2 in these populations acts as a marker of disease or its severity. However, a previous investigation suggested that ACE2 delivered in a recombinant manner reduces deleterious angiotensin II and upregulates protective angiotensin 1-7 in a prospective heart failure cohort. 22 Our findings with a highly sensitive protein assay suggest that plasma ACE2 is worth examining further as a marker of a dysregulated renin-angiotensin system, even in an apparently healthy population, and suggest that increased ACE2 is associated with increased risk of cardiovascular disease and death. Our findings of factors associated with plasma ACE2 are likewise notable. Sex explained the most variation in circulating ACE2 levels.
This is consistent with numerous previous studies showing marked differences in circulating ACE2 activity and concentration between men and women. 8,11,12,23 Differences in ACE2 expression between sexes have also been reported in various human tissues, including adipose tissue, the heart, and the renal cortex. 20 Given that the gene encoding ACE2 is located on the X chromosome, escape from X chromosome inactivation might also play a part in observed differences in ACE2 between men and women. However, the biological implications of increased ACE2 concentrations in men and their relationship to sex-differential predisposition to cardiovascular disease remain poorly understood, although our subgroup analysis suggests there is little evidence for a heterogeneous effect of ACE2 between sexes. Because of the global nature of our cohort, we were able to detect variations in plasma ACE2 levels by geographic ancestry, which is consistent with observations that different ancestral groups might have marked variation in plasma protein concentration. 24 Future efforts in examining circulating ACE2 should consider this variation, particularly when using reference panels not developed for local populations. Our finding that plasma ACE2 had a strong association with incident diabetes is noteworthy, in part because of its compelling biological basis. For example, diabetic ACE2 knockout mice show increased fibrosis and weakened ACE inhibitor response as it relates to hypertension and renal protection. 25 Specimens of kidney tissue from individuals with diabetic nephropathy have lower renal ACE2 expression than tissue from individuals with two healthy kidneys. 26 Diabetes models likewise show alterations in activity of endogenous ACE2 regulators, such as ADAM-17. 27,28 The Mendelian randomisation associations of BMI and diabetes with ACE2 blood concentration strengthen the case for ACE2 as a metabolic marker.
HNF1A, the gene near the variant (rs2464190) associated with ACE2 levels, is a master regulator of metabolism; other studies have linked this specific variant with susceptibility to coronary artery disease and type 2 diabetes. 29,30 Furthermore, HNF1A is probably a direct regulator of ACE2 levels because the ACE2 promoter region contains three HNF1A binding sites, and within pancreatic islet cells, HNF1A induces higher cellular ACE2 expression levels. 28 The ACE2 receptor facilitates viral entry for SARS-CoV-2. In patients with COVID-19, the ACE2 receptor might play a role in cardiovascular complications such as thrombosis, cardiac injury, and heart failure. ACE2 is a possible link between SARS-CoV-2 and the cardiac presentations described in findings that have emerged from global data during the COVID-19 pandemic. 31 Recent discussion surrounding SARS-CoV-2 has centred on altering hypertension medication management to account for concerns that ACE inhibitors or angiotensin II receptor blockers might increase viral entry through ACE2. 31,32 Our findings, as well as parallel analyses examining the effects of ACE inhibitors or angiotensin II receptor blockers on ACE2, do not support altering antihypertensive treatment regimens for the sole purpose of modifying ACE2. [33][34][35] However, well conducted randomised controlled trials will be needed to make more definitive claims about the role of ACE inhibitors and angiotensin receptor blockers in COVID-19 prognosis.
Our study has some limitations. In our cross-sectional analysis, we were unable to fully account for unmeasured confounding and reverse causality with regards to how demographic and clinical factors determine ACE2 concentrations. However, our Mendelian randomisation analyses support BMI as a potentially modifiable risk determinant of ACE2 concentrations. Our Mendelian randomisation approach did not support the hypotheses that smoking, blood pressure, or lipids have an effect on plasma ACE2 concentration. Second, although our analysis shows ancestry accounts for variability in plasma ACE2 levels in our sample, the study design and methods used were not suitable for distinguishing genetic from environmental effects or addressing clinical implications of the observed differences in ACE2 levels between groups. Third, plasma ACE2 requires special consideration with regards to biological interpretation. Although increased cell-bound ACE2 exerts protective effects against cellular proliferation, hypertrophy, oxidative damage, and vasoconstriction, the mechanism by which levels rise in the plasma remains an area of active research. It is likely that a complex interaction between cellular expression, enzymatic cleavage, and impaired plasma clearance affects plasma concentrations. We note that a genetic variant associated with plasma ACE2 levels (rs5936022) in our study is associated with increased expression in the heart, brain, and vasculature, suggesting that increased blood levels also reflect increased ACE2 synthesis. However, limitations in our biological understanding of plasma ACE2 still preclude inference on function at the tissue level. Fourth, although we assessed risk factors for their causal effect on ACE2 concentrations using Mendelian randomisation, our genome-wide association study only detected one variant near the ACE2 gene at genome-wide significance, despite an overall heritability of 33-66%.
Future Mendelian randomisation-based analyses of ACE2 as a causal marker of cardio vascular disease outcomes will require further power to robustly detect suitable instruments.
Plasma concentration of ACE2 shows an independent association with cardiovascular disease, including death, myocardial infarction, stroke, heart failure, and diabetes in a global population-based study. Compared with established clinical risk factors, ACE2 consistently emerges as a strong predictor of cardiovascular disease or death. Regardless of cause, plasma ACE2 might present a readily measurable indicator of renin-angiotensin system dysregulation. Our primary means of modulating the renin-angiotensin system cascade has focused on therapies dampening the pressor arm using agents such as ACE inhibitors and angiotensin receptor blockers. Modulation of ACE2 and the counterbalancing arm might represent an important therapeutic frontier, and clinical trials are underway to this effect.
Contributors
SN and GP conceived the study. SN conducted data analyses and wrote the first draft of the manuscript. GP supervised all the analyses, assumes responsibility for analyses, and assumes responsibility for data interpretation. GP conceived and organised the PURE biomarker and genetics study. SY is the principal investigator of PURE, planned the biobank, stored blood and urine samples, and reviewed and commented on drafts of the manuscript. MC organised the genotyping pipeline, conceived and conducted the genetic data analyses, helped in writing the manuscript, reviewed the drafts, and commented on it. CR conducted data analyses and reviewed and commented on the manuscript. AW and MP constructed the dataset, including the biomarker measurements in each participant, and reviewed and commented on the drafts of the manuscript. SIB reviewed and commented on the data analysis. SR coordinated the worldwide study and reviewed and commented on drafts of the manuscript. MvE and KL reviewed and commented on drafts of the manuscript.
Performance of joint modelling of time-to-event data with time-dependent predictors: an assessment based on transition to psychosis data
Joint modelling has emerged as a potential tool to analyse data with a time-to-event outcome and longitudinal measurements collected over a series of time points. Joint modelling involves the simultaneous modelling of the two components, namely the time-to-event component and the longitudinal component. The main challenges of joint modelling are its mathematical and computational complexity. Recent advances in joint modelling have seen the emergence of several software packages which have implemented some of the computational requirements to run joint models. These packages have opened the door for more routine use of joint modelling. Through simulations and real data based on transition to psychosis research, we compared joint model analysis of a time-to-event outcome with conventional Cox regression analysis. We also compared a number of packages for fitting joint models. Our results suggest that joint modelling does have advantages over conventional analysis despite its potential complexity. Our results also suggest that the results of an analysis may depend on how the methodology is implemented.
h_i(t) = h_0(t) exp(γᵀω_i + α m_i(t)), i = 1, 2, …, n, t > 0. (1)

Here h_0(t) denotes the baseline hazard rate, ω_i is a vector of baseline predictors (e.g. treatment indicator, gender, age, etc.) and γ is the corresponding vector of regression coefficients. The time-dependent predictor is represented by m_i(t), with α being the corresponding coefficient vector. A commonly used model for m_i(t) is the linear mixed-effects model with a random intercept and slope, in which the observed value at time t is m_i(t) plus a random error ε_i(t).

The JM package is very versatile and allows many variations in the fitting of joint models. Firstly, it allows the baseline hazard to be left unspecified, to take the form of the hazard corresponding to the Weibull distribution for the event times, or to be approximated by (user-controlled) piecewise-constant functions or splines. For ordinary Cox regression, the baseline hazard is usually left unspecified. This is of course a well-known advantage of Cox regression: it avoids the restriction resulting from specifying a certain form for the baseline hazard while still offering valid statistical inference through the use of partial likelihood. However, in the context of joint modelling, this advantage no longer holds, because a completely unspecified baseline hazard will generally lead to underestimation of the standard errors.

The longitudinal data were generated as follows:
(iii) a_0 and a_1 were the fixed effects with given values.
(iv) b_0i and b_1i were the random effects generated from a bivariate normal distribution with mean 0 and a given covariance matrix.
(v) ε_i(t) was the random error generated from a normal distribution with mean 0 and a given variance.

The time-to-event data were generated as follows:
(i) The hazard rate, h_i(t), for subject i at time t (t = 0, 1, 2, …, 364), was computed from model (1).
(iv) If T_i ≤ C_i, the survival status for subject i was taken to be 1 and the time to event occurrence was taken to be T_i. Otherwise, the survival status was taken to be 0 and the censoring time was taken to be C_i.

To complete the generation of the simulated data, the data collection of the time-dependent predictor was taken to occur at regular time points, specifically at day 0 (i.e. baseline) and then at 30-day intervals thereafter. Also, the data of the time-dependent predictor were taken to be unavailable after the event time or censoring time, whichever was applicable. Therefore, for each subject, non-missing data for the time-dependent predictor were taken to be those at days 0, 30, 60 and so on, up to the measurement occasion prior to the event time or censoring time. Any post-event or post-censoring data were not used.
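The data-generation steps above can be sketched in Python. Only a_0 = 40, the day range 0–364, the 30-day visit schedule, and the censoring rule come from the text; the baseline hazard, association parameter, slope, covariance, and noise values below are illustrative assumptions, and the per-day Bernoulli draw is a simple discrete-time stand-in for generating event times from the hazard.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_subject(a0=40.0, a1=-0.05, h0=1e-3, alpha=0.05,
                     cov=((4.0, 0.0), (0.0, 0.01)), sigma=2.0,
                     max_day=364, visit_gap=30):
    """Simulate one subject: a linear mixed-effects trajectory m_i(t) and a
    discrete-time event draw from the hazard h_i(t) = h0 * exp(alpha * m_i(t))."""
    b0, b1 = rng.multivariate_normal([0.0, 0.0], cov)  # random intercept/slope
    days = np.arange(max_day + 1)
    m = (a0 + b0) + (a1 + b1) * days                   # true trajectory m_i(t)
    hazard = h0 * np.exp(alpha * m)
    # event occurs on the first day a Bernoulli(hazard) draw fires
    event_days = np.nonzero(rng.random(days.size) < hazard)[0]
    T = int(event_days[0]) if event_days.size else max_day + 1
    C = max_day                                        # administrative censoring
    status, time = (1, T) if T <= C else (0, C)
    # observed marker = trajectory + noise, at 30-day visits up to the event
    visits = np.arange(0, time + 1, visit_gap)
    obs = m[visits] + rng.normal(0.0, sigma, visits.size)
    return time, status, visits, obs

time, status, visits, obs = simulate_subject()
```

Repeating `simulate_subject` for many subjects yields longitudinal and survival data in the form that joint-modelling packages such as JM, joineR, or stjm take as input.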
The parameters associated with the simulations were given the following values: the fixed-effect intercept, a_0, was given the value 40.

A further option in the JM package is to approximate the baseline hazard by splines. This is an alternative semi-parametric approach following the same rationale as using a piecewise-constant function.
The JM package offers two options for numerical integration: the standard Gauss-Hermite rule and the pseudo-adaptive Gauss-Hermite rule. It has been shown that the latter can be more efficient.

For an unbiased method, the percentage of estimates less than the true parameter value should be around 50 (as illustrated in the boxplots of Figures 1a and 1c). Table 2 shows these results for the estimation of α in each set of simulations. For the confidence intervals, it is expected that good performance should correspond to a coverage of approximately 95%, say 90% or more. For unbiasedness, as mentioned above, the percentage of estimates less than the true parameter value should be approximately 50 for good performance, say between 40 and 60. The shaded entries in Table 2 are those scenarios which did not perform well. It can be seen that joineR and stjm tended to show better results. Note also that, for a small number of the simulated datasets, the estimates were not available when JM was used with a piecewise-constant baseline hazard due to convergence problems.
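The Gauss-Hermite rules mentioned here approximate integrals against the weight exp(-x²), which is how joint-model likelihoods are integrated over the random effects. A minimal NumPy illustration of the standard (non-adaptive) rule, independent of any joint-modelling package:

```python
import numpy as np

# Gauss-Hermite nodes/weights approximate integrals of the form
#   ∫ f(x) exp(-x^2) dx  ≈  Σ w_i f(x_i)
nodes, weights = np.polynomial.hermite.hermgauss(15)

def gaussian_expectation(f):
    """E[f(Z)] for Z ~ N(0, 1), via the change of variables x = z / sqrt(2)."""
    return (weights * f(np.sqrt(2.0) * nodes)).sum() / np.sqrt(np.pi)

# sanity checks: E[Z] = 0 and E[Z^2] = 1 up to quadrature error
m1 = gaussian_expectation(lambda z: z)
m2 = gaussian_expectation(lambda z: z ** 2)
```

With 15 nodes the rule is exact for polynomial integrands of degree up to 29, which is why a handful of nodes per random effect often suffices; the pseudo-adaptive variant recentres and rescales the nodes around each subject's posterior mode to achieve the same accuracy with even fewer nodes.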
The results in Table 2 also indicate the direction of the bias. For the two Cox analyses, a substantial number of datasets had the percentage of estimates less than the true parameter value below 40; as the true value of α is negative, this suggests that these two analysis methods tended to underestimate or even reverse the value of α. Conversely, for the two JM analyses, a substantial number of datasets had the percentage above 60, with some way above 60. This suggests that these two analysis methods tended to overestimate the value of α.

Figure 3 shows the corresponding results for the estimation of the group effect. It can be seen from Figure 3a that all analysis methods had 90% or more for the confidence interval coverage. Figure 3b shows that, except for a few occasions, the percentages of estimates less than the true parameter value were all between 40 and 60 for all the analysis methods. These results suggest that the performance of the different analysis methods was good in all cases for the estimation of the group effect.
Recall that all of the 32 sets of simulations were for a zero group effect. Simulations were repeated on four of the simulation sets for a non-zero group effect. These four sets were sets 3, 7, 19 and 23 in Table 2. These sets were chosen because they showed the worst results for the two Cox analyses and the two JM analyses. The reason that a non-zero group effect was not applied to all 32 simulation sets was that it was very time consuming to run the simulations. The non-zero group effect was taken to be -0.5. Table 3 shows the results of these simulations, which are very similar to the corresponding results in Table 2.

As a further assessment of the joint modelling methodology and the various software packages, the same analyses described above were applied to a set of real data collected in a study of risk of transition to psychosis. For the estimation of the group effect, while there were also substantial differences between estimates, the standard errors were similar. It is again of interest to note that there was considerable variation among the p-values, although all of them were non-significant at the 0.05 level.
"year": 2016,
"sha1": "32e46cdc069e14cb0805f5bca436058e1cbd99c1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.2582",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fc8ea721269c2031d7e2765b58d7c2f9275ab8e8",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
264348373 | pes2o/s2orc | v3-fos-license | Higher Levels of Galectin-1 and Galectin-3 in Young Subjects with Autism Spectrum Disorder Compared to Unaffected Siblings and Healthy Controls
Objective Despite being highly genetic, the etiology of autism spectrum disorder (ASD), has not yet been clarified. Recent research has focused on the role of neuroinflammation and immune system dysfunction in the pathophysiology of neurodevelopmental disorders including ASD. Galectin-1 and galactin-3 are considered among the biomarkers of neuroinflammation and there has been recent reports on the potential role of galectins in the etiology of neurodevelopmental disorders. However, there has been no study examining the relationship between ASD and galectin levels. Methods Current study aimed to investigate galectin-1 and galectin-3 serum levels in young subjects with ASD comparing with their unaffected siblings and healthy controls. Results We found significantly higher levels of galectin-1 in case group compared to both unaffected siblings and healthy controls, and higher levels of galectin-3 in case group compared to healthy controls. However, there was no significant association between galectin-1 and galectin-3 levels with the severity of ASD. Conclusion Findings of our study may support neuroinflammation hypothesis in the etiology of ASD and the potential role of galectin-1 and galectin-3 as biomarkers.
INTRODUCTION
Autism spectrum disorder (ASD) is a childhood-onset neurodevelopmental disorder characterized by deficits in social communication and interaction, repetitive behaviors, and a limited range of interests [1]. The prevalence of ASD has increased in recent years up to 1/68 [2]. Although a range of genetic and environmental factors associated with ASD have been identified in the literature, the pathogenesis and etiology of ASD remain unclear [3]. The latest evidence indicates a role of neuroinflammation and immune system dysfunction in ASD pathophysiology [4-6].
Several post-mortem and imaging studies revealed that patients with ASD had higher levels of microglia in certain brain regions which signal neuroinflammation [5,7,8].Also, numerous studies have reported increased blood concentrations of inflammatory cytokines in patients with ASD [9][10][11].Microglias play a major role in neurodevelopment and synaptic processing [12] and regulate synaptic pruning and synaptogenesis during the early stages of central nervous system (CNS) development [13].Hence, abnormalities in microglial function could be associated with etiopathogenesis of ASD [14].Research suggested that chronic microglial activation and abnormal brain inflammatory response could underlie cognitive dysfunction and neurodegenerative disorders [5,15].Abnormal cytokine levels in blood and cerebrospinal fluid are observed in children with ASD [16].These abnormalities were linked to behavioral problems and symptom severity [17].Hence, there has been an increased interest on the biomarkers of ASD which could be helpful for diagnostic process, monitoring prognosis and treatment responses, and early detection of disorder in individuals prone to ASD [18].Various proteins have been investigated as candidate biomarkers for studying the relationship between inflammation and ASD, although a specific biomarker has not been identified yet [19,20].
Recent research on the role of inflammation in pathogenesis of neurodegenerative and neurodevelopmental disorders primarily focused on galectins.Galectins are a protein family with 15 members present at different cells and tissues [21].As a family of β-galactoside-binding lectins, galectins play a major role in regulating immune and inflammatory responses [22].They participate in various biological processes including cell adhesion, migration, proliferation, transformation, apoptosis, angiogenesis, and immune responses [23].Galectins could regulate or strengthen the inflammatory response in neurological diseases, hence helping damaged CNS tissues regenerate [24].Galectin-1 and galectin-3 are the most prevalent forms of galectins [21].Galectin-1 is synthesized by various cell types in immune system such as T and B cells, macrophages, and microglias [25].Galectin-1 is a significant modulator of CNS homeostasis.Several studies reported that galectin-1 expression was altered in neurological diseases, which was linked to anti-inflammatory processes and neuroregeneration [25][26][27].Galectin-1 also facilitates neural protection by inhibiting microglias in CNS [25].
Galectin-3, on the other hand, participate in various cellular processes such as adhesion, activation, growth, differentiation, intercellular interactions, lifecycle, and apoptosis [28,29].Galectin-3 is also present in various types of immune cells except resting lymphocytes, and it acts as a pro-inflammatory agent [28,29].There is ample evidence concerning the role of galectin-3 in inflammation and degeneration in CNS [24].Research indicated that galectin-3 is an initiator of microglia activation and proliferation following CNS damage.It is stated that the interaction of galectin-3 and toll-like receptor 2 on microglia is necessary to initiate neuroinflammation [30,31].However, excessive galectin-3 levels lead to irregular secretion of proinflammatory cytokines [32].This could then amplify the glial activation hence resulting in a vicious cycle.This long term inflammatory response may then cause synaptic loss in CNS and neurodegeneration [15].
There has been a rising interest on the relationship between galectins and neurodegenerative diseases.However, very few studies have investigated the relationship between galectins and psychiatric disorders, and even fewer have focused on children with neurodevelopmental disorders.The aim of this study was to compare serum levels of galectin-1 and galectin-3 between patients with ASD, their unaffected siblings, and healthy controls.We hypothesize that serum galectin-1 and galectin-3 levels, which are potential neuroinflammatory biomarkers, will be higher in case group than their unaffected siblings and healthy controls.Similarly, it is also expected that unaffected siblings will have higher serum levels of galectin-1 and galectin-3 compared to healthy controls.Finally, it is hypothesized that serum galectin-1 and galectin-3 levels will be correlated to ASD severity and behavioral problems accompanying ASD.To the best of the researchers' knowledge, this was the first study to investigate galectin-1 and galectin-3 levels in patients with ASD.
Participants
The present study was conducted in Istanbul University Medical Faculty Child and Adolescent Outpatient Clinic between October 2021 and March 2022. The current study included three groups, a case group, an unaffected siblings group, and a healthy control group, to examine whether galectins could be endophenotypes of ASD and to compare inflammatory responses in unaffected siblings with the case group and healthy controls. At the beginning of the study, 108 ASD patients aged between 2−12 years and their healthy siblings closest in age were invited to the study. ASD diagnosis was made based on the Diagnostic and Statistical Manual of Mental Disorders 5th edition (DSM-5) diagnostic criteria. Diagnosis of ASD was confirmed by experienced faculty members (M.C and Y.T). For the case group, the following exclusion criteria were used: the presence of schizophrenia, bipolar disorders, metabolic, genetic, neurologic and/or gastrointestinal disorders, regular medication use due to chronic illnesses and/or any other psychiatric disorders, intake of supplements such as vitamins and fish oil during the last month, and active presence of infection or history of infection in the last month. For both control groups, in addition to the above exclusion criteria, those with attention deficit hyperactivity disorder (ADHD) and intellectual disabilities were also excluded from the study. Overall, 42 ASD patients, 42 unaffected siblings (control group 1, CG1), and 42 age- and gender-matched healthy children (control group 2, CG2) took part in the study. Healthy controls were recruited among 94 children during their visits to Istanbul University Medical Faculty Pediatrics Outpatient Clinics. Figure 1 shows the flowchart of the study subjects.
Assessment
Participants were given a survey form consisting of questions on sociodemographic and clinical information.
Psychiatric diagnoses for all participants were assessed by Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version (K-SADS-PL).For case group, Childhood Autism Rating Scale (CARS) was used to evaluate the severity of ASD symptoms.The reliability and validity of the K-SADS-PL/DSM-5 and the CARS have been established for the Turkish population [33,34].
Sociodemographic Data Form
A sociodemographic data form that was prepared by the researchers was used to obtain information about so-ciodemographic characteristics, developmental and medical history of the participants.
K-SADS-PL/DSM-5
The K-SADS-PL is a semi-structured clinical interview designed to assess current and past episodes of psychopathology in children and adolescents according to DSM-5 criteria [35,36].It was designed to promote earlier diagnosis of mental disorders in children and adolescents in a way that incorporates reports by both the child and parent and a clinician's clinical judgment.A reliability and validity study of K-SADS-PL/DSM-5 for the Turkish population was conducted by Ünal et al. [33].
CARS
It is a behavioral rating scale developed by Schopler et al. [37] to distinguish children with autism from children with other developmental disorders, specifically children with intellectual disabilities from children with autism.This scale is filled on the basis of information obtained from the family and observation of the child by the clinician.The scale, which consists of a total of 15 items, is a diagnostic assessment method that rates individuals on a scale ranging from normal to severe, and yields a composite score ranging from non-autistic to mildly autistic, moderately autistic, or severely autistic.CARS was adapted to Turkish by İncekaş Gassaloğlu et al. [34].
The study was approved by Istanbul University Medical Faculty Clinical Research Ethics Committee on December 4, 2020 (project ID: TTU-2021-38129).All participants and their parents were informed about the study and written consent was obtained from the parents.
Blood Collection and Quantification
Venous blood samples were collected from the participants after 8−12 hours of fasting.The blood samples were centrifuged at 4,000 rpm for 10 minutes, and the separated sera were aliquoted into Eppendorf tubes and stored at −80°C until the time of analysis.Serum galectin-1 and galectin-3 levels were measured with double-antibody sandwich enzyme-linked immunosorbent assay kits (Invitrogen) according to the manufacturer's instructions.Values of galectin-1 and galectin-3 are expressed in ng/ml.
Statistical Analysis
All analyses were carried out using SPSS Statistics, version 21 (IBM Co.). Descriptive statistics are reported as mean ± standard deviation. The Kolmogorov-Smirnov test was employed to evaluate whether galectin-1 and galectin-3 levels were normally distributed. Serum galectin-1 levels were non-normally distributed and hence were log-transformed. A one-way analysis of variance test was used to evaluate group differences in variables. Serum log-galectin-1 and galectin-3 levels of the case and control groups were compared with the multivariate analysis of covariance (MANCOVA) test. Age, sex, and body mass index (BMI), which are thought to affect biochemical parameters, were used as covariates. Bonferroni correction was used as a post-hoc test. Statistical significance was set at p < 0.05.
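The pipeline in this paragraph (normality check, log transform, omnibus comparison, Bonferroni-corrected post-hoc tests) can be sketched with SciPy. The group sizes match the study (42 per group), but the serum values below are simulated for illustration, and the age/sex/BMI covariate adjustment of the MANCOVA step is omitted for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical serum galectin-1 values (ng/ml) for the three groups
asd      = rng.lognormal(mean=2.0, sigma=0.4, size=42)
siblings = rng.lognormal(mean=1.7, sigma=0.4, size=42)
controls = rng.lognormal(mean=1.7, sigma=0.4, size=42)

# Kolmogorov-Smirnov check against a fitted normal; skewed data -> log-transform
ks = stats.kstest(asd, 'norm', args=(asd.mean(), asd.std(ddof=1)))
log_groups = [np.log(g) for g in (asd, siblings, controls)]

# omnibus one-way ANOVA on the log-transformed levels
f_stat, p_value = stats.f_oneway(*log_groups)

# Bonferroni-corrected pairwise t tests (3 comparisons)
pairs = [(0, 1), (0, 2), (1, 2)]
p_adj = [min(1.0, 3 * stats.ttest_ind(log_groups[i], log_groups[j]).pvalue)
         for i, j in pairs]
```

Multiplying each pairwise p-value by the number of comparisons (capped at 1) is the Bonferroni adjustment the paper applies after its omnibus test.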
RESULTS
In total, 42 patients with ASD (case group, mean age = 71.59 ± 28.65 months), 42 unaffected siblings (CG1, mean age = 73.92 ± 27.71 months) and 42 healthy controls (CG2, mean age = 86.26 ± 38.66 months) participated in the study. All participants were of Turkish origin. The case group and control group 2 each consisted of 8 girls and 34 boys, whereas control group 1 consisted of 20 girls and 22 boys. The majority of the subjects in the case group had a comorbid diagnosis of intellectual disability (n = 30; 71.4%) and ADHD (n = 22; 54.2%). The demographic and clinical characteristics of the three groups are shown in Table 1.
Post-hoc pairwise comparisons with the Bonferroni correction showed that serum galectin-1 levels were significantly higher in the case group compared to CG1 (p < 0.001) and CG2 (p < 0.001), but there was no significant difference between CG1 and CG2 (p = ns) (Fig. 2A). Serum galectin-3 levels were significantly higher in the case group compared to CG2 (p = 0.013), but there was no statistically significant difference between the case group and CG1 (p = 0.761). There was no significant difference between CG1 and CG2 (p = 0.309) (Fig. 2B). No significant correlations were found between serum galectin-1 and galectin-3 levels and CARS scores (p = 0.802 and p = 0.107, respectively). T test results showed that, within the case group, galectin-1 and galectin-3 levels were similar for those with and without ADHD comorbidity (p = 0.262 and p = 0.860, respectively).
DISCUSSION
In the current study we investigated serum galectin-1 and galectin-3 levels in children with ASD compared with their unaffected siblings and with sex-, age-, and BMI-matched healthy controls. To our knowledge, there has been no study on this subject so far, and the findings of the current study may have further research and clinical implications. We found that children with ASD had significantly higher galectin-1 serum levels than their unaffected siblings and healthy controls, and higher galectin-3 serum levels than healthy controls, which may indicate possible roles of galectin-1 and galectin-3 in the pathophysiology of ASD.
To date, serum galectin-1 and galectin-3 levels have been examined in a few psychiatric disorders, including schizophrenia, depression and ADHD. Several studies reported different findings on galectin levels in schizophrenia patients, such as higher galectin-3 levels in chronic schizophrenia patients [38], lower galectin-3 levels in first-episode schizophrenia patients and relapse cases but higher levels in cases with remission [39], and higher levels of galectin-1 in the unaffected siblings compared to both the patient group and the healthy control group, with higher galectin-3 levels in the sibling group relative to the patient group [40]. In another study using a large community sample, King et al. [41] found that galectin levels were positively linked to scores on a depressive symptoms scale. Previous literature also focused on the link between galectin-3 levels and ADHD [42-44]. For example, Wu et al. [42] found lower levels of galectin-3 in rodent models of ADHD. Similarly, another study by the same research group revealed that children with ADHD had lower levels of galectin-3 compared to healthy controls [42,43]. Conversely, Isık et al. [44] found that children with ADHD had higher levels of galectin-3 compared to healthy controls. These mixed results could be due to methodological differences in study designs. Considering the positive link between inflammation and galectin levels [40], the findings of the present study are generally compatible with the recent literature [38,41,44] suggesting a role of galectin-1 and galectin-3 in the pathophysiology of ASD. Because this is the first study investigating galectin-1 and galectin-3 levels in ASD, it may not be possible to directly compare our findings with the literature.
There has been a rising interest in immune system disorders and neuroinflammation due to the complex etiology of ASD [4,5]. A number of studies confirmed the positive association between ASD and a chronic and disrupted neuroinflammatory response [5,45]. In addition, post-mortem studies in ASD patients using positron emission tomography scans showed higher intensity of microglial cells in certain brain regions [5,6]. Microglias are key agents in synaptic development and functioning [12,19], and in synaptogenesis and neuronal pruning during early stages of CNS development [13]. Hence, disrupted microglial functioning could indicate ASD and other neurodevelopmental disorders [14,46]. Considering galectin-1 and galectin-3 expression in microglias in chronic illnesses affecting the CNS [25] and the role of inflammation in ASD pathogenesis [14], our results support the neuroinflammation hypothesis of ASD. Increased galectin-1 concentrations in the case group may indicate that galectin-1 has a role in the etiopathogenesis of ASD. Galectin-1 up-regulation in inflammatory cells inhibits microglia infiltration and migration, resulting in an anti-inflammatory effect [25]. Therefore, galectin-1 production could be an attempt to re-establish homeostasis in response to inflammatory processes [47]. In line with this idea, the higher galectin-1 levels in the case group in the present study could be due to chronic inflammation in ASD patients. Several studies support the anti-inflammatory role of galectin-1, whereas a limited number of studies found that inflammation and stress in the CNS led to an increase in galectin-1 levels [48,49]. In the present study, although a certain mechanism was not identified for the galectin-1 increase, higher galectin levels could be interpreted as an attempt to control the inflammatory response.
Research suggests that, similar to galectin-1, galectin-3 may also play a part in the etiopathogenesis of ASD. Patients with ASD had higher levels of proinflammatory cytokines compared to controls [9-11]. Galectin-3 is suggested to cause an increased inflammatory response by inhibiting production of the anti-inflammatory cytokine interleukin (IL)-10 [50]. An in vitro study reported that galectin-3 induced IL-6 expression and that galectin-3 inhibition reduced IL-1β expression. These findings suggest that galectin-3 regulates the expression levels of cytokines related to ASD [51]. Since these cytokines play a role in facilitating inflammation, increased galectin-3 may signal increased neuroinflammation in ASD. Hence, galectin-3 appears to be a significant potential biomarker of neurological diseases, both in terms of diagnosis and progression [21]. In line with the available research, the significantly higher galectin-3 levels in the case group compared to healthy controls but not to unaffected siblings may indicate a role of galectin-3 in the etiology of ASD.
Various studies showed that galectin-1 and galectin-3 levels increase in chronic diseases such as neoplastic disease, heart failure, kidney failure, hepatitis, and diabetes mellitus [52,53].In our study, participants went under a thorough medical examination, and none was diagnosed with a medical illness.Also, those with a history of serious medical illness was excluded from the study.Hence, it is unlikely that higher galectin levels in case group is due to the presence of a medical condition.Similarly, participants using medication and/or supplements were not included in the study.All three groups had similar BMIs.Hence, high serum galectin-1 and galectin-3 levels in ASD cases are not attributable to the differences in physical illness, medication use, or BMI.
The inclusion of unaffected siblings into the current study allowed us to examine whether galectins could be endophenotypes of ASD and to assess inflammatory responses in unaffected siblings.A recent study showed that children with ASD and their unaffected siblings had higher levels of proinflammatory and anti-inflammatory cytokines than healthy controls, suggesting immune deficiencies are a potential indicator of autism endophenotype [54].Although the findings of the present study showed higher galectin levels in unaffected siblings compared to healthy controls, the difference was not statistically significant.Hence, it may not be possible to consider galectins as potential endophenotypic markers of ASD based on the findings of the current study.
Our results showed that galectin-1 and galectin-3 levels were not associated with the severity of ASD symptoms measured by CARS. Similarly, there was no relationship between galectin levels and several clinical characteristics such as self-mutilation and history of regression. However, given the study limitations, including the relatively small sample size, it may not be possible to document any associations between galectin levels and the severity or associated factors of ASD. Further research is needed in this area.
The present study has some strengths and limitations. First of all, to our knowledge, no previous study was conducted on galectin-1 and galectin-3 levels in ASD. Also, the current study included unaffected siblings of the children with ASD in addition to healthy controls, which may allow evaluation of whether galectins could be potential endophenotypes in individuals genetically inclined to ASD. In addition, a number of factors that may influence galectin levels, such as chronic medical, neurological or gastrointestinal disorders and the use of supplements or medications, were used as exclusion criteria. Regarding the study limitations, it may be important to note that our sample consisted of Turkish participants, which limits the generalizability of the findings to other ethnic or racial groups. The relatively small sample size may have limited the statistical power of the study. Also, the presence of infection, an exclusion criterion, was clinically but not biochemically tested, which could be an important limitation. The cross-sectional, single-centered, and non-randomized nature of the present study is among the other limitations. The presence of intellectual developmental disorder and ADHD comorbidities in a significant portion of the children with ASD can be stated as another limitation of the study, since it may affect galectin levels. In addition, there has been no previous study on the levels of galectin-1 and galectin-3 in children with autism, making it difficult to compare the findings of this study with previous research. The absence of a clear mechanism for a role of increased galectin-1 and galectin-3 levels in the pathophysiology of autism, and inconsistent findings in previous studies examining the relationship between galectin-3 levels and other psychiatric disorders, can also be cited as other limitations.
The higher levels of galectin-1 in the case group compared to both unaffected siblings and healthy controls, and the higher levels of galectin-3 in the case group compared to healthy controls, may point to potential roles of galectins in the pathophysiology of ASD. Further studies with larger and more diverse samples are needed to determine the role of galectin-1 and galectin-3 in the pathophysiology of ASD, and whether they can serve as biomarkers or targets of intervention for ASD.
This study was supported by a grant from Istanbul University, Unit of Scientific Research (TTU-2021-38129).
Fig. 2 .
Fig. 2. Box plots representing the distribution of serum (A) galectin-1 and (B) galectin-3 levels in patients with ASD, siblings and healthy controls. Horizontal lines represent the mean value for each group. ANCOVA was used for comparisons between two groups. ASD, autism spectrum disorder; ANCOVA, analysis of covariance. *p < 0.05.
Table 1 .
Demographic and clinical characteristics of the case group and control groups. a ANOVA. b χ2 test. c Kruskal-Wallis test. d Fisher exact test. e Pearson chi-square test. *p < 0.05.
Table 2 .
Serum galectin-1 and galectin-3 levels of patients with ASD and controls. ASD, autism spectrum disorder; ANOVA, analysis of variance; ANCOVA, analysis of covariance; NS, not significant. a Covariates: age, sex, and body mass index. b Bonferroni. *Log-transformed variables.
"year": 2023,
"sha1": "aa6b41a12038e2b4db33fc51c8141d5d0ebd3f37",
"oa_license": "CCBYNC",
"oa_url": "https://www.cpn.or.kr/journal/download_pdf.php?doi=10.9758/cpn.23.1052",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cee5adbc4edd9276b688c871c08dd2bd9351a0c3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239768393 | pes2o/s2orc | v3-fos-license | Asymptotic vanishing of syzygies of algebraic varieties
The purpose of this paper is to prove Ein--Lazarsfeld's conjecture on asymptotic vanishing of syzygies of algebraic varieties. This result, together with Ein--Lazarsfeld's asymptotic nonvanishing theorem, describes the overall picture of asymptotic behaviors of the minimal free resolutions of the graded section rings of line bundles on a projective variety as the positivity of the line bundles grows. Previously, Raicu reduced the problem to the case of products of three projective spaces, and we resolve this case here.
Introduction
Throughout the paper, we work over an algebraically closed field k of arbitrary characteristic. Let X be a projective variety of dimension n, and let L be a very ample line bundle on X which gives rise to an embedding X ⊆ PH^0(X, L) = P^r, where r = h^0(X, L) − 1. Denote by S the homogeneous coordinate ring of P^r. For a coherent sheaf B on X, the Koszul cohomology group K_{p,q}(X, B; L) is the space of p-th syzygies of weight q. When B = O_X, we set K_{p,q}(X, L) := K_{p,q}(X, O_X; L). After the pioneering work of Green [21,22], there has been a considerable amount of work to understand the vanishing and nonvanishing of K_{p,q}(X, B; L).
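For reference, these Koszul cohomology groups can be written down explicitly: with V = H^0(X, L), the group K_{p,q}(X, B; L) is the cohomology at the middle term of the Koszul-type complex below. This is Green's standard definition, restated here in LaTeX.

```latex
\[
  \wedge^{p+1} V \otimes H^0\!\big(X, B \otimes L^{q-1}\big)
  \longrightarrow
  \wedge^{p} V \otimes H^0\!\big(X, B \otimes L^{q}\big)
  \longrightarrow
  \wedge^{p-1} V \otimes H^0\!\big(X, B \otimes L^{q+1}\big),
  \qquad V := H^0(X, L).
\]
```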
We say that L satisfies the property N k if K 0,1 (X, L) = 0 and K p,q (X, L) = 0 for 0 ≤ p ≤ k and q ≥ 2. The property N 0 means that X ⊆ P r is projectively normal, and the property N 1 means that the defining ideal of X in P r is generated by quadratic polynomials. Thus the property N k provides a natural framework to generalize classical results on defining equations of algebraic varieties to the results on their syzygies. Along this line, Green proved that if X is a smooth projective complex curve of genus g and deg L ≥ 2g+1+k, then L satisfies the property N k (see [21, Theorem (4.a.1)]). Green's celebrated theorem has stimulated further work in this direction, and several analogous statements for higher dimensional algebraic varieties have been established, e.g. [7,8,11,16,20,22,27]. On the other hand, Green-Lazarsfeld [23,24] and Ottaviani-Paoletti [26] called attention to the failure of the property N k . The main result of [26] asserts that K p,2 (P 2 , O P 2 (d)) = 0 for 3d − 2 ≤ p ≤ r d − 2, where r d = h 0 (P 2 , O P 2 (d)) − 1. In particular, O P 2 (d) does not satisfy the property N 3d−2 . As r d ≈ d 2 /2, the property N k for O P 2 (d) describes only a small fraction of the syzygies of the d-th Veronese embedding of P 2 . Eisenbud-Green-Hulek-Popescu observed in [18,Proposition 3.4] that a similar phenomenon occurs for other smooth projective surfaces, and Ein-Lazarsfeld proved in [12, Theorem A] that this always happens for all smooth projective varieties.
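To see Green's bound in action, consider the standard example of an elliptic normal quartic (this worked example is added here for illustration and is not part of the original text):

```latex
% X an elliptic curve (g = 1), \deg L = 4 = 2g + 1 + k with k = 1, so
% Green's theorem guarantees the property N_1: the embedding
% X \subseteq \mathbb{P}^3 is projectively normal with ideal generated
% by quadrics. Indeed, an elliptic normal quartic is the complete
% intersection of two quadrics, with minimal free resolution
0 \to S(-4) \to S(-2)^{\oplus 2} \to S \to S/I_X \to 0,
% so K_{1,1}(X, L) is 2-dimensional, while the Koszul syzygy between the
% two quadrics sits in degree 4 and gives K_{2,2}(X, L) \neq 0: the
% property N_2 fails, matching the sharpness of \deg L \geq 2g + 1 + k.
```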
It is an interesting problem to describe the overall asymptotic behaviors of K p,q (X, B; L) as the positivity of L grows. This type of question was first suggested by Green [21,Problem 5.13] and also considered by Ein-Lazarsfeld [11,Problem 4.4]. To set the stage for asymptotic syzygies of algebraic varieties, assume that X is smooth and B is a line bundle, and let L d := O X (dA + P), where A is an ample divisor and P is an arbitrary divisor on X. We suppose that d is sufficiently large, so in particular, L d is very ample. Put r d := h 0 (X, L d ) − 1. Elementary considerations of Castelnuovo-Mumford regularity show that K p,q (X, B; L d ) = 0 for q ≥ n + 2. We give some remarks on vanishing of K p,q (X, B; L d ) for large p. For simplicity, we assume that X is smooth and B is a vector bundle. If H i (X, B) = 0 for 1 ≤ i ≤ n − 1, then the duality theorem (cf. [12,Proposition 3.5], [21,Theorem 2.c.6]) says that Then our asymptotic vanishing theorem implies the following: For each 1 ≤ q ≤ n, there is a constant C > 0 such that if d is sufficiently large, then However, if H q−1 (X, B) ≠ 0 for some 2 ≤ q ≤ n, then K r d −q+1,q (X, B; L d ) ≠ 0 for large d (see [12,Remark 5.3]). When X is a smooth projective complex curve and B is a line bundle, vanishing of weight-one syzygies K p,1 (X, B; L d ) for large p is determined by the duality theorem and [13,Theorem B] (see also [29]). This implies Green-Lazarsfeld's gonality conjecture, and a higher dimensional generalization is treated in the work of Ein-Lazarsfeld-Yang [15].
Shortly after the asymptotic vanishing conjecture was proposed, Raicu showed in the appendix of [28] that the general case of the conjecture follows from the case of products of three projective spaces. The case that q = 1 in Theorem 1.1 is trivial. To prove Theorem 1.1, it is more than enough to establish the following: Theorem 1.2. Let k ≥ 1 be an integer, n 1 , . . . , n k , d 1 , . . . , d k be positive integers, and b 1 , . . . , b k be integers. Set As K p,q+1 (X, B; L) = K p,q (X, B + L; L), it is reasonable to assume that b < d in Theorem 1.2. But we do not need this assumption for the proof.
We give a sketch of the proof of Theorem 1.2 for the Veronese case. Let M O P n (d) be the kernel of the evaluation map H 0 (P n , O P n (d)) ⊗ O P n → O P n (d). It is well known that The main idea is to work on P n−1 × P 1 instead of P n via the finite map σ : P n−1 × P 1 → P n given by (ξ, z) → ξ + z, where P n is regarded as the Hilbert scheme of n points on P 1 and σ is the universal family. Note that σ * (O P n−1 ⊠ O P 1 (n − 1)) = O ⊕n P n . For each 2 ≤ q ≤ n + 1, the problem is equivalent to showing that An advantage of working on P n−1 × P 1 is that we can use the following short exact sequence which provides a way to proceed by induction on n. By considering the natural filtration of ∧ p+q−1 σ * M O P n (d) , we reduce the problem to proving that where a i = id − (p + q − 1 − i)n + d + n − 1. By induction on n, we can assume that H j (P n−1 , ∧ i M O P n−1 (d) (d)) = 0 for 0 ≤ i ≤ O(d j ) and j = q − 2, q − 1. By the Künneth formula, it is sufficient to check that But we have as soon as 0 ≤ p ≤ O(d q−1 ). Thus K p,q (P n , O P n (d)) = 0 for this range of p. The same argument works for the general Segre-Veronese case.
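The "well-known" identification invoked at the start of this sketch can be written out and sanity-checked on the twisted cubic; the formula below is the standard kernel-bundle description of syzygies, added here for illustration and consistent with the Betti number formula quoted in Section 4:

```latex
% Under the vanishing hypotheses of Proposition 2.1, for q \geq 1:
K_{p,q}(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d))
  \cong H^q\big(\mathbb{P}^n, \wedge^{p+q} M_{\mathcal{O}_{\mathbb{P}^n}(d)}\big).
% On \mathbb{P}^1 one has M_{\mathcal{O}(d)} \cong \mathcal{O}(-1)^{\oplus d}, so
k_{p,1}(\mathbb{P}^1, \mathcal{O}(d))
  = \binom{d}{p+1}\, h^1\big(\mathbb{P}^1, \mathcal{O}(-p-1)\big)
  = p \binom{d}{p+1}.
% For d = 3 this gives k_{1,1} = 3 and k_{2,1} = 2, the familiar Betti
% numbers of the twisted cubic (three quadrics, two linear syzygies).
```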
There has been a great deal of attention to the syzygies of Veronese or Segre-Veronese embeddings, e.g. [4,6,7,8,19,25,26,28,30]. The syzygies of these varieties have connections to representation theory and combinatorics. It would be exceedingly interesting to know whether the method of the present paper could make progress on the study of the Veronese or Segre-Veronese syzygies.
The paper is organized as follows. After reviewing basic necessary facts in Section 2, we prove Theorem 1.2 in Section 3, where we also show Theorem 1.1 following Raicu's argument in [28]. Section 4 is devoted to presenting some open problems on asymptotic syzygies of algebraic varieties.
Acknowledgements. The author is very grateful to Lawrence Ein, Sijong Kwak, and Wenbo Niu for inspiring discussions and valuable comments, and he is indebted to Daniel Erman for introducing some references. The author would like to thank the referees for careful reading of the paper.
Preliminaries
We collect basic facts which are used to prove the main theorems of the paper.
2.1. Koszul Cohomology. Let X be a projective variety, B be a coherent sheaf on X, and L be a very ample line bundle on X, which gives an embedding X ⊆ PH 0 (X, L) = P r .
Let S := S(H 0 (X, L)) = ⊕ m≥0 S m H 0 (X, L) be the homogeneous coordinate ring of P r , and R = R(X, B; L) := ⊕ m∈Z H 0 (X, B ⊗ L m ) be the graded section S-module of B associated to L. Denote by S + ⊆ S the irrelevant maximal ideal, and define the Koszul cohomology group to be K p,q (X, B; L) := Tor S p (R, S/S + ) p+q . Notice that K p,q (X, B; L) is the vector space of p-th syzygies of weight q and it is the cohomology of the Koszul-type complex ∧ p+1 H 0 (X, L) ⊗ R q−1 → ∧ p H 0 (X, L) ⊗ R q → ∧ p−1 H 0 (X, L) ⊗ R q+1 . Now, let L be a globally generated line bundle on a projective variety X. Consider the evaluation map ev : H 0 (X, L) ⊗ O X −→ L, which is surjective since L is globally generated. Denote by M L the kernel bundle of the evaluation map ev. Then we obtain a short exact sequence of vector bundles on X: 0 → M L → H 0 (X, L) ⊗ O X → L → 0. (2.1) We use the following well-known fact to compute the Koszul cohomology group.
Proposition 2.1. Let X be a projective variety, B be a coherent sheaf on X, and L be a very ample line bundle on X.
Proof. By taking wedge product of (2.1), we have a short exact sequence By using the Koszul-type complex and chasing through the diagram, we see that See [3, Section 2.1] or [11, Section 1] for the complete proof. Now, from (2.2), we find that so the first assertion holds. Thus K p,q (X, B; L) = 0 for 0 ≤ p ≤ p 0 if and only if But we get from (2.2) that Thus the second assertion holds.
2.2. Filtrations for Wedge Products.
Lemma 2.2. For a short exact sequence 0 → U → V → W → 0 of vector bundles on a projective variety X and an integer k ≥ 1, there is a natural filtration ∧ k V = F 0 ⊇ F 1 ⊇ · · · ⊇ F k ⊇ F k+1 = 0 with graded pieces F i /F i+1 ≅ ∧ i U ⊗ ∧ k−i W.
2.3. Divided and Symmetric Powers.
Let V be a finite dimensional vector space over k.
For an integer n ≥ 1, the symmetric group S n naturally acts on the tensor power T n V := V ⊗n by permuting the factors. The divided power D n V of V is the subspace of invariants (T n V) S n , while the symmetric power S n V of V is the quotient of T n V by the subspace spanned by σ(ω) − ω for all ω ∈ T n V and σ ∈ S n . When n = 0, we set D 0 V = S 0 V = k. By composing the inclusion of D n V into T n V with the projection onto S n V , we have a natural map D n V → S n V . This map is an isomorphism in characteristic zero, but it may be neither injective nor surjective in general. We can also define divided and symmetric powers of vector bundles on projective varieties. We refer to [2, Section 3] for more details.
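A minimal example of this failure in positive characteristic (our own illustration): take char k = 2 and V two-dimensional.

```latex
% V = \langle x, y \rangle over a field of characteristic 2. The
% invariants D^2 V = (T^2 V)^{S_2} have basis
x \otimes x, \qquad y \otimes y, \qquad x \otimes y + y \otimes x,
% and the natural map D^2 V \to S^2 V sends these to
x^2, \qquad y^2, \qquad 2xy = 0,
% so it kills x \otimes y + y \otimes x and its image misses xy:
% the map is neither injective nor surjective.
```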
Let 0 → U → V → W → 0 be a short exact sequence of vector bundles on a projective variety X with rank W = 1. Let π : P(V * ) → X be the natural projection. Note that By applying π * , we get a short exact sequence on X: This construction was suggested by Lawrence Ein, and a purely algebraic construction of this kind of exact sequence can be found in [1, Corollary V. 1.15]. By taking the dual, we obtain a short exact sequence on X: We remark that S k+1 V → S k V ⊗ W may not be surjective in positive characteristic.
Now, let C be a smooth projective curve, and L be a line bundle on C. For an integer k ≥ 0, the symmetric group S k+1 naturally acts on the (k + 1)-th ordinary product C k+1 of C by permuting the components, and the line bundle L ⊠(k+1) on C k+1 carries a natural S k+1 -linearization; we write T k+1 (L) for the resulting invariant pushforward to the symmetric product C k+1 /S k+1 . If C = P 1 and L = O P 1 (d) with d ≥ 1, then the n-th symmetric product of P 1 is P n and T n (O P 1 (d)) = O P n (d). By an arbitrary characteristic version of Hermite reciprocity (see [2, Remark 3.2]), we have H 0 (P n , T n (O P 1 (d))) = D n H 0 (P 1 , O P 1 (d)) = D n (S d H 0 (P 1 , O P 1 (1))) = S d (D n H 0 (P 1 , O P 1 (1))) = S d H 0 (P n , T n (O P 1 (1))).
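As a dimension count confirming this Hermite reciprocity chain (our own verification, not part of the original):

```latex
% Both ends compute h^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d)).
% Since \dim D^n W = \binom{\dim W - 1 + n}{n}:
\dim D^n\big(S^d H^0(\mathcal{O}_{\mathbb{P}^1}(1))\big)
  = \binom{(d+1)-1+n}{n} = \binom{n+d}{n},
\qquad
\dim S^d\big(D^n H^0(\mathcal{O}_{\mathbb{P}^1}(1))\big)
  = \binom{(n+1)-1+d}{d} = \binom{n+d}{d},
% and indeed \binom{n+d}{n} = \binom{n+d}{d}.
```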
2.4. Tautological Bundles on Projective Spaces. Recall that σ : P n−1 × P 1 → P n is the finite map of degree n given by (ξ, z) → ξ + z by viewing P n as the Hilbert scheme of n points on P 1 . For any integer k, the tautological bundle on P n is defined as E n,O P 1 (k) := σ * (O P n−1 ⊠ O P 1 (k)), which is a vector bundle of rank n. The tautological bundles on symmetric products of curves play an important role in the study of secant varieties of curves (see [16]).
Proof. Since σ : P n−1 × P 1 → P n is a finite map and σ * O P n (1) = O P n−1 (1) ⊠ O P 1 (1), we have H i (P n , E n,O P 1 (k) (m)) = H i (P n−1 × P 1 , O P n−1 (m) ⊠ O P 1 (k + m)) for any i ≥ 0 and m ∈ Z. The Künneth formula then shows that the intermediate cohomology H i (P n , E n,O P 1 (k) (m)) vanishes for 0 < i < n when −1 ≤ k ≤ n − 1. By the Horrocks criterion, the first assertion of the lemma follows. Now, we observe that h 0 (P n , E n,O P 1 (k) ) = k + 1, and h 0 (P n , E n,O P 1 (k) (1)) = n(k + 2) = (n + 1)(k + 1) + n − 1 − k. This implies the second assertion of the lemma.
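The displayed formulas of Lemma 2.3 did not survive extraction; the surviving section counts h 0 (E) = k + 1 and h 0 (E(1)) = n(k + 2) are consistent with the following splitting type for −1 ≤ k ≤ n − 1 (our inference, stated here as a hedged reconstruction):

```latex
E_{n, \mathcal{O}_{\mathbb{P}^1}(k)}
  \;\cong\; \mathcal{O}_{\mathbb{P}^n}^{\oplus (k+1)}
  \oplus \mathcal{O}_{\mathbb{P}^n}(-1)^{\oplus (n-k-1)}.
% Check: h^0 = k + 1, and
% h^0(E(1)) = (k+1)(n+1) + (n-k-1) = n(k+2).
% For k = n - 1 this recovers
% \sigma_*(\mathcal{O}_{\mathbb{P}^{n-1}} \boxtimes \mathcal{O}_{\mathbb{P}^1}(n-1))
%   = \mathcal{O}_{\mathbb{P}^n}^{\oplus n},
% as quoted in the introduction.
```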
Remark 2.4. The following alternative approach to Lemma 2.3 was suggested by Lawrence Ein. Let D n be the image of the injective map P n−1 × P 1 → P n × P 1 given by (ξ, z) → (ξ + z, z). Note that O P n ×P 1 (−D n ) = O P n (−1) ⊠ O P 1 (−n). For any integer k, we have a short exact sequence on P n × P 1 : Let p : P n × P 1 → P n be the projection to the first component. When −1 ≤ k ≤ n − 1, by applying p * , we obtain a short exact sequence on P n : 1)). When k ≤ −2 or k ≥ n, it is easy to check that E n,O P 1 (k) does not split.
Lemma 2.5. Let Y be a projective variety, and σ : Y × P n−1 × P 1 → Y × P n be the finite map given by (y, ξ, z) → (y, ξ + z). If M is a vector bundle on Y × P n , then H q (Y × P n−1 × P 1 , σ * M ⊗ (O Y ⊠ O P n−1 ⊠ O P 1 (n − 1))) = H q (Y × P n , M) ⊕n for any q ≥ 0.
Asymptotic Vanishing Theorem
The aim of this section is to prove Theorem 1.1. First, we construct a short exact sequence of vector bundles, which allows us to give a quick proof of Theorem 1.2 by induction on dimension. We then explain how one can deduce Theorem 1.1 from Theorem 1.2.
3.1. Short Exact Sequence. Let C be a smooth projective curve, and L C be a line bundle on C. Let Y be a smooth projective variety, and L Y be a line bundle on Y . Fix an integer k ≥ 0. We have a short exact sequence on Y × C k+1 : We can view C k+1 = {effective divisors of degree k + 1 on C} as the Hilbert scheme of k + 1 points on C. Let σ : Y × C k × C → Y × C k+1 be the finite morphism given by (y, ξ, z) → (y, ξ + z), and p : Y × C k × C → C be the projection to the last component. By taking σ * of the above exact sequence on Y × C k+1 , we get a short exact sequence on Y × C k × C: By taking p * and considering (2.4), we get a short exact sequence on C: Then we obtain the following commutative diagram with exact sequences on Y × C k × C: Now, assume that C = P 1 and L C = O P 1 (d) with d ≥ 1. For an integer n ≥ 1, we have Since Then the left vertical short exact sequence in the above commutative diagram gives a short exact sequence on Y × P n−1 × P 1 : When Y is a point, the exact sequence (3.1) is When n = 1, the finite map σ is an isomorphism and the exact sequence (3.1) is
3.2. Case of Product of Projective Spaces. In this subsection, we prove Theorem 1.2.
Recall that We put and n := n k . Then As md + b ≥ 0 for any m > 0, we have H i (X, B ⊗ L m ) = 0 for i > 0 and m > 0.
By Proposition 2.1, for p ≥ 0, we have We proceed by induction on n 1 + · · · + n k . If n 1 + · · · + n k = 1, then q = 2 and the problem is to check the cohomology vanishing the desired cohomology vanishing immediately follows.
Assume that n 1 + · · · + n k ≥ 2. Fix 0 ≤ p ≤ (1/n 1 ! · · · n k !)(d q−1 + bd q−2 ). By Lemma 2.5, it is sufficient to show the cohomology vanishing on Y × P n−1 × P 1 : where σ : Y × P n−1 × P 1 → Y × P n is the finite map given by (y, ξ, z) → (y, ξ + z). By considering the short exact sequence (3.1) and applying Lemma 2.2 to ∧ p+q−1 σ * M L Y ⊠O P n (d k ) , we can reduce the problem to proving the following: By the Künneth formula, it is equivalent to showing that By induction and Proposition 2.1, we can assume that immediately follows from (3.4). It only remains to check (3.3). If q = 2, then Thus so the cohomology vanishing (3.3) holds for q = 2. Next, we consider the case that q ≥ 3. If since p ≤ (1/n 1 ! · · · n k−1 !n!)(d q−1 + bd q−2 ). Thus so the cohomology vanishing (3.3) holds in this case as well. We have shown (3.2) and (3.3), and they imply K p,q (X, B; L) = 0 as desired. We complete the proof of Theorem 1.2.
3.3. General Case. In this subsection, we prove Theorem 1.1. As we mentioned in the introduction, Raicu proved that Theorem 1.1 can be deduced from Theorem 1.2 for the case k = 3 (see [28,Corollary A.5]). Here we reproduce his proof for completeness. Recall that X is an n-dimensional projective variety, B is a coherent sheaf, A is an ample divisor, P is an arbitrary divisor on X, and L d := O X (dA + P ) for an integer d ≥ 1. Our aim is to show that for each 2 ≤ q ≤ n + 1 (the case that q = 1 is trivial), there is a constant C > 0 depending on X, A, B, P such that if d is sufficiently large, then We can choose integers a 1 , a 2 , a 3 ≥ 1 with gcd(a 1 , a 2 + a 3 ) = 1 such that A 1 := a 1 A, A 2 := a 2 A + P, A 3 := a 3 A − P are very ample and the natural maps are surjective for all m 1 , m 2 , m 3 > 0. We may assume that a 1 ≫ a 2 + a 3 . Note that a 1 , a 2 , a 3 depend only on X, A, P . As d is sufficiently large, we can find integers d 1 , d 2 ≥ 1 such Next, consider the commutative diagrams Clearly, n 1 , n 2 , n 3 depend only on X, A, P . Notice that P r is a linear subspace of P N by the surjectivity of (3.6). We can regard B as a coherent sheaf on Y, P r , and P N . The syzygies of B on P N are the syzygies of B on P r tensored with a Koszul complex of linear forms. By letting we see that ) be the total coordinate ring of Y = P n 1 × P n 2 × P n 3 with the usual Z 3 -grading. Then is a finitely generated graded S-module. Consider the minimal free resolution of M : for some finite dimensional vector space F i,b j over k and finite subsets S i ⊆ Z 3 . Let Note that b depends only on X, A, B, P . Now, fix 2 ≤ q ≤ n + 1 (≤ n 1 + n 2 + n 3 + 1). By Theorem 1.2, for 0 ≤ i ≤ m and b j ∈ S i , we have ). Recall that the numbers a 1 , a 2 , a 3 , n 1 , n 2 , n 3 , b depend only on X, A, B, P but not on d.
By using Proposition 2.1 and chasing through the above exact sequence, we see that (3.7) and (3.8) imply (3.9).
Open Problems
In this section, we discuss some open problems and conjectures. Let X be a smooth projective variety of dimension n, and B be a coherent sheaf on X. Fix an ample divisor A and an arbitrary divisor P on X, and put L d := O X (dA + P ) for an integer d ≥ 1.
For each 2 ≤ q ≤ n + 1, it would be extremely interesting to find an explicit constant c > 0 in terms of X, A, B, P , and q, d such that if d is sufficiently large, then K p,q (X, B; L d ) = 0 for 0 ≤ p ≤ c and K c+1,q (X, B; L d ) = 0.
However, this problem is already very difficult for q = 2. A generalization of Mukai's conjecture (cf. [11,Conjecture 4.2]) asks whether the property N d holds for K X + (n + 2 + d)A when X is a smooth projective complex variety. But it is widely open even when n = 2 and d = 0. Moreover, Fujita's conjecture, which predicts that K X + (n + 2 + d)A is very ample for d ≥ 0, is unknown when n ≥ 3. However, when A is very ample, Ein-Lazarsfeld established in [11, Theorem 1] that K p,q (X, K X + (n + 1 + d)A) = 0 for 0 ≤ p ≤ d and q ≥ 2.
It is reasonable to expect that this result extends to q ≥ 3.
Problem 4.1. Let X be a smooth projective complex variety of dimension n, and A be a very ample divisor on X. For each 2 ≤ q ≤ n + 1 and d ≥ 0, find an explicit polynomial P (x) of degree q − 1 such that K p,q (X, K X + (n + 1 + d)A) = 0 for 0 ≤ p ≤ P (d). One can also consider the effective asymptotic vanishing problem for the syzygies of products of projective spaces.
Conjecture 4.3 (Ein-Lazarsfeld).
Fix n ≥ 1, b ≥ 0, and 0 ≤ q ≤ n. If d ≥ b + q + 1, then Notice that the conjecture gives the precise vanishing range because Ein-Erman-Lazarsfeld [10, Theorem 2.1] (see also [14,Theorem 2.1]) proved that K p,q (P n , O P n (b); O P n (d)) = 0 for all In [26], Ottaviani-Paoletti conjectured that if n ≥ 3, d ≥ 3, then O P n (d) satisfies the property N 3d−3 . They also consider the cases that n ≤ 2 or d ≤ 2, but these cases are already settled. By [26,Theorem 1.6], the property N 3d−3 for O P n (d) is implied by that K p,2 (P n , O P n (d)) = 0 for 0 ≤ p ≤ 3d − 3. Thus Conjecture 4.3 for b = 0 and q = 2 is equivalent to Ottaviani-Paoletti's conjecture. At this moment, we only know that O P n (d) satisfies the property N d+1 by Bruns-Conca-Römer [7], and a small change of the proof of Theorem 1.2 yields that O P 3 (d) satisfies the property N d+2 . A new idea might be needed to solve Conjecture 4.3.
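For intuition on how small the N_{3d−3} range is compared to the full resolution, the window of the Ottaviani–Paoletti nonvanishing theorem for P² quoted in the introduction can be tabulated (a quick numeric sketch added here, not part of the original):

```python
from math import comb

def r(n, d):
    # r_d = h^0(P^n, O(d)) - 1 = C(n + d, n) - 1
    return comb(n + d, n) - 1

# Ottaviani-Paoletti: K_{p,2}(P^2, O(d)) != 0 for 3d - 2 <= p <= r_d - 2,
# so the property N_k can hold at most up to k = 3d - 3, while the
# resolution has length on the order of r_d ~ d^2 / 2.
for d in range(3, 7):
    rd = r(2, d)
    print(d, rd, (3 * d - 2, rd - 2))
```

For d = 5, for instance, r_d = 20 and the nonvanishing window is [13, 18], already much longer than the interval [0, 12] where N_k-type vanishing can hold.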
It is also a fascinating problem to study the asymptotic behavior of the Betti numbers k p,q (X, B; L d ) := dim K p,q (X, B; L d ) when d is sufficiently large (see [12,Problem 7.3]). In this direction, Ein-Erman-Lazarsfeld conjectured that for each 1 ≤ q ≤ n, the Betti numbers k p,q (X, L d ) converge to a normal distribution (see [9,Conjecture B], [14,Conjecture 3.2]). This normal distribution conjecture has not been verified even for P 2 and P 1 × P 1 , and it seems that the conjecture is already very challenging for Veronese embeddings (cf. [5,6]). Conjecture 4.4 (Ein-Erman-Lazarsfeld). Fix n ≥ 1 and 1 ≤ q ≤ n. Then there is a normalizing function F q (d) such that F q (d) · k p d ,q (P n , O P n (d)) −→ e −a 2 /2 as d → ∞ and p d → r d /2 + a √ r d /2.
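For the curve case one can at least observe the predicted unimodal, centered shape numerically: by the Eagon–Northcott resolution of the rational normal curve, k_{p,1}(P^1, O_{P^1}(d)) = p·C(d, p+1) (a standard formula, used here only as an illustration we have added), and its peak sits near p ≈ r_d/2 = d/2:

```python
from math import comb

def betti_p1(d):
    # Betti numbers k_{p,1}(P^1, O(d)) = p * C(d, p+1) of the
    # degree-d rational normal curve in P^d (Eagon-Northcott).
    return [p * comb(d, p + 1) for p in range(1, d)]

d = 100
b = betti_p1(d)
# value of p achieving the largest Betti number (index 0 <-> p = 1)
peak = 1 + max(range(len(b)), key=lambda i: b[i])
# r_d = d for P^1, so the conjectural center is r_d / 2 = 50
print(peak)  # peaks very close to d/2
```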
Notice that k_{p,q}(P^n, O_{P^n}(d)) = (1/n) · h^q(P^{n−1} × P^1, ∧^{p+q} σ^*M_{O_{P^n}(d)} ⊗ (O_{P^{n−1}} ⊠ O_{P^1}(n − 1))) for p ≥ 0 and 1 ≤ q ≤ n. It is tempting to wonder if there is a clever way to compute this Betti number.
We refer to [6] and [12,14] for more problems and conjectures on syzygies of Veronese embeddings and asymptotic syzygies of algebraic varieties, respectively. | 2021-10-26T01:16:50.778Z | 2021-10-24T00:00:00.000 | {
"year": 2021,
"sha1": "66e7802dc40159d6eda8eb0e910ecb062af80087",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7dd817a7e9aea9eaab3be859ca8593df55224bc8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
39929220 | pes2o/s2orc | v3-fos-license | Comparing the Analgesic Effects of Lidocaine and Lidocaine with Ketamine in Intravenous Regional Anesthesia on Postoperative Pain
Background and purpose: The main and best-known complication of intravenous regional anesthesia (IVRA) is systemic toxicity due to local anesthetics, which occurs following accidental tourniquet release immediately after injection. This study aims to evaluate the effect of adding ketamine to a lower dose of lidocaine on reducing the dose and side effects of lidocaine. Materials and Methods: In this randomized clinical trial, 60 patients undergoing surgery of the upper limb below the elbow under IVRA were randomly divided into two groups. In group 1 (control group), 40 ml lidocaine 0.5% (200 mg) and in group 2 (intervention group), 40 ml lidocaine 0.25% (100 mg) plus 40 mg of ketamine 0.1% were injected intravenously. Outcomes included postoperative pain at 15, 30 and 60 minutes after surgery. The pain of the patients was assessed using the Visual Analogue Scale (VAS score). Results: Both groups were comparable in demographic and surgical parameters. The average pain based on the VAS score at 15, 30 and 60 minutes after surgery was similar in both groups, with no significant difference between them (p > 0.05). Moreover, postoperative complications including unconsciousness, restlessness, dizziness, nausea, vomiting, tinnitus, seizure, delirium and hallucination showed no significant differences (p > 0.05). Conclusion: Results of this study showed that adding ketamine to a lower dose of lidocaine in patients receiving IVRA provided postoperative analgesia comparable to standard-dose lidocaine while reducing the lidocaine dose and the likelihood of systemic toxicity, without causing significant adverse effects.
Introduction
Using general anesthesia for surgery has always carried complications such as cardiovascular complications (dangerous arrhythmias), pulmonary complications (possibility of apnea, airway obstruction, aspiration), malignant hyperthermia, and so on. In such cases, alternative methods such as regional anesthesia are used. Regional anesthesia includes spinal anesthesia, epidural anesthesia, peripheral nerve blocks and intravenous regional anesthesia (IVRA) [1][2].
IVRA, created by August Karl Gustav Bier about 100 years ago, is a simple, safe, and effective technique of providing anesthesia for short surgical procedures on the hand and forearm for an anticipated duration of 60 to 90 minutes [3]. It is an ideal technique for short procedures on extremities. In the first half of the 20th century, IVRA evolved slowly, but in recent years it has advanced quickly [4]. The benefits of IVRA, which is feasible for upper and lower limb surgeries, include its simplicity and speed, high success rate, fast recovery, control over the extent of the block and muscle relaxation, and availability for short open or closed surgeries. In addition, in this method, the risks of general anesthesia such as apnea, aspiration and cardiovascular complications can be avoided [5][6]. Tourniquet pain and poor postoperative analgesia are common problems associated with IVRA [4]. Among the most important complications of this method are sensitivity to local anesthetics, ischemic limb disease, sickle cell crisis, and infection [2].
The most common drugs used in this method are local anesthetics, especially lidocaine [7]. Lidocaine blocks fast voltage-gated sodium channels in the cell membrane of postsynaptic neurons, preventing depolarization and inhibiting the generation and propagation of nerve impulses [8].
Various adjuncts (eg, opioids, nonsteroidal anti-inflammatory drugs, clonidine, dexmedetomidine, ketorolac, dexamethasone, muscle relaxants, neostigmine, ketamine and magnesium) have been tried to hasten the onset, maintain adequate muscle relaxation, reduce tourniquet pain, and increase the duration of analgesia [9], and to reduce the dose of anesthetic drugs in order to reduce toxicity [10]. Ketamine is a well-known analgesic drug that acts at least in part as an N-methyl-D-aspartate (NMDA) receptor antagonist [11]. It has peripheral analgesic properties and increases the duration and quality of anesthesia produced by local anesthetics [12].
Although randomized double-blind clinical trials have shown the anesthetic effects of ketamine in regional anesthesia in combination with lidocaine [13][14][15], we designed this study to evaluate the effect of ketamine when added to a lower dose of lidocaine in IVRA.
Method
This study was a randomized clinical trial conducted in 2013 in Shahid Rajaee Hospital of Qazvin and approved by the research ethics committee of Qazvin University of Medical Sciences. Written informed consent was provided by all patients. The sample size, calculated with 80% power and a 95% confidence level based on the reference study [5], was 60.
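The paper does not report the variance or effect-size assumptions behind this calculation. As an illustration only, the usual normal-approximation formula for comparing two means gives 30 per group (total 60) when the standardized effect size is taken to be 0.725, a hypothetical value chosen here to match the reported total:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    # Normal-approximation sample size for a two-sample comparison of
    # means: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2,
    # where delta is the standardized effect size (difference / SD).
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96
    z_b = z.inv_cdf(power)           # ~0.84
    return ceil(2 * (z_a + z_b) ** 2 / effect_size ** 2)

# hypothetical standardized effect size; not reported in the paper
n = n_per_group(0.725)
print(n, 2 * n)  # 30 per group, 60 in total
```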
Inclusion criteria were age of 20 to 50 years, American Society of Anesthesiologists Physical Status I or II, and elective surgery in the upper limb below the elbow. Exclusion criteria were an operation time longer than one hour, history of hypersensitivity to lidocaine, history of peripheral vascular disease, unstable fractures, significant soft tissue injuries, sickle-cell anemia, seizure, cardiac dysrhythmia and psychiatric and neurologic disorders.
Patients were randomly divided into two groups of 30 patients each to receive either 40 ml 0.5% lidocaine (200 mg) (group 1: control group) or 40 ml 0.25% lidocaine (100 mg) plus 40 mg 0.1% ketamine (group 2: intervention group). In this study, assignment of patients into the two groups (control and intervention) was implemented using blocked randomization (Figure 1).
The day before surgery, all patients were briefed about the IVRA and were instructed in how to indicate their pain using the Visual Analogue Scale (VAS) (0 = no pain, 10 = the most intense pain).
Routine monitoring including pulse oximetry, electrocardiography, and noninvasive blood pressure measurement was established. Before starting the block, 2 cannulae were placed, one in the dorsum of the operative hand and one in the other hand for drug and crystalloid infusion. 15 minutes before the start of surgery, all patients received 2 mg midazolam. The operative arm was elevated for 2 to 3 minutes to complete venous blood drainage and then exsanguinated with an Esmarch bandage. A double-cuffed tourniquet was placed on the operative arm and the proximal cuff was inflated up to 150 mm Hg above the patient's systolic blood pressure, and the Esmarch bandage was removed. Isolation of blood circulation was confirmed by loss of the radial pulse and loss of the pulse oximetry tracing from the fingers of the operative hand. Intravenous regional anesthesia was achieved in group 1 by injecting 40 ml of 0.5% lidocaine (200 mg) and in group 2 by injecting 40 ml of 0.25% lidocaine plus 40 mg of 0.1% ketamine through the cannula placed on the dorsum of the related hand over 90 seconds by an anesthesiologist blinded to the injected drugs. After completion of sensory and motor block, approximately 10 minutes after the injection, the distal cuff was inflated up to 250 mmHg and the proximal cuff was released, and then the operation was started. Oxygen saturation, respiratory rate, heart rate and blood pressure were continuously monitored, and any intraoperative adverse effects such as hypoxemia (SpO2 < 92%), respiratory depression (respiratory rate < 10 breaths/min), bradycardia (heart rate < 50 beats/min) and hypotension (blood pressure > 20% below baseline) were recorded and treated. The tourniquet was not deflated before 30 minutes and was not left inflated longer than 90 minutes. When the operation was completed, deflation of the tourniquet was performed by the technique of cyclic deflation. Assessment of pain was made on the basis of the VAS score at 15, 30 and 60 minutes postoperatively (0 = no pain and 10 = the most intense pain felt by the patient). Any adverse effects in the first 12 hours after surgery including unconsciousness, agitation, dizziness, postoperative nausea and vomiting, tinnitus, seizure, hallucination and delirium were recorded in both groups.
Data were presented as mean ± SD, numbers, ranges and percentages. To analyze the data, the t-test and chi-square test were used, and a P value < 0.05 was considered significant.
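The raw complication counts behind the tables are not reproduced in this text, so the 2×2 table below is hypothetical; the block only illustrates the chi-square statistic used for such comparisons (judged against the 3.84 critical value for p < 0.05 at 1 degree of freedom):

```python
def chi_square_2x2(table):
    # table = [[a, b], [c, d]]: observed counts for two groups (rows)
    # by outcome present/absent (columns).
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# hypothetical counts: e.g. 4/30 vs 2/30 patients with any side effect
stat = chi_square_2x2([[4, 26], [2, 28]])
print(round(stat, 3))  # well below the 3.84 cutoff at 1 df
```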
Results
Both groups were comparable in demographic and surgical parameters. The mean age of the patients was 31.80 ± 10.9 years in group 1 and 33.63 ± 10.0 years in group 2. There were 26 (86.7%) men in group 1 and 24 (82%) in group 2.
There were no statistical differences between the two groups in operation type and time (Tables 1 and 2). There were also no significant differences between the two groups in postoperative complications (p > 0.05) (Table 3). According to the results of this study, postoperative pain based on VAS scores at 15, 30 and 60 minutes after the operation was similar in both groups, with no significant difference between them (p > 0.05) (Table 4).
Discussion
In this study, the addition of ketamine to a lower dose of lidocaine in patients receiving IVRA provided adequate postoperative analgesia without causing significant side effects.
Lidocaine is the most commonly used drug in IVRA, but because of its central nervous system and cardiovascular complications, it should be used cautiously. The most important complication of lidocaine is systemic toxicity due to accidental deflation of the tourniquet after injection, which causes symptoms such as dizziness, tinnitus, unconsciousness and seizure [15][16]; thus, we added ketamine to a lower dose of lidocaine in order to reduce these complications.
Practitioners of IVRA now have 4 reasonable adjuvant agents from which to choose. Individual patient characteristics should guide the choice of adjuvant drug. Patients with poor peripheral vasculature may particularly benefit from the vasodilatory effect of nitroglycerine added to the local anesthetic. Patients with severe ongoing pain or opioid tolerance would be expected to have additional benefit from adjuvant clonidine or ketamine, as these drugs are known to be efficacious in these conditions [17]. Ketamine is a noncompetitive antagonist of NMDA which can inhibit the induction of central sensitization owing to peripheral nociceptive stimulation and eliminate hypersensitivity [18]. Durrani et al found that the use of 0.3% ketamine for regional anesthesia of upper extremities was adequate for complete sympathetic, sensory and motor block. Roy and Deshpande reported similar findings [19].
Pain, based on VAS scores at 15, 30 and 60 minutes after surgery, was similar in both groups; scores were lower in the intervention group, but the difference was not statistically significant.
In Rahimi's research, where ketamine 0.1 mg/kg was added to 0.5% lidocaine, tourniquet pain and postoperative pain were significantly reduced in the lidocaine + ketamine group [20].
In the study by Hall and colleagues (2014), the effect of adding ketamine to lidocaine in Bier block was evaluated, and it was shown that this method reduced the need for analgesics during and after surgery without increasing side effects [19].
In the study by Alok and colleagues (2012), ketamine or dexmedetomidine added to lidocaine in IVRA improved the quality of analgesia without side effects. In that study, ketamine shortened the onset of block, delayed tourniquet pain, and reduced the need for analgesics after surgery [21].
The most important side effects of ketamine, which limit its administration, are adverse reactions during recovery such as auditory, tactile or visual hallucinations, altered color and emotional perception, and a sense of immersion. Benzodiazepines are the most effective drugs to reduce these effects [22][23]. In this study, we used midazolam as premedication in all patients to reduce the side effects of ketamine, and there was no significant difference in complications between the two groups.
Conclusion
According to the results of our study, we considered that adding ketamine to a lower dose of lidocaine for IVRA is a safe method, applicable to surgeries of the upper extremities below the elbow, leading to acceptable pain relief without causing significant adverse effects.
Finally, we concluded that the use of adjuvants acting synergistically with local anesthetics has improved the safety and efficacy of IVRA. | 2018-12-29T14:12:45.725Z | 2016-06-08T00:00:00.000 | {
"year": 2016,
"sha1": "30ec93b27a97befd2b8dda8107f1c77252da2af6",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ja.20160401.11.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ff403be6937bbf0e008c8a06fe9efbad042a5201",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Actual practice of healthcare providers towards prevention and control of Multidrug-resistant tuberculosis (MDR-TB) at Borumeda Hospital, Ethiopia
Tuberculosis (TB) is the world's leading curable cause of death from infectious disease, with a disproportionate burden of the disease falling on low- and middle-income countries. In 2013, there were an estimated 9 million incident cases of TB globally and 1.5 million people died from the disease. Most cases occurred in Asia (56%) and Africa (25%); in Africa the incidence rates are highest, driven by high rates of HIV and malnutrition. Ethiopia is one of the high-burden countries, reflected both in its TB incidence and its estimated rates of multidrug-resistant tuberculosis (MDR-TB). With the occurrence of MDR-TB, little is known about the views of healthcare workers on this disease. The objective of this study was to assess the knowledge, attitudes and practices of healthcare professionals towards prevention and control of MDR-TB at Boru Meda General Hospital, South Wollo Zone, north central Ethiopia. A cross-sectional study was conducted by means of a semi-structured, self-administered questionnaire that was sent to all healthcare workers. The questionnaires were collected at the study site from March 22 to April 23, 2014. Overall, 80.85% of the respondents had a good level of knowledge about MDR-TB, and the overwhelming majority of them (76.60%) held a positive attitude towards patients with MDR-TB. In total, 31.91, 74.11 and 66.35% of the respondents reported that they had their own copy of the MDR-TB management guidelines, that they used protective masks, and that they were individually involved in educating patients about MDR-TB, respectively. More than half of the respondents had a good level of knowledge about MDR-TB, but some of them held a negative attitude towards patients suffering from MDR-TB. Attitude did not influence practices, but having a good level of knowledge was positively associated with safer practices such as using protective masks, educating patients on MDR-TB, and referring to the MDR-TB guidelines manual.
INTRODUCTION
Tuberculosis (TB) remains one of the world's deadliest communicable diseases. It causes illness among millions of people each year and is ranked as the second leading cause of death from an infectious disease worldwide (Biadglegne et al., 2014). Multidrug-resistant tuberculosis (MDR-TB) is a form of drug-resistant TB in which Mycobacterium tuberculosis can no longer be killed by the two best antibiotics most commonly used to cure TB, isoniazid and rifampicin. MDR-TB has become an increasing threat to the global control of TB, as it complicates the management and control of the disease (Girma et al., 2015; Wondemagegn et al., 2015). Globally in 2013, an estimated 9.0 million people developed TB and 1.5 million died from the disease. Drug-resistant TB poses a major threat to TB control worldwide, and the proportion of new cases with MDR-TB was 3.5% in 2013. However, much higher levels of resistance and poor treatment outcomes are of major concern in some parts of the world (WHO, 2014). While most TB cases are in Asia, the incidence rates are highest in Africa, driven by high rates of HIV and malnutrition (Jain and Mondal, 2008).
Ethiopia is among the countries with the highest TB burden in the world (WHO, 2014). The annual TB incidence and prevalence in Ethiopia are estimated at 247 and 470 cases per 100,000, respectively (Biadglegne et al., 2014). Drug-resistant TB has become a common problem and challenge in Ethiopia; it is estimated at 1.6 and 12% among new and previously treated TB cases, respectively (WHO, 2014). The laboratory capacity in Ethiopia to diagnose MDR-TB is very limited. As a result, national estimates were based on incomplete data that suffer from poor representativeness, since the reporting system is poorly developed, diagnostic criteria are usually non-standardized and many MDR cases go undetected (Abebe et al., 2012). Reports from different parts of Ethiopia suggest that the rate of drug-resistant TB is highly variable across the country (Mitike et al., 1997; Asmamaw et al., 2008; Meskel et al., 2008; Agonafir et al., 2010). According to the WHO, factors associated with the emergence of MDR-TB and its effects on the epidemiology of TB include inadequate treatment, irregular drug supply, inappropriate regimens and poor patient compliance. Primary resistance to anti-TB drugs occurs when a patient is infected with wild-type M. tuberculosis that is resistant to anti-TB drugs. Acquired resistance to anti-TB drugs occurs when a patient is infected with susceptible forms of M. tuberculosis that become resistant during treatment. Much higher rates of primary resistance have been observed in HIV-infected patients (WHO, 2014; Urassa et al., 2008).
Prevention of tuberculosis infection among healthcare workers (HCWs) lost attention after the introduction of chemotherapy. Important contributing factors to nosocomial tuberculosis transmission include delayed diagnosis and ineffective treatment of patients with infectious tuberculosis, poor ventilation and air recirculation, inadequate infection-control and isolation practices, and unrecognized multiple-drug resistance (Frank et al., 2007). Studies from varied settings indicate that the level of knowledge about TB is influenced by many factors, including the area of work, whether public or private sector (Al-Maniri et al., 2008; Vandan et al., 2009), identification of patients at high risk of TB, assessment of treatment outcome and the consequences of treatment failure (Kiefer et al., 2009).
Studies have reported that inadequate knowledge and understanding by clinicians of effective TB diagnosis and treatment actually led to an increase in MDR-TB (Loveday et al., 2008; Vandan et al., 2009). The practices implemented by HCWs to prevent cross-infection, as well as prescribing practices, vary from setting to setting. Studies conducted in the USA and Britain showed that recent outbreaks of MDR-TB were due to bad clinical practices and therefore advocated for good clinical practices to minimize the impact of MDR-TB in the HIV era (Havward et al., 1995; Richardson, 2000). Moreover, studies show a gross lack of good TB management practices (Ahmed et al., 2009) and poor access to TB/MDR-TB information, including the procedures that protect HCWs from TB infection (Ronveaux et al., 1997).
Because MDR-TB is highly infectious and contagious, it poses a serious risk to people who come into contact with the patients suffering from it, as well as to the HCWs who treat them. It may be assumed that, in general, healthcare workers know about MDR-TB and its implications, but there are very few studies that have looked beyond patient factors into the holistic organization and processes of MDR-TB service delivery. MDR-TB can be transmitted from patients to HCWs and vice versa, so it is important to establish their opinions on what they are doing to control transmission. Healthcare workers are very important stakeholders in healthcare delivery; their opinion should be sought on important health issues affecting them, and decision-makers should consider the expressed opinions and the results of this assessment in order to design and implement relevant interventions. Hence, the aim of the study was to investigate the knowledge, attitude and practices of healthcare professionals towards the prevention and control of MDR-TB at Boru Meda General Hospital in north central Ethiopia.
Study area and period
A cross-sectional study was conducted from March to April 2014 at Boru Meda General Hospital.
Data collection process
All the healthcare workers who worked at Boru Meda General Hospital during the study period were included in the study. Data were collected using a semi-structured questionnaire designed to capture demographic variables as well as the knowledge, attitude and practice of healthcare workers towards the control of MDR-TB. All collected data were then coded, edited, entered into Statistical Package for the Social Sciences (SPSS) version 17.0 software and analyzed. Descriptive statistics and chi-square tests were used to meet the stated objective.
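As a rough illustration of the chi-square comparisons described above, a 2x2 test of practice by knowledge level can be computed by hand; the counts below are hypothetical, not the study's data:

```python
# Hypothetical 2x2 contingency table: rows = knowledge level (good,
# insufficient), columns = mask use (yes, no). Counts are illustrative only.
observed = [[20, 5],
            [10, 12]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected,
# where expected counts come from the marginal totals.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

# For a 2x2 table, df = 1; the 5% critical value is 3.841.
significant = chi2 > 3.841
```

In practice this would be done in SPSS (as in the study) or with `scipy.stats.chi2_contingency`, which also returns the exact P-value.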
Operational definition
-Good knowledge: A score of 7 or above out of the 10 questions asked to assess knowledge.
-Insufficient knowledge: A score below 7 out of the 10 questions asked to assess knowledge.
-Positive attitude: A favorable perception of MDR-TB patients by HCPs, scored 3 or above out of the 6 questions asked to assess their perceptions.
-Negative attitude: An unfavorable perception of MDR-TB patients by HCPs, scored below 3 out of the 6 questions asked.
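The cut-offs above amount to simple threshold classifiers; a minimal sketch (the function names are my own, not from the study):

```python
def classify_knowledge(score: int) -> str:
    """Knowledge level from the 10-item score: 7 or above is 'good'."""
    return "good" if score >= 7 else "insufficient"


def classify_attitude(score: int) -> str:
    """Attitude from the 6-item score: 3 or above is 'positive'."""
    return "positive" if score >= 3 else "negative"

# Example: the study's mean knowledge score of 7.48 falls in the 'good' band.
```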
Socio-demographic characteristics
A total of 47 HCWs were included in the study, of which 21 (44.7%) were nurses, followed by pharmacists and medical laboratory technologists (MLT), who accounted for 9 (19.1%) and 5 (10.6%), respectively. The majority of the participants (51.1%) were single, while 46.8% were married; few were widowed and none were divorced (Table 1).
Knowledge
Respondents aged less than 32 years more often had a good level of knowledge about MDR-TB than their older counterparts, though the difference was not statistically significant. In contrast, females and those with over 7 years of experience more often had an insufficient level of knowledge than their counterparts (P<0.05).
Similarly, all (100%) of the medical doctors, MLTs, midwives, optometrists, radiologists and health officers had significantly good knowledge about MDR-TB, compared with less than half of the respondents among nurses and pharmacists. In all, 80.85% of the respondents had good knowledge about MDR-TB. The mean knowledge score of the participants was 7.48±1.43 out of 10 (Table 2).
Attitude toward MDR-TB patients
The majority of the respondents had a positive attitude towards MDR-TB-infected patients, with only 23.40% holding a negative attitude. Although not significant, there was a difference with regard to age category. Female respondents held a negative attitude more often than males (38.89 and 13.79%, respectively) (P=0.048). In contrast, based on professional category, more doctors, optometrists, EHPs, health officers and midwives (100%) held a positive attitude than nurses, pharmacists and MLTs, but the difference was not statistically significant (Table 3).
Practices relating to MDR-TB infection control
Generally, 72.34% of the respondents reported that they used protective masks when in contact with MDR-TB patients. This practice was influenced by profession and work experience, as well as by the level of knowledge and the attitude of respondents. Respondents with a negative attitude used masks significantly more than those with a positive attitude (P=0.0274). Similarly, respondents with good knowledge about MDR-TB wore their protective masks significantly more than those with insufficient knowledge (P=0.0395) (Table 4).
With regard to educating patients about MDR-TB, 66.31% of the respondents stated that they were individually involved in educating patients about MDR-TB. Few pharmacists (33.33%) and none of the optometrists were involved in educating patients about MDR-TB (Table 5). As for referring to the MDR-TB management guidelines manual, 32.51% of the respondents reported that they referred to it. This practice varied with some other characteristics of the respondents (Table 6).
DISCUSSION
Generally, more than half of the respondents had a good level of knowledge. Regarding the pharmacists, most of them did not make an effort to learn more about MDR-TB, since they are not in contact with these patients most of the time. These findings concur with reports by other investigators (Hashim et al., 2003; Kiefer et al., 2009). Another important finding is that there was a significant difference in the level of knowledge based on gender and number of years of experience. As stated earlier, although one would have expected many years of work experience to translate into a higher knowledge level, this was not the case in this study. It might be that the participants with longer years of experience did not see the need to update themselves about new developments on TB/MDR-TB, while their counterparts with fewer years of working experience were still eager to learn about the disease.
Findings from this study suggest that there was a largely positive attitude towards patients with MDR-TB, as patients were not blamed for having brought the disease upon themselves. This was in contrast to the findings of Yu et al. (2002) as well as Holtz et al. (2006). It seems that negative attitude was significantly influenced by the personal characteristics of respondents. Female respondents held a negative attitude more often than males (38.89 vs. 13.79%, P=0.048). Moreover, respondents with more years of work experience held a slightly more negative attitude (P>0.05). The professional category of respondents had some influence on their attitude, since more doctors, optometrists, environmentalists, health officers and midwives (100%) held a positive attitude than nurses, pharmacists, radiologists and medical laboratorists (P=0.475). The guidelines in any country are supposed to guide the users in discharging their duties adequately. In this study, the majority (96%) of the participants agreed that having the MDR-TB guidelines assists them in managing MDR-TB patients. This finding is consistent with reports by other investigators (Havward et al., 1995; Richardson, 2000; Hoa and Thorson, 2005; Gai et al., 2008; Ahmed et al., 2009). However, 61.5% of respondents reported having their own copy of the guidelines. This situation is alarming because guidelines are documents that every healthcare worker should possess in order to ensure quality services, and it needs to be remedied by making the guidelines available to all healthcare workers in Ethiopia. With regard to the practice of using protective masks, 74.11% of respondents reported that they used protective masks (N95) when in contact with MDR-TB patients. This level of practice is acceptable, but it would have been better if all healthcare workers used protective masks when dealing with MDR-TB patients. This is particularly necessary for pharmacists, medical laboratorists, radiologists and optometrists, who traditionally are not
provided with protective masks because they are not in prolonged contact with MDR-TB patients. Respondents with seven or fewer years of experience (74.29%) wore protective masks when in contact with MDR-TB patients more often than their counterparts did. The findings from this study agree with Parmeggiani and co-workers, who reported that HCWs with more than 7 years of experience showed low compliance with standard precautions against hospital-acquired infections, including MDR-TB (Parmeggiani et al., 2010), since experience can breed carelessness.
With regard to educating patients about MDR-TB, 66.4% of respondents stated that they were individually involved in educating patients about MDR-TB. This was partially similar to the report by Kiefer et al. (2009). Health officers, environmentalists and radiologists were the most involved in educating patients, as were more than 50% of medical laboratorists and nurses; those with insufficient knowledge were less likely to be involved in educating patients about MDR-TB, since at least 35% of them reported not being involved. Only 32.51% of respondents reported that they referred to the MDR-TB management guidelines, in contrast to the report of Richardson (2000). With regard to assessed knowledge, respondents with a good level of knowledge reported referring to the manual only slightly more than those with an insufficient level (28.21 vs. 25%, P=0.854). In contrast, those with a negative attitude referred to the guidelines manual more than those with a positive attitude, though the difference was not statistically significant (P>0.05). This is worrying because the majority of these respondents had stated that guidelines were needed for them to perform adequately. This finding is consistent with the report by Cabana and colleagues that a good number of practitioners fail to comply with clinical practice guidelines; however, it could be that they could not refer to the guidelines because they did not have their own copy (Cabana et al., 1999).
The findings from this study show two scenarios. The first is that having a good level of knowledge about MDR-TB was associated with good practices such as the use of protective masks (P=0.0395) and the MDR-TB guidelines, and with involvement in educating patients about MDR-TB, though not all associations were statistically significant. The second scenario is that the attitude of respondents towards patients suffering from MDR-TB did not influence their practices. On one hand, respondents with a negative attitude used protective masks (P=0.0274) and referred to the MDR-TB guidelines a little more than those with a positive attitude, although the latter difference was not statistically significant. On the other hand, respondents with a positive attitude were slightly more involved in educating patients about MDR-TB than those with a negative attitude, but the difference was also not statistically significant. These findings are similar to reports that hold the view that knowledge shapes attitude and attitude influences behaviour (Moloi, 2003).
Limitations of the study
Despite a high response rate of over 90%, the sample size is still too small to ascertain whether some of the differences reported as not statistically significant would have been significant with a bigger sample. Moreover, the questionnaires were given to HCPs to fill in at their homes, so they might have
*MLT: Medical laboratory technologist; **EHP: Environmental health professional.
Boru Meda General Hospital is located approximately 411 km north of Addis Ababa, the capital city of Ethiopia. Dessie town has 6 health posts, 8 health centers, one specialized referral hospital, one primary hospital, 3 general hospitals and 56 drug retail outlets. As a governmental health facility, Boru Meda General Hospital provides ophthalmic care and MDR-TB treatment and care.
Table 2. Knowledge level of respondents about MDR-TB at Boru Meda General Hospital (n = 47).
Table 4. Use of protective masks by respondents at Boru Meda General Hospital (n=47).
Assessment of the massive hemorrhage in placenta accreta spectrum with magnetic resonance imaging
The aim of the present study was to evaluate whether MRI features are able to predict massive hemorrhage of patients with placenta accreta spectrum (PAS). A total of 40 patients with suspected PAS after ultrasound examination were subjected to MRI. Of these, 29 patients were confirmed as having PAS. MRI data were analyzed independently by two radiologists in a blinded manner. Inter-observer agreement was determined. The 29 confirmed patients were divided into two groups (moderate and massive hemorrhage) according to the estimated blood loss (EBL) and blood transfusion, and the MRI features were compared between the two groups. The EBL, as well as blood transfusion, between the patients with and without each MRI feature were compared. The inter-observer agreement between the two radiologists for the 11 MRI features had statistical significance (P<0.05). Intra-placental thick dark bands and markedly heterogeneous placenta were the most important MRI features in predicting massive hemorrhage and blood transfusion (P<0.05). The difference in EBL between the patients with and without focal defect of the uteroplacental interface (UPI) was significant (P<0.05). The differences in blood transfusion between the patients with and without myometrial thinning, disruption of the inner layer of the UPI, increased placental vascularity and increased vascularity at the UPI were significant (P<0.05). These results indicate that MRI features may predict massive hemorrhage of patients with PAS, which may be helpful for pre-operative preparation of PAS patients.
Introduction
Placenta accreta spectrum (PAS), which represents a clinical challenge in obstetrics, is defined as myometrial involvement by the fetal trophoblast (1,2). It may lead to uncontrollable bleeding and threaten the lives of mother and baby. Its prevalence has markedly increased in China over the past 50 years, primarily due to the increasing number of pregnant females undergoing primary and repeat cesarean sections (3). Obstetricians have implemented various methods to improve the massive hemorrhage caused by placenta implantation, including ascending uterine artery ligation (AUAL), uterine artery embolization (UAE) and prophylactic abdominal aorta balloon occlusion (ABO), but the therapeutic effects are varied (4). Interventional radiology is commonly applied and placement of prophylactic balloon catheters in the common or internal iliac arteries are commonly used to help control massive hemorrhage (5,6). Development of an effective way to prenatally predict massive hemorrhage may allow for appropriate pre-operative preparation, including the arrangement for treatment by a skilled surgical team.
There have been several reports on the association between clinical information or therapeutic schedule and the risk of massive hemorrhage (7,8). Wright et al (8) reported an association among PAS (placenta accreta, increta or percreta), gestational age of <34 weeks at delivery and estimated blood loss (EBL)≥5,000 ml. It was noted that patients with placenta previa, who delivered at an earlier gestational age, were more likely to require ≥10 units of blood. Shamshirsaz et al (7) reported that a standardized approach for patients with morbidly adherent placentation provided by a specific multidisciplinary team was associated with improved maternal outcomes compared with a more traditional non-multidisciplinary approach. There are several reports on sonographic evaluation for predicting the risk of massive bleeding. For instance, Hasegawa et al (9) reported that advanced maternal age, previous cesarean section and presence of sponge-like tissue in the cervix were risk factors of massive bleeding during cesarean section in cases of placenta previa, regardless of whether placental adherence was present. Baba et al (10) reported that anterior placentation was a risk factor of massive hemorrhage during cesarean section for placenta previa.
Chen et al (11) reported that low signal intensity bands on T2-weighted imaging may be a predictor of poor maternal outcome in patients with invasive placenta previa. However, studies on the association between MRI features and hemorrhage of patients with PAS are currently limited. The present study investigated whether MRI features are able to predict massive hemorrhage of patients with PAS. The results may be helpful for pre-operative preparation.
Materials and methods
Patients. The present study was a retrospective study. A total of 40 patients who underwent ultrasonography (US) and placenta MRI examination from March 2015 to May 2018 were enrolled. The inclusion criteria were as follows: (a) Patients with suspected PAS or inconclusive results on US, (b) patients at high risk of PAS with one or more of the following: Maternal age >35 years, grand multiparity, previous uterine interventional procedures (e.g. cesarean section, dilatation and curettage and myomectomy) and placenta previa (12). The exclusion criteria were as follows: (a) Medical records not available, (b) only MRI data of post-partum placenta implantation available, (c) early pregnancy, (d) patients who had induced labor rather than cesarean section due to stillbirth in utero. The patients were first diagnosed with suspected or inconclusive PAS using US, based on the detection of any of the following: Loss/irregularity of the echolucent area between uterus and placenta, thinning or interruption of the hyperechoic interface between uterine serosa and bladder wall, the presence of turbulent placental lacunae with high-velocity flow (>15 cm/sec), hypervascularity of the uterine serosa-bladder wall interface and irregular intraplacental vascularization (13). All patients received cesarean section and only one patient underwent subtotal hysterectomy due to uncontrollable bleeding. The final diagnosis of PAS was made based on intra-operative observation for 39 patients and by histopathology for one patient who was treated by subtotal hysterectomy. Finally, 29 patients (72.5%, 29/40) were confirmed as having PAS (average age, 33.5±4.2 years), and 11 patients were confirmed as non-PAS (average age, 32.8±3.2 years).
The retrospective study was performed in accordance with the standards set out in the Code of Ethics of the World Medical Association (Declaration of Helsinki) and the research procedures were approved by the ethics review board of Shandong Provincial Hospital (Jinan, China). Informed consent was obtained from each patient.
MRI data analysis. MRI data were analyzed independently by two radiologists (JZ with five years and HX with ten years of experience in evaluating the placenta using MRI) blinded to the patients' history, US examination results, presence of PAS and intra-operative findings. MRI data were interpreted on a PACS view station (Centricity RIS CE V2.0; GE Healthcare). A total of 11 MRI features were evaluated, including placenta previa, focal defect of the uteroplacental interface (UPI), myometrial thinning, disruption of the inner layer of the UPI, intraplacental thick dark bands, focal defect of the interval between the bladder and uterus, increased placental vascularity, markedly heterogeneous placenta, uterine bulge, increased uterine vascularity and increased vascularity in the UPI (14). A complete description of the MRI features is provided in Supplemental Table SI. In case of any disagreement, a third radiologist (QL) with 15 years of experience in evaluating the placenta using MRI was consulted.
Clinical diagnosis of PAS. The reference standard for determining the actual status of the placenta was established by one obstetrician (CZ with 20 years of experience in obstetrics) according to intra-operative findings recorded in the electronic medical records of most patients (n=39). The diagnostic criteria were as follows (15): Placenta accreta: i) No placental tissue invading through the surface of the uterus. ii) Incomplete separation with uterotonics and gentle cord traction and manual removal of the placenta was required for the remaining placenta. iii) Bleeding cannot be controlled autonomously. Placenta increta: i) No placental tissue invading through the surface of the uterus. ii) Placental tissue implanted in myometrium of uterus requiring to be removed by forceps curettage. Placenta percreta: Macroscopically, the whole layer of the uterus (including the serosal surface), even the surrounding organs, was invaded by placental tissues.
Hemorrhage analysis. The EBL was estimated by an experienced obstetrician who participated in the operation, based on the operative report, which included the fluid volume in the negative pressure aspirator, dressing weight and other operative findings. The blood volume in the mixture of blood and amniotic fluid was calculated from the total fluid volume, the hematocrit of the mixture and the prenatal hematocrit. The EBL was estimated and documented during surgery. Packed red blood cell (PRBC) transfusion and plasma transfusion were also documented. Moderate hemorrhage was defined as EBL <2,000 ml and PRBC transfusion <10 units. Massive hemorrhage was defined as EBL ≥2,000 ml or PRBC transfusion ≥10 units (7,8).

Table I. The data values are expressed as mean ± standard deviation (range) or n. PAS, placenta accreta spectrum; EBL, estimated blood loss; PRBCs, packed red blood cells; Interval time, interval time between MRI and cesarean section; RBC, red blood cell; PT, prothrombin time; INR, prothrombin time international normalized ratio; APTT, activated partial thromboplastin time; ABO, prophylactic abdominal aorta balloon occlusion; UAE, uterine artery embolization; AUAL, ascending uterine artery ligation.

The patients were divided into two groups
(moderate hemorrhage and massive hemorrhage) according to EBL and PRBC transfusion.
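The EBL arithmetic and the grouping rule above can be sketched as follows; the proportional hematocrit correction is an assumption about the exact formula, which the paper does not spell out, and the function names are my own:

```python
def blood_in_mixture(total_fluid_ml: float,
                     hct_mixture: float,
                     hct_prenatal: float) -> float:
    """Estimate the blood volume in a blood/amniotic-fluid mixture.

    Assumes (not stated explicitly in the paper) that the red cells in the
    aspirated mixture all came from blood at the prenatal hematocrit, so
    blood = total volume * (Hct_mixture / Hct_prenatal).
    """
    return total_fluid_ml * hct_mixture / hct_prenatal


def hemorrhage_group(ebl_ml: float, prbc_units: float) -> str:
    """Massive hemorrhage: EBL >= 2,000 ml or PRBC transfusion >= 10 units."""
    return "massive" if ebl_ml >= 2000 or prbc_units >= 10 else "moderate"

# Example: 3,000 ml of aspirated mixture at Hct 0.12 with a prenatal Hct
# of 0.36 corresponds to 1,000 ml of blood.
```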
Statistical analysis. Statistical analysis was performed with SPSS for Windows, version 17.0 (SPSS, Inc.). Placenta previa is a five-valued variable (without placenta previa, low-lying placenta, marginal placenta previa, partial placenta previa or complete placenta previa). The other 10 MRI features were binary variables (with or without the MRI feature). Inter-observer agreement regarding categorical data between the first two radiologists was assessed using a Kappa test. The MRI features between the moderate and massive hemorrhage patients were compared using the χ2 test. The EBL, PRBC transfusion and plasma transfusion between patients with and without each MRI feature were compared using the independent-samples t-test. The EBL, PRBC transfusion and plasma transfusion among the different subtypes of placenta previa were compared using one-way analysis of variance. For all statistical analyses, P<0.05 was considered to indicate statistical significance.

Patients. As presented in Table I, the clinical information of the patients was compared. The EBL and plasma transfusion for the PAS patients were significantly higher than those for the non-PAS patients (P<0.01). The differences in the other demographic characteristics between the non-PAS and PAS patients were not significant (P>0.05). There were 19 cases of moderate hemorrhage (including 6 cases of placenta accreta, 11 cases of placenta increta and 2 cases of placenta percreta) and 10 cases of massive hemorrhage (including 5 cases of placenta increta and 5 cases of placenta percreta). The EBL, PRBC transfusion and plasma transfusion for the patients in the massive hemorrhage group were significantly higher than those for the patients in the moderate hemorrhage group (P<0.001). The mean APTT for the massive hemorrhage group was significantly longer than that for the moderate hemorrhage group (P<0.05), but the mean APTT of both groups was in the normal range. A total of three hemostatic methods were applied for the patients (Table I). One of the two PAS patients treated with UAE underwent pre-operative UAE, while the other underwent post-operative UAE. One of the 27 PAS patients treated with AUAL underwent unilateral AUAL, while the other 26 patients underwent bilateral AUAL. The difference in hemostatic methods between the moderate hemorrhage group and the massive hemorrhage group was significant (P<0.05). The proportion of patients who underwent ABO in the massive hemorrhage group was larger than that in the moderate hemorrhage group; however, blood loss and transfusion in the massive hemorrhage group were still higher than those in the moderate hemorrhage group. Other hemostatic methods, including the use of oxytocin, tourniquet, local suture ligation, uterine packing hemostasis and placement of hemostatic gauze, were applied according to the intra-operative conditions. Each patient was treated using multiple hemostasis methods. All patients underwent planned cesarean section; no emergency surgery was performed in the present study. One PAS patient underwent subtotal hysterectomy. A total of 4 PAS patients delivered with broken placentae; the other 35 patients delivered with almost complete placentae. No maternal mortality occurred. There was no case of bladder invasion. One stillborn fetus was delivered.

The data values are expressed as mean ± standard deviation (range) or n. EBL, estimated blood loss; PRBCs, packed red blood cells.
Inter-observer agreement. Interobserver agreement was excellent for one of the eleven MRI features (markedly heterogeneous placenta, κ>0.8), good for three MRI features (intraplacental thick dark bands, increased placental vascularity, increased uterine vascularity, κ>0.6) and moderate for two MRI features (myometrial thinning, focal defect of the interval between the bladder and uterus, κ>0.4). For the remaining five MRI features (placenta previa, focal defect of the UPI, disruption of the inner layer of the UPI, uterine bulge and increased vascularity in UPI), the κ-values for the 40 patients with suspected PAS (κ-all) and for the 29 patients with confirmed PAS (κ-PAS) were not in the same interval, but the inter-observer agreement was statistically significant. The detailed κ-values of each MRI feature are provided in Table II. Inter-observer agreement was fair (κ-all=0.375) only for the MRI feature of uterine bulge.
Association between involvement depth and hemorrhage.
There were 6 cases of placenta accreta (20.7%), 16 cases of placenta increta (55.2%) and 7 cases of placenta percreta (24.1%). EBL, PRBC transfusion and plasma transfusion of patients with placenta accreta, increta and percreta are compared in Table III. The EBL exhibited an increasing trend along with the implant depth, but without significant difference (P>0.05). The differences in PRBC transfusion, as well as plasma transfusion, among the three groups were significant (P<0.05). The differences in PRBC transfusion and plasma transfusion between the placenta accreta and placenta percreta groups, as well as that between the placenta increta and placenta percreta groups, were significant (P<0.05). However, there was no significant difference between the placenta accreta and the placenta increta groups (P>0.05).
Association between MRI features and hemorrhage Differentiation between moderate and massive hemorrhage group.
To differentiate between the moderate and massive hemorrhage groups, the MRI features of the two groups were compared. Among the 29 PAS patients, the differences between the two groups (moderate vs. massive hemorrhage group) were significant in two MRI features (intraplacental thick dark bands, P=0.005; markedly heterogeneous placenta, P=0.020). The two MRI features are provided in Figs. 1 and 2, respectively. The differences in the other 9 MRI features between the two groups were not significant (i.e. placenta previa, P=0.081; focal defect of the UPI, P=0.173; myometrial thinning, P=0.059; disruption of the inner layer of the UPI, P=0.054; focal defect of the interval between the bladder and uterus, P=0.64; increased placental vascularity, P=0.198; Tables IV and V). These results indicated that the MRI features of intraplacental thick dark bands and markedly heterogeneous placenta may be helpful in predicting massive hemorrhage.
MRI features as grouping variables. The differences in EBL, PRBC transfusion and plasma transfusion were compared between the patients with and without each MRI feature (Table VI). There were no significant differences in EBL and blood transfusion among the different subtypes of placenta previa (Table IV). The differences in EBL between the patients with and without the three MRI features were significant (i.e., focal defect of the UPI, intraplacental thick dark bands, markedly heterogeneous placenta; P<0.05). The three MRI features are presented in Fig. 3. The differences in PRBC transfusion and plasma transfusion between the patients with and without the six MRI features were significant (P<0.05). The six MRI features included myometrial thinning, disruption of the inner layer of the UPI, intraplacental thick dark bands, increased placental vascularity, markedly heterogeneous placenta and increased vascularity in UPI (Table VI). Representative images of the six MRI features are displayed in Figs. 1-3.
Discussion
In the present study, not only all of the PAS patients but also all the non-PAS patients had placenta previa and/or a history of at least one prior cesarean section and/or one abortion. There was a large proportion of PAS (72.5%, 29/40) in the present study. This may be due to inclusion of patients with suspicious PAS or inconclusive findings on US who are at high risk of PAS.
There is an emphasis in China on uterine-sparing management. Therefore, the diagnosis of PAS for most patients was confirmed according to intra-operative findings. There is a recent International Federation of Gynecology and Obstetrics (FIGO) system of clinical classification based on clinical findings (16). The diagnosis of placenta accreta and percreta in the present study was basically consistent with that of the FIGO system. In the present study, the diagnosis of placenta increta was based on the placenta separation method of forceps curettage. Although the diagnosis of placenta increta was described differently, the meaning was consistent with that of the FIGO system.
The EBL is notoriously subjective and subject to error, particularly when the volume is very low or very high (17). EBL was estimated by an experienced obstetrician in the present study, and different obstetricians may have different estimates. Therefore, the association between PRBC transfusion and plasma transfusion, as well as MRI features, was evaluated in the present study. There were no significant differences in red blood cells, hemoglobin, hematocrit, prothrombin time and INR between the massive hemorrhage group and the moderate hemorrhage group. There was a significant difference in APTT between the two groups, but the APTT of each group was in the normal range. These laboratory values were therefore unlikely to have influenced the PRBC and plasma transfusion. The κ-all and κ-PAS values exhibited certain differences, which may be due to the small number of patients, resulting in the low robustness of the κ-values. The inter-observer agreement for most MRI features was equal to or superior to moderate. The inter-observer agreement for only uterine bulge (κ-all) was fair. The inter-observer reliability may be influenced in part by the differences in the experience of the radiologists interpreting the images. The eleven MRI features in the present study were proved to have a role in differentiating PAS from normal placentae or determining implant depth (14,18-22).
(Table footnotes: data values are expressed as mean ± standard deviation (range) or n. PAS, placenta accreta spectrum; UPI, uteroplacental interface; IBU, interval between the bladder and uterus.)
The EBL exhibited a trend to increase along with the placental implant depth in the present study. However, there was no significant difference among the three groups (P=0.070). Of note, the EBL does not always increase with the increase of the implant depth; however, studies on the association between MRI features and hemorrhage of PAS patients prior to delivery are limited. Chen et al (11) defined blood loss of >1,000 ml during surgery as significant hemorrhage. Poor maternal outcome was defined as parturient with significant hemorrhage or emergency hysterectomy. They reported that low signal intensity bands on T2-weighted imaging may be a predictor of poor maternal outcome after UAE-assisted cesarean section in patients with invasive placenta previa. The intraplacental thick dark bands and markedly heterogeneous placenta were reported to be important MRI features not only in predicting massive hemorrhage but also the differentiating factors for EBL, PRBC transfusion and plasma transfusion. Intra-placental thick dark bands were the result of fibrin deposition (14). In certain previous studies, intra-placental thick dark bands can differentiate between PAS and non-PAS (14,18,22). In the present study, intra-placental thick dark bands were observed more frequently in the massive hemorrhage group than in the moderate hemorrhage group. Fibrin deposition may result in narrowed intervillous space (18,23). The maternal vessels (spiral arteries and draining veins) may be dilated or increased to enhance blood flow to the placenta. Increased and/or dilated vessels may result in more hemorrhage when the placenta is manually removed.
A markedly heterogeneous placenta is associated with invasive placentation (18,19). Lax et al (19) and Ueno et al (22) indicated that a markedly heterogeneous placenta was more frequently observed in cases of PAS than in normal placentae. Bour et al (14) reported that a markedly heterogeneous placenta was not significantly associated with the diagnosis of PAS, but more frequently observed in patients with placenta percreta than in those with placenta accreta. In the present study, it was observed that a markedly heterogeneous placenta was more frequent in the massive hemorrhage group than in the moderate hemorrhage group. It was indicated that the characterization of markedly heterogeneous placenta partly depended on the presence of intraplacental thick dark bands and increased placental vascularity.
Bour et al (14) reported that thinning or focal defect of the UPI was significantly associated with the diagnosis of invasive placenta and was the single independent predictor of invasive placenta. The present study suggested that the EBL of patients with a focal defect of the UPI was greater than that of patients without this MRI feature. However, the difference in blood transfusion between the patients with and without this MRI feature was not significant.
Most PAS patients have the MRI feature of myometrial thinning, but this feature is not unique, as numerous maternal patients with a normal placenta also have such a feature (22). The disruption of the inner layer of the UPI had 81% sensitivity for the diagnosis of PAS (14). Increased placental vascularity was significantly associated with PAS (18,22,24). Increased vascularity in UPI was identified as a novel MRI feature in the present study. The maternal spiral arteries at the myometrium-placenta interface run parallel to the villous branches of the chorionic arteries and perpendicular to the decidua surface. The vessels at the UPI may be prone to rupture when the placenta is manually removed. Although the difference in EBL between the patients with and without the four MRI features was not significant, the difference in blood transfusion was significant, which may reflect blood loss to a certain extent.
The present study had several limitations. First, it was a retrospective study. The hemostasis methods were not arranged in advance. The proportion of patients who underwent ABO in the massive hemorrhage group was greater than that in the moderate hemorrhage group, but the ABO was more effective (25). Therefore, the hemostasis method of ABO should not influence the results. In addition, patients who underwent MRI had already been screened by US, and there was already suspicion for PAS, particularly suspicion for placenta increta and percreta. The diagnostic accuracy was therefore biased prior to interpretation. However, the radiologists were blinded to the EBL, PRBC transfusion and plasma transfusion. Therefore, any bias made during PAS diagnosis was unlikely to have affected the results. A further limitation is the inaccurate estimation of blood loss, particularly in those patients with massive hemorrhage (8,17). To minimize this bias, EBL and blood transfusion were analyzed and the hemostatic methods that may affect blood loss were recorded. In addition, the size of the PAS cohort was small, which may influence the results of the statistical analysis. Finally, the present study was a retrospective single-center study and selection bias may have been present. Further multiple-center studies with larger samples are warranted.
In conclusion, two MRI features, intraplacental thick dark bands and markedly heterogeneous placenta, are helpful in predicting massive hemorrhage in patients with PAS. Focal defect of the UPI, myometrial thinning, disruption of the inner layer of the UPI, increased placental vascularity and increased vascularity at the UPI may also contribute to predicting hemorrhage to a certain extent. Patients with these MRI features may have a higher risk of massive hemorrhage and pre-operative preparations should be arranged for them in advance.
Variational properties of value functions
Regularization plays a key role in a variety of optimization formulations of inverse problems. A recurring theme in regularization approaches is the selection of regularization parameters, and their effect on the solution and on the optimal value of the optimization problem. The sensitivity of the value function to the regularization parameter can be linked directly to the Lagrange multipliers. This paper characterizes the variational properties of the value functions for a broad class of convex formulations, which are not all covered by standard Lagrange multiplier theory. An inverse function theorem is given that links the value functions of different regularization formulations (not necessarily convex). These results have implications for the selection of regularization parameters, and the development of specialized algorithms. Numerical examples illustrate the theoretical results.
Introduction.
It is well known that there is a close connection between the sensitivity of the optimal value of a parametric optimization problem and its Lagrange multipliers. Consider the family of feasible convex optimization problems
minimize over (r, x):  ρ(r)  subject to  Ax + r = b,  φ(x) ≤ τ,
where b ∈ R^m, A ∈ R^{m×n}, and the functions φ : R^n → (−∞, ∞] and ρ : R^m → (−∞, ∞] are closed, proper and convex, and continuous relative to their domains. The value function
v(b, τ) := inf { ρ(r) | Ax + r = b, φ(x) ≤ τ }
gives the optimal objective value of problem P(b, τ) for fixed parameters b and τ. If P(b, τ) is a feasible ordinary convex program [34, Section 28], then under standard hypotheses the subdifferential of v is the set of pairs (u, µ), where u ∈ R^m and µ ∈ R are the Lagrange multipliers of P(b, τ) corresponding to the equality and inequality constraints, respectively. This connection is extensively explored in Rockafellar's 1993 survey paper [35]. If we allow φ to take on infinite values on the domain of the objective (which can occur, for example, if φ is an arbitrary gauge), then P(b, τ) is no longer an ordinary convex program, and so the standard Lagrange multiplier theory does not apply. Multiplier theories that do apply to more general contexts can be found in [8,16,21,45]. Remarkably, even in this general setting, it is possible to obtain explicit formulas for the subdifferential of the value function v useful in many applications. Consider, for instance, the problem
minimize over x:  ‖Ax − b‖₂  subject to  γ(x | U) ≤ 1,   (1.1)
where γ(x | U) := inf { λ ≥ 0 | x ∈ λU } is the gauge function for the closed nonempty convex set U ⊂ R^n, which contains 0. Let A = I and b = (0, −1)^T. Then the solution to (1.1) is just the 2-norm projection onto the set { x | γ(x | U) ≤ 1 } = U. For our first example, we consider the set defined in [34, Section 10]. The gauge for this set is an example of a closed, proper and convex function that is not locally bounded and therefore not continuous at a point in its effective domain.
It is straightforward to compute this gauge explicitly. The constraint region for (1.1) is the set U and the unique global solution is the point x = 0. However, since 0 = γ(0 | U) < 1, the classical Lagrange multiplier theory fails: the solution is on the boundary of the feasible region, and yet no classical Lagrange multiplier exists. The problem is that the constraint is active at the solution, but not active in the functional sense, i.e., γ(0 | U) < 1. In contrast, the extended multiplier theory of [45, Theorem 2.9.3] succeeds with the multiplier choice of 0.
For the second example, take U = B 2 ∩ K, where B 2 is the unit ball associated with the Euclidean norm on R 2 . Then γ (x | B 2 ∩ K) = x 2 + δ (x | K), and the constraint region for (1.1) is the set B 2 ∩ K. Set K = { (x 1 , x 2 ) | x 2 ≥ 0 }. Again, the origin is the unique global solution to this optimization problem, and no classical Lagrange multiplier for this problem exists.
In both of these examples, the multiplier theory in [45] can be applied to obtain a Lagrange multiplier theorem. In Theorem 5.2, we extend this theory and provide a characterization of these Lagrange multipliers that is useful in computation.
Formulations.
Appropriate definitions of the functions ρ and φ can be used to represent a range of practical problems. Choosing ρ to be the 2-norm and φ to be any norm yields the canonical regularized least-squares problem
minimize over (x, r):  ‖r‖₂  subject to  Ax + r = b,  ‖x‖ ≤ τ,   (1.2)
which optimizes the misfit between the data b and the forward model Ax, subject to keeping x appropriately bounded in some norm. The 2-norm constraint on x yields a Tikhonov regularization, popular in many inversion applications. A 1-norm constraint on x yields the Lasso problem [41], often used in sparse recovery and model-selection applications. Interestingly, when the optimal residual r̄ is nonzero, the value function for this family of problems is always differentiable in both b and τ, with gradient
∇v(b, τ) = ( r̄/‖r̄‖₂ , −‖Aᵀr̄‖_* /‖r̄‖₂ ),
where ‖·‖_* is the norm dual to ‖·‖. This gradient is derived by van den Berg and Friedlander [10, Theorem 2.2]. The analysis of the sensitivity in τ of the value function for the Lasso problem led to the development of the SPGL1 solver [9], currently used in a variety of sparse inverse problems, with particular success in large-scale sparse inverse problems [27]. A subsequent analysis [12] that allows φ(x) to be a gauge paved the way for other applications, such as group-sparsity promotion [11].
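The slope of the value function of (1.2) in τ can be checked numerically. The scalar instance below is our own toy (A = a scalar, φ the absolute value, so the dual norm is also the absolute value); it compares the predicted slope −‖Aᵀr̄‖_*/‖r̄‖₂ against a centered finite difference.

```python
import numpy as np

# Scalar instance of (1.2): minimize |b - a*x| subject to |x| <= tau.
def value(a, b, tau):
    x = float(np.clip(b / a, -tau, tau))   # clipped least-squares solution
    r = b - a * x                          # optimal residual
    return abs(r), r

a, b, tau = 2.0, 5.0, 1.0                  # constraint active: |b/a| = 2.5 > tau
v, r = value(a, b, tau)

# Predicted slope -||A^T r||_* / ||r||_2 reduces to -|a| for scalars.
predicted = -abs(a * r) / abs(r)

h = 1e-6                                   # centered finite difference in tau
fd = (value(a, b, tau + h)[0] - value(a, b, tau - h)[0]) / (2 * h)
print(predicted, fd)  # both ≈ -2.0
```

On the active range the value function is affine in τ, so the finite difference matches the predicted slope essentially exactly.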
An alternative to P(b, τ) is the class of penalized formulations
P_L(b, λ):  minimize over x:  ρ(b − Ax) + λφ(x)
(the subscript "L" in the label reminds us that it can be interpreted as a Lagrangian of the original problem). The nonnegative regularization parameter λ is used to control the tradeoff between the data misfit ρ and the regularization term φ. For example, taking ρ(r) = ‖r‖₂ and φ(x) = ‖x‖ yields a formulation analogous to (1.2). This penalized formulation is commonly used in applications of Bayesian parametric regression [30,31,37,42,44], inference problems on dynamic linear systems [1,15], feature selection, selective shrinkage and compressed sensing [19,20,25], robust formulations [2,23,24,29], support-vector regression [26,43], classification [22,33,39], and functional reconstruction [6,17,38]. From an algorithmic point of view, the unconstrained formulation P_L(b, λ) may be preferable. However, the constrained formulation P(b, τ) has the distinction that its value function v(b, τ) is jointly convex in its parameters; see section 1.3. In contrast, the optimal value of the penalized formulation P_L(b, λ) is not in general a convex function of its parameters. The following simple example illustrates this situation. The optimal values of ρ in the formulations P(b, τ) and P_L(b, λ), viewed as functions of τ and λ, respectively, are denoted ρ_τ and ρ_λ. These optimal values and their derivatives are shown in Figure 1.1, where it is clear that ρ_τ is convex (and in this case also smooth) in τ, but ρ_λ is not convex in λ.
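The convexity contrast can be reproduced with a self-contained 1-D instance (our own choice: A = 1, ρ(r) = r², φ(x) = |x|, b = 2, none of which come from the paper's Figure 1.1): the constrained value passes a midpoint-convexity probe in τ while the penalized value fails it in λ.

```python
import numpy as np

b = 2.0  # data; forward model A = 1

def v_constrained(tau):   # min (b - x)^2 subject to |x| <= tau
    x = float(np.clip(b, -tau, tau))
    return (b - x) ** 2

def v_penalized(lam):     # min (b - x)^2 + lam*|x| via soft thresholding
    x = np.sign(b) * max(abs(b) - lam / 2, 0.0)
    return (b - x) ** 2 + lam * abs(x)

def midpoint_gap(f, t1, t2):
    """f((t1+t2)/2) - (f(t1)+f(t2))/2; nonpositive for convex f."""
    return f((t1 + t2) / 2) - 0.5 * (f(t1) + f(t2))

print(midpoint_gap(v_constrained, 0.0, 2.0))  # <= 0: convex in tau
print(midpoint_gap(v_penalized, 0.0, 2.0))    # > 0: not convex in lambda
```

The penalized value here is λ|b| − λ²/4 on the active range, which is concave in λ; the constrained value is (max(|b| − τ, 0))², which is convex in τ.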
The admissibility of variational analysis and convexity of the value function may convince some practitioners to explore formulations of type P(b, τ ) rather than P L (b, λ). In fact, we give an example (in section 7) of how this variational information can be used for algorithm design in the context of large-scale inverse problems.
1.3. Approach. For many practical inverse problems, the formulation of primary interest is the residual-constrained formulation
P_R(b, σ):  minimize over x:  φ(x)  subject to  ρ(b − Ax) ≤ σ
(the subscript "R" reminds us that this formulation reverses the objective and constraint functions from that of P(b, τ)), in part because estimates of a tolerance level σ on fitting the error ρ(b − Ax) are more easily available than estimates of a bound on the penalty parameter on the regularization φ; cf. P_L(b, λ). However, the formulation P(b, τ) can sometimes be easier to solve. The underlying numerical theme is to develop methods for solving P_R(b, σ) that use a sequence of solutions to the possibly easier problem P(b, τ).
In section 2, we present an inverse function theorem for value functions that characterizes the relationship between P(b, τ ) and P R (b, σ), and applies more generally to nonconvex problems. Pairs of problems of this type are classical, though typically paired in a max-min fashion. For example, the isoperimetric inequality and Queen Dido's problem are of this type; the greatest area surrounded by a curve of given length is related to the problem of finding the curve of least arc length surrounding a given area (see [40] for a modern survey). The Markowitz mean-variance portfolio theory is also based on such a pairing; minimizing volatility subject to a lower bound on expected return is related to maximizing expected return subject to an upper bound on volatility [32].
The application motivating our investigation is establishing conditions under which it is possible to implement a root-finding approach for the nonlinear equation v(b, τ) = σ, where P_R(b, σ) can be solved via a sequence of approximate solutions of P(b, τ). This generalizes the approach used by van den Berg and Friedlander [10,12] for large-scale sparse optimization applications. The convex case is especially convenient, because both value functions are decreasing and convex. When the value function is differentiable, Newton's method is globally monotonic and locally quadratic. In section 5 we establish the variational properties (including conditions necessary for differentiability) of P(b, τ). In section 4 we derive dual representations of P(b, τ) and their optimality conditions. These are used in section 5 to characterize the variational properties of the value function v. The conjugate, horizon, and perspective functions arise naturally as part of the analysis, and we present a calculus (section 3) for these functions that allows explicit computation of the subdifferential of v for large classes of misfit functions ρ and regularization functions φ (see section 6).
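A minimal sketch of the root-finding idea, assuming a toy value function v(τ) = 5 − 2τ available in closed form (an invented instance, not the paper's): Newton's method on v(τ) − σ converges in one step because v is affine on the active range.

```python
def newton_root(v, vprime, sigma, tau0, tol=1e-10, maxit=50):
    """Find tau with v(tau) = sigma by Newton's method on the value function."""
    tau = tau0
    for _ in range(maxit):
        f = v(tau) - sigma
        if abs(f) <= tol:
            break
        tau -= f / vprime(tau)
    return tau

# Invented affine value function v(tau) = 5 - 2*tau (convex and decreasing,
# as the theory requires); the root of v(tau) = 1 is tau = 2.
v = lambda t: 5.0 - 2.0 * t
vp = lambda t: -2.0
tau_star = newton_root(v, vp, sigma=1.0, tau0=0.0)
print(tau_star)  # → 2.0
```

For a genuinely nonlinear convex decreasing v, the same iteration is globally monotone from the left and locally quadratic, which is the property exploited by the SPGL1-style approach described in the text.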
One of the motivating problems for the general analysis and methods we present is the treatment of a robust misfit function ρ (such as the popular Huber penalty) in the context of sparsity promotion, which typically involves a nonsmooth regularizer φ. In section 7 we demonstrate that the sensitivity analysis can be applied to solve a sparse nonnegative denoising problem with convex and nonconvex robust misfit measures.
The proofs of all of the results are relegated to the appendix (section 8).
Notation.
For a matrix A ∈ R^{m×n}, the image and inverse image of the sets E and F, respectively, are given by the sets AE := { Ax | x ∈ E } and A⁻¹F := { x | Ax ∈ F }. For a function p : R^n → (−∞, ∞], its epigraph is denoted epi p = { (x, µ) | p(x) ≤ µ }, and its level set is denoted lev_p(τ) = { x | p(x) ≤ τ }. The function p is said to be proper if dom p ≠ ∅ and closed if epi p is a closed set. The function δ(x | X) is the indicator function of a set X, i.e., δ(x | X) = 0 if x ∈ X and δ(x | X) = +∞ if x ∉ X.
2. An inverse function theorem for optimal value functions. Let ψ_i : X ⊆ R^n → R, i ∈ {1, 2}, be arbitrary scalar-valued functions, and consider the following pair of related problems and their associated value functions:
P_{1,2}(τ):  v₁(τ) := inf { ψ₂(x) | x ∈ X, ψ₁(x) ≤ τ },
P_{2,1}(σ):  v₂(σ) := inf { ψ₁(x) | x ∈ X, ψ₂(x) ≤ σ }.
This pair corresponds to the problems P(b, τ) and P_R(b, σ), defined in section 1, with the identifications ψ₁(x) = φ(x) and ψ₂(x) = ρ(b − Ax). Our goal in this section is to establish general conditions under which the value functions v₁ and v₂ satisfy the inverse-function relationship v₂(v₁(τ)) = τ, and for which the pair of problems P_{1,2}(τ) and P_{2,1}(σ) have the same solution sets. The pair of problems P(b, τ) and P_R(b, σ) always satisfy the conditions of the next theorem, which applies to functions that are not necessarily convex.
Theorem 2.1. Let ψ_i : X ⊆ R^n → R, i ∈ {1, 2}, be as defined in P_{1,2}(τ), and define the parameter set S_{1,2} accordingly. Let S_{2,1} be defined symmetrically to S_{1,2} by interchanging the roles of the indices. Then, for every τ ∈ S_{1,2}, (a) v₂(v₁(τ)) = τ, and (b) arg min P_{1,2}(τ) = arg min P_{2,1}(v₁(τ)). Moreover, S_{2,1} = { v₁(τ) | τ ∈ S_{1,2} }, and so v₁(v₂(σ)) = σ for every σ ∈ S_{2,1}.
3. Convex analysis. In order to present the duality results of section 4, we require a few basic tools from convex analysis. There are many excellent references for the necessary background material, with several appearing within the past 10 years. In this study we make use of Rockafellar [34] and Rockafellar and Wets [36], although similar results can be found elsewhere [8,13,14,21,28,45]. We review the necessary results here. With a closed, proper and convex function h we associate the following functions:
1. Conjugate function of h: h*(y) := sup_x { ⟨x, y⟩ − h(x) }.
2. Horizon function of h: h^∞(y) := sup { h(x + y) − h(x) | x ∈ dom h }.
3. Perspective function of h: h^π(y, λ) := λ h(λ⁻¹y) if λ > 0, and +∞ otherwise.
4. Closure of the perspective function of h: the closed, proper and convex function that agrees with h^π for λ > 0 and equals h^∞(y) at λ = 0.
Each of these functions can also be defined by considering the epigraphical perspective and properties of convex sets. Indeed, the horizon function h^∞ is usually defined to be the function whose epigraph is the horizon cone of the epigraph of h (see section 3.2 below). The definition given above is a consequence of [34, Theorem 8.5].
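The conjugate h*(y) = sup_x { ⟨x, y⟩ − h(x) } can be probed numerically; the sketch below (our own illustration, not from the paper) approximates it by maximizing over a fine grid for the self-conjugate function h(x) = x²/2, whose conjugate is h*(y) = y²/2.

```python
import numpy as np

def conjugate_on_grid(h, y, xs):
    """Crude numerical conjugate: h*(y) = sup_x { x*y - h(x) } over a grid."""
    return float(np.max(xs * y - h(xs)))

h = lambda x: 0.5 * x ** 2        # self-conjugate: h*(y) = y**2 / 2
xs = np.linspace(-10.0, 10.0, 200001)
for y in (0.0, 1.0, -2.5):
    print(y, conjugate_on_grid(h, y, xs), 0.5 * y ** 2)
```

Grid maximization only works when the supremum is attained inside the grid; for positively homogeneous h (a support function), the supremum is 0 or +∞, which is why the horizon and perspective constructions are needed in the analytical calculus.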
Note that for every closed, proper and convex function h, the associated horizon and perspective function, h ∞ and h π , are positively homogeneous and so can be represented as the support functional for some convex set [34,Theorem 13.2]. Moreover, if h is a support function, then h ∞ = h π = h.
Cones.
We associate the following cones with a convex set C and a convex function h.
1. Polar cone: The polar cone of C is denoted by C° := { x* | ⟨x, x*⟩ ≤ 0 for all x ∈ C }.
2. Recession cone: The recession cone of C is denoted by C^∞ := { y | x + λy ∈ C for all x ∈ C and all λ ≥ 0 }.
3. Barrier cone: The barrier cone of C is denoted by bar(C) := { x* | for some β ∈ R, ⟨x, x*⟩ ≤ β for all x ∈ C }.
Horizon cone of h:
The horizon cone [34, Theorem 8.7] of h is denoted by hzn(h) := { y | h^∞(y) ≤ 0 }. A further excellent reference for horizon cones and functions is [7], where they are referred to as asymptotic cones and functions.
Calculus rules.
The conjugate, horizon, and perspective transformations defined in section 3.1 possess a rich calculus. We use this calculus to obtain explicit expressions for the functions ρ*, φ*, (φ*)^∞ and (φ*)^π, which play a crucial role in the applications of section 6. The calculus for conjugates and horizons is developed in many references (e.g., [8,13,14,21,28,45]); specific citations from [34] are provided. In order to establish the perspective calculus rules for affine composition and the inverse linear image, we note that addition is a special case of affine composition, and that infimal convolution is a special case of inverse linear image. Hence, we need only establish the perspective calculus formulas for affine composition and the inverse linear image: the formula for affine composition follows from [34, Theorem 9.5] and the definition of the perspective transformation; the formula for inverse linear image is established in section 8.
Affine composition. Let p : R^m → (−∞, ∞] be a closed, proper and convex function, and let h(x) := p(Ax + a) for A ∈ R^{m×n} and a ∈ R^m. Then
h*(y) = inf { p*(u) − ⟨a, u⟩ | Aᵀu = y },
h^∞(x) = p^∞(Ax),
h^π(x, λ) = p^π(Ax + λa, λ), where, for λ = 0, the value is given by the closure of the perspective.
All three functions are closed, proper and convex.
Inverse linear image. Let p : R^n → (−∞, ∞] be closed, proper and convex, and let A ∈ R^{m×n}. Let h(y) := inf { p(x) | Ax = y }. Then
h*(u) = p*(Aᵀu),
h^∞(y) = inf { p^∞(x) | Ax = y },
h^π(y, λ) = inf { p^π(x, λ) | Ax = y },
where all of the functions h, h*, h^∞, and h^π are closed, proper and convex.
4. The dual problem. For our analysis, it is convenient to consider the (equivalent) representation of P(b, τ) in terms of the perturbation function f(x, b, τ) := ρ(b − Ax) + δ((x, τ) | epi φ). Because the functions ρ and φ are convex, it immediately follows that f is also convex. This fact gives the convexity of the value function v, since it is the inf-projection of the objective function in x [34, Theorem 5.3]. We use a duality framework derived from the one described in Rockafellar and Wets [36, Chapter 11, Section H], and associate with P its dual problem D and corresponding dual value function. The dual D is derived by substituting this perturbation function into [36, Theorem 11.39].
The dual D is the key to understanding the variational behavior of the value function. To access these results we must compute the conjugate of f . For this it is useful to have an alternative representation for the support function of the epigraph, which is the conjugate of the indicator function appearing in f .
4.1. Reduced dual problem. In Theorem 4.2, we derive an equivalent representation of the dual problem D in terms of u alone. This is the reduced dual problem for P. We first present a result about conjugates for epigraphs and lower level sets.
Expressions (4.1b) and (4.1a) are easily derived from the case where τ = 0 which is established in [34,Theorem 13.5] and [34, Corollary 13.5.1], respectively. In [34], it is shown that (4.1a) is a consequence of (4.1b). In section 8 we provide a different proof of Lemma 4.1 where it is shown that (4.1b) follows from (4.1a). The arguments provided in the proof are instructive for later computations.
The conjugate f*(y, u, µ) of the perturbation function f(x, b, τ) defined in P is now easily computed, where the final equality in the computation follows from (4.1a). With this representation of the conjugate of f, we obtain the following equivalent representations for the dual problem D. The representation labeled D_r is of particular importance to our discussion. We refer to D_r as the reduced dual.
Then the value function for D has the following equivalent characterizations, where the closure operation in (4.3b) refers to the lower semi-continuous hull of the convex function b → v(b, τ). In particular, this implies the weak duality inequality v̂(b, τ) ≤ v(b, τ). Moreover, if the function ρ is differentiable, the solution u to D_r is unique.
In the large-scale setting, the primal problem P(b, τ ) is usually solved using a primal method that does not give direct access to the multiplier µ for the inequality constraint φ(x) ≤ τ . For example, P(b, τ ) may be solved using a variant of the gradient projection algorithm. However, one can still obtain an approximation to the optimal dual variable u in D r , typically through the residual corresponding to the current iterate. For this reason, one needs a way to obtain an approximation to µ from an approximation to u (i.e., given u, compute µ). Lemma 4.1 and Theorem 4.2 show that this can be done by solving the problem inf µ≥0 p τ (A T u, µ) for µ. Indeed, in the sequel we show that in many important cases there is a closed form expression for the solution µ. The following lemma serves to establish a precise relationship between the solution u to the reduced dual D r and the solution pair (u, µ) to the dual D.
If either S 1 or S 2 is non-empty, then S 1 = S 2 and equality holds in (4.4).
We choose the notation µ + ∂φ(x) to emphasize that there is an underlying limiting operation at play, e.g. see [36,Definition 8.3 and Proposition 8.12].
The final lemma of this section concerns conditions under which solutions to P and D r exist. This is closely tied to the horizon behavior of these problems and the notion of coercivity.
In particular, h is said to be 0-coercive, or simply coercive, if lim_{‖x‖→∞} h(x) = ∞.
5. Variational properties of the value function. Using D and the representation of the conjugate of the objective of P (cf. (4.2)), we can specialize [36, Theorem 11.39] to obtain a characterization of the subdifferential of the value function, as well as sufficient conditions for strong duality.
We now derive a characterization of the subdifferential ∂v(b, τ) based on the solutions of the reduced dual D_r.
and 3. If u solves D r and (5.1b) holds, there exists x such that (x, u) satisfies (5.1c).
4. If either (4.5a) and (5.1a) hold, or (4.5b) and (5.1b) hold, then ∂v(b, τ) ≠ ∅ and is characterized by (5.1e). The representation (5.1e) expresses the elements of ∂v(b, τ) in terms of classical Lagrange multipliers when µ > 0, and extends the classical theory when µ = 0. (See Lemma 4.3 for the definition of µ + ∂φ(x).) Because v is convex, it is subdifferentially regular, and so for fixed b, we can obtain the subdifferential of v with respect to τ alone [36, Corollary 10.11].
6. Applications. In this section we apply the calculus rules of section 3.3 in conjunction with Theorem 5.2 to evaluate the subdifferential of the value function in three important special cases: φ a gauge-plus-indicator (section 6.1), a quadratic support function (section 6.2), and an affine composition with a quadratic support function (section 6.3). In all cases we allow ρ to be an arbitrary convex function.
6.1. Gauge-plus-indicator. The case where ρ is a linear least-squares objective and φ is a gauge function is studied in [12]. We generalize this case by allowing the convex function ρ to be possibly non-smooth and non-finite-valued, and take where U is a nonempty closed convex set containing the origin. Here, γ (x | U ) is the gauge function defined in (1.1). It is evident from the definition of a gauge that φ is also a gauge if and only if X is a convex cone. Since 0 ∈ U , it follows from [34, Theorem ] that the conjugate can be expressed in terms of U • , the polar of the set U .
Observe that the requirement x ∈ X is unaffected by varying τ in the constraint φ(x) ≤ τ . Indeed, the problem P is unchanged if we replace ρ and φ by ρ̂ and φ̂ as defined in (6.2), with A and b replaced by Â and b̂. Hence, the generalization of [12] discussed here only concerns the application to more general convex functions ρ.
There are two ways one can proceed with this application. One can use φ as given in (6.1) or use ρ̂ and φ̂ as defined in (6.2). We choose the former in order to highlight the presence of the abstract constraint x ∈ X. But we emphasize that, regardless of the formulation chosen, the end result is the same. Lemma 6.1. Let φ be as given in (6.1). The following formulas hold: cl bar lev φ (τ ) = cl (bar (U ) + bar (X)) .
(6.3f)
If it is further assumed that
By Theorem 5.1, the subdifferential of v(b, τ ) is obtained by solving the dual problem (8.4) or the reduced dual D r . When φ is given by (6.1), the results of Lemma 6.1 show that the dual and the reduced dual take the form Moreover, if (u, s) solves (6.7), then (u, µ) solves (6.6) with µ = −γ s | U • , and We have the following version of Theorem 5.2 when φ is given by (6.1).
If either
and (6.9) holds, (6.12) then ∂v(b, τ ) ≠ ∅ and is given by (6.13).

6.1.1. Gauge penalties. In [12], the authors study the case where ρ is a linear least-squares objective, φ is a gauge functional, and X = R n . In this case, [12, Lemma 2.1] and [12, Theorem 2.2(b)] can be deduced from (6.7) and (6.13), respectively. Another application is to the case where ρ is finite-valued and smooth, φ is a norm, and X is a generalized box. In this case, all of the conditions of Theorems 5.1 and 6.2 are satisfied, solutions to both P(b, τ ) and (6.7) exist, and v is differentiable. In particular, consider the non-negative 1-norm-constrained inversion, where and ρ is any differentiable convex function. The subdifferential characterization given in Theorem 5.1 can be explicitly computed via Theorem 6.2. In the notation of (6.1), and X in (6.1) is R n + . Since the function ρ is differentiable, the solution u to the dual (6.7) is unique [34, Theorem 26.3]. Therefore, Theorem 6.2 gives the existence of a unique gradient, where x̄ is any solution that achieves the optimal value. The derivative with respect to τ is immediately given by Theorem 6.2 as (6.14) Note that (6.14) has the same algebraic form when x is unconstrained. The non-negativity constraint on x is reflected in the derivative only through its effect on the optimal point x̄.
Quadratic support functions.
We now consider the case where U ⊂ R n is nonempty, closed, and convex with 0 ∈ U , and B ∈ R n×n is positive semidefinite. We call this class of functions quadratic support (QS) functions. This surprisingly rich and useful class is found in many applications. A deeper study of its properties and uses can be found in [4]. Note that the conjugate of φ is given by If the set U is polyhedral convex, then the function φ is called a piecewise linear-quadratic (PLQ) penalty function [36, Example 11.18]. Since B is positive semidefinite, there is a matrix L ∈ R n×k such that B = LL T , where k is the rank of B. Using L, the calculus rules in section 3.3 give the following alternative representation for φ: where the final equality follows from [34, Theorem 14.5] since 0 ∈ U . Note that the function class (6.15) includes all gauge functionals for sets containing the origin. By (6.16), it easily follows that where · B denotes the seminorm induced by B, i.e., The next result catalogues important properties of the function φ given in (6.15).
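The defining expression (6.15) did not survive extraction. For the data described here (closed convex U containing the origin, positive semidefinite B), the quadratic support functions of [4] are typically written in the following form; this reconstruction is an assumption based on the cited reference:

```latex
\phi(x) \;=\; \sup_{u \in U}\,\Bigl\{\, \langle u, x \rangle \;-\; \tfrac{1}{2}\,\langle u, B u \rangle \,\Bigr\}.
```

With B = 0 this reduces to the support function of U (and hence covers gauge functionals via polarity), while the choice U = [−κ, κ]^n with B = I yields the Huber penalty discussed in section 6.2.1.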
We now apply Theorem 5.2 to the case where φ is given by (6.15).
6.2.1. The Huber penalty. A graph of the scalar component function φ i is shown in Figure 6.1. The Huber penalty is robust to outliers, since it increases linearly rather than quadratically outside the threshold defined by κ. For any misfit function ρ, Theorem 6.4 can be used to easily compute the subgradient ∂v(b, τ ) of the value function. If the regularity condition (6.21) is satisfied (e.g., if ρ is finite valued), then Theorem 6.4 implies that ∂v(b, τ ) = { (u, −µ) | (x, u) satisfy (6.23) }. In particular, if ρ is differentiable and finite-valued, u = ∇ρ(b − Ax) is unique.

6.3. Affine composition with a quadratic support function. Here H ∈ R ν×n is injective, c ∈ R ν , U ⊂ R ν is nonempty closed and convex with 0 ∈ U , and B ∈ R ν×ν is symmetric and positive semi-definite. We assume that ∃x such that Hx + c ∈ ri (dom ψ) , where dom ψ = cone U • + Ran (B) (Lemma 6.3). We show that the function φ in (6.28) is an instance of the quadratic support functions considered in section 6.2. To see this we make the following definitions: With these definitions, the two problems P(b, τ ) and minimize ρ̃(b − Ãx) subject to φ̃(x) ≤ τ are equivalent. In addition, we have the relationships Moreover, the reduced dual D r becomes Using standard methods of convex analysis, we obtain the following result as a direct consequence of Theorem 6.4 and [36, Corollary 10.11].
If either
and (6.31) holds, then ∂v(b, τ ) ≠ ∅.

Corollary 6.7. Consider the problem P(b, τ ) with φ given by (6.28). Then (x, u, r) satisfies (6.32) if and only if and either r ∈ N (Hx + c | dom ψ ) , or ∃ µ ≥ 0, w ∈ U such that Hx + c ∈ Bw + N (w | U ) and r = µw. In order to satisfy (6.32), we need to find a triple (x, u, w) with w = [w 1 w 2 ] T ∈ [0, 1] 2n so that u ∈ ∂ρ(b − Ax) and A T u = H T w = w 1 − w 2 . We claim that either w 1 (i) = 0 or w 2 (i) = 0 for all i. To see this, observe that w ∈ N (Hx + c | lev ψ (τ )), so whenever ψ(y) ≤ τ . Taking y first with its only non-zero entry in the ith coordinate, and then with its only non-zero entry in the (n + i)th coordinate, we get If x(i) < 0, from the first equation we get w 1 (i) = 0, while if x(i) > 0, we get w 2 (i) = 0 from the second equation. If x(i) = 0, then taking y = 0 gives Hence, the subdifferential ∂v is computed in precisely the same way for the Vapnik regularization as for the 1-norm.
7. Numerical example: robust nonnegative BPDN. In this example, we recover a nonnegative undersampled sparse signal from a set of very noisy measurements using several formulations of P. We compare the performance of three different penalty functions ρ: least-squares, Huber (see section 6.2.1), and a nonconvex penalty arising from the Student's t distribution (see, e.g., [3,5]). The regularizing function φ in all of the examples is the sum of the 1-norm and the indicator of the positive orthant (see section 6.1.1).
The formulations using Huber and Student's t misfits are robust alternatives to the nonnegative basis pursuit problem [18]. The Huber misfit agrees with the quadratic penalty for small residuals, but is relatively insensitive to larger residuals. The Student's t misfit is the negative log-likelihood of the Student's t distribution, where ν is the degrees of freedom parameter. For each penalty ρ, our aim is to solve the problem via a series of approximate solutions of P. The 1-norm regularizer on x encourages a sparse solution. In particular, we solve the nonlinear equation (1.3), where v is the value function of P. This is the approach used by the SPGL1 software package [12]; the underlying theory, however, does not cover the Huber function. Also, φ is not everywhere finite valued, which violates [12, Assumption 3.1]. Finally, the Student's t misfit (7.1) is nonconvex; however, the inverse function relationship (cf. Theorem 2.1) still holds, so we can achieve our goal, provided we can solve the root-finding problem. Formula (6.14) computes the derivative of the value function associated with P(b, τ ) for any convex differentiable ρ. The derivative requires ∇ρ, evaluated at the optimal residual associated with P(b, τ ). For the Huber case, this is given by the clipped residual. The Student's t misfit is also smooth, but nonconvex. Therefore, the formula (6.14) may still be applied, with the caveat that there is no guarantee of success. However, in all of the numerical experiments, we are able to find the root of (1.3).
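The two robust misfits, together with the gradients that (6.14) requires, can be sketched in Python as follows. This is an illustrative sketch rather than the SPGL1 code; the exact normalization of the Student's t misfit (7.1) is not reproduced in the text, so the standard form is assumed.

```python
import numpy as np

def huber(r, kappa):
    """Huber misfit: quadratic for |r| <= kappa, linear beyond (robust tail)."""
    small = np.abs(r) <= kappa
    return np.where(small, 0.5 * r**2,
                    kappa * np.abs(r) - 0.5 * kappa**2).sum()

def huber_grad(r, kappa):
    """Gradient w.r.t. the residual r: the residual clipped to [-kappa, kappa]."""
    return np.clip(r, -kappa, kappa)

def student_t(r, nu):
    """Student's t misfit (negative log-likelihood, constants dropped)."""
    return np.sum(np.log1p(r**2 / nu))

def student_t_grad(r, nu):
    """Smooth but nonconvex; grows sublinearly for large residuals."""
    return 2.0 * r / (nu + r**2)
```

With ρ one of these, a root of v(τ ) = σ in (1.3) can be sought by Newton's method, using (6.14) evaluated at the optimal residual for v′(τ ).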
We consider a common compressive sensing example: we want to recover a 20-sparse vector in R 512 + from 120 measurements. We use a Gaussian measurement matrix A ∈ R 100×1024 , where each entry is sampled from the distribution N (0, 1/10). We generate measurements to test the BPDN formulation according to b = Ax 0 + w + ζ, where x 0 is the true sparse vector, w ∼ N (0, 0.005 2 ) is small Gaussian error, and ζ contains five randomly placed large outliers sampled from N (0, 4). For each penalty ρ, the σ parameter is the true measure of the error in that penalty, i.e., σ ρ = ρ(ζ). This allows a fair comparison between the penalties.
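A minimal sketch of this experimental setup (in Python rather than MATLAB) could look as follows. The dimensions quoted in the text are internally inconsistent (a vector in R 512 + with 120 measurements versus A ∈ R 100×1024 ), so n = 512 and m = 120 are assumed here, as is the reading of N (0, 1/10) as variance 1/10; both are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 512, 120, 20                 # assumed dimensions (text is inconsistent)

# Gaussian measurement matrix with N(0, 1/10) entries (1/10 read as variance).
A = rng.normal(0.0, np.sqrt(0.1), size=(m, n))

# Nonnegative 20-sparse ground truth on a random support.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(0.5, 1.5, size=k)

# Small Gaussian error plus five randomly placed large outliers.
w = rng.normal(0.0, 0.005, size=m)
zeta = np.zeros(m)
zeta[rng.choice(m, size=5, replace=False)] = rng.normal(0.0, 4.0, size=5)

b = A @ x_true + w + zeta              # noisy, outlier-contaminated data

# sigma_rho = rho(zeta), here for the least-squares penalty rho(r) = ||r||^2 / 2.
sigma_ls = 0.5 * np.dot(zeta, zeta)
```

The same `b` is then fed to each formulation, with σ recomputed per penalty so that the comparison is fair.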
We expect the Huber function to outperform the least-squares penalty by budgeting the error level σ to allow a few large outliers, which will never happen with the quadratic. We expect the Student's t penalty to work even better, because it is nonconvex and grows sublinearly as outliers increase. The results in Figure 7.1 demonstrate that this is indeed the case. In many instances the Huber function is able to do just as well as the Student's t; however, often the Student's t does better (and never worse). Both robust penalties always do better than the least-squares fit. The code is implemented in an extended version of SPGL1, and can be downloaded from https://github.com/saravkin/spgl1. The particular experiment presented here can be found in tests/spgl1TestNN.m.
It remains to establish the final statement of the theorem. By (8.1), we already have that v 1 (τ ) τ ∈ S 1,2 ⊂ S 2,1 , so we need only establish the reverse inclusion. For this, let σ ∈ S 2,1 and set τ σ = v 2 (σ). By interchanging the indices and applying the first part of the theorem, we have from (8.1) that That is, τ σ ∈ S 1,2 and, by (a), Proof of the inverse linear image (section 3.3). For λ > 0, observe that Hence, by assumption, the function in (8.3) is closed proper convex and equals h π (w, λ) on the relative interior of its domain. Since h π (w, λ) is closed, (8.2) implies that these functions must coincide.
Proof of Theorem 4.2. Combining D with (4.1b) and (4.2) gives the stated representation, where the final equality follows from (4.1b). The equivalence (4.3a) follows from the definition of the conjugate, and the equivalence (4.3b) follows from [34, Theorems 16.3 and 16.4]. The uniqueness of u when ρ is differentiable follows from the essential strict convexity of ρ * [34, Theorem 26.3].
Proof of Lemma 4.3.
Part 1. The inequality follows immediately from (4.1b). But it is also easily derived from the observation that if µ > 0 and x ∈ lev φ (τ ), then a pointwise bound holds; taking the sup over x ∈ lev φ (τ ) gives the result. (ii) The Fenchel-Young inequality tells us the following (cf. [34, Lemma 26.17] or [45, Corollary 2.9.5]): Let g : R → R ∪ {+∞} be a convex function and τ ∈ R be such that τ > inf g. Then the stated bound holds for every x ∈ lev g (τ ). We divide the proof into two parts: (A) if S 1 ≠ ∅, show S 1 ⊂ S 2 , and (B) if S 2 ≠ ∅, show S 2 ⊂ S 1 and equality holds in (4.4). Combined, these implications establish Part 2 of the lemma.
Finally, consider the case where 0 = s ∈ N (x | dom φ ). Then so 0 ∈ S 1 and 0 ∈ S 2 . If µ > 0, then this string of equivalences also implies that Putting this all together, we get that S 1 ⊂ S 2 .
To see (5.1d), note that (4.5a), (5.1a), and Part 1 of Lemma 4.5 imply that the primal objective is coercive, so a solution x exists. Hence, by Part 2, there exists u so that (x, u) satisfies (5.1c).
Analogously, (4.5b), (5.1b), and Part 2 of Lemma 4.5 imply that the solution u to the dual exists, and so by Part 3, there exists x such that the pair (x, u) satisfies (5.1c). In either case, the subdifferential is nonempty and is given by (5.1d).
Therefore, an optimal s in this infimal convolution exists giving µ = γ s | U • as the optimal solution to the first min in (6.5d).
Next we show that the λ given in (6.20) solves (6.18). First observe that the optimal λ must be greater than γ (w | U ), and from elementary calculus, the minimizer of the hyperbola 1 2λ w 2 B + τ λ for λ ≥ 0 is given by w B / √ 2τ . Therefore, the minimizing λ is given by (6.20). Substituting this value into (6.18) gives (6.19).
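The elementary-calculus step behind (6.20) can be made explicit by setting the derivative of the hyperbola to zero:

```latex
\frac{d}{d\lambda}\Bigl[\tfrac{1}{2\lambda}\,\|w\|_B^2 + \tau\,\lambda\Bigr]
  \;=\; -\frac{\|w\|_B^2}{2\lambda^2} + \tau \;=\; 0
  \quad\Longrightarrow\quad
  \lambda \;=\; \frac{\|w\|_B}{\sqrt{2\tau}},
```

which is the minimizer quoted above, valid whenever it does not fall below the lower bound γ (w | U ).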
It is now easily shown that the function in (6.19) is lower semi-continuous. Therefore, the equivalence in (6.18) follows from (4.1b).
Definition and extraction of 2D shape indices of intracranial aneurysm necks for rupture risk assessment
Purpose Intracranial aneurysms are local dilations of brain vessels. Their rupture, as well as their treatment, is associated with high risk of morbidity and mortality. In this work, we propose shape indices for aneurysm ostia for the rupture risk assessment of intracranial aneurysms. Methods We analyzed 84 middle cerebral artery bifurcation aneurysms (27 ruptured and 57 unruptured) and their ostia, with respect to their size and shape. We extracted 3D models of the aneurysms and vascular trees. A semi-automatic approach was used to separate the aneurysm from its parent vessel and to reconstruct the ostium. We used known indices to quantitatively describe the aneurysms. For the ostium, we present new shape indices: the 2D Undulation Index (UI 2D ), the 2D Ellipticity Index (EI 2D ) and the 2D Noncircularity Index (NCI 2D ). Results were analyzed using the Student t test, the Mann–Whitney U test and a correlation analysis between indices of the aneurysms and their ostia. Results Of the indices, none was significantly associated with rupture status.
Most aneurysms have an NCI 2D below 0.2. Of the aneurysms that have an NCI 2D above 0.5, only one is ruptured, which indicates that ruptured aneurysms often have a circular-shaped ostium. Furthermore, the ostia of ruptured aneurysms tend to have a smaller area, which is also correlated with the aneurysm's size. While other variables were also significantly correlated, strong linear correlations can only be seen between the area of the ostium and the aneurysm's volume and surface. Conclusion The proposed shape indices open up new possibilities to quantitatively describe and compare ostia, which can be beneficial for rupture risk assessment and subsequent treatment decisions. Additionally, this work shows that the ostium area and the size of the aneurysm are correlated. Further longitudinal studies are necessary to analyze whether stable and unstable aneurysms can be distinguished by their ostia.
Introduction
Intracranial aneurysms (IAs) are pathological dilations of the cerebral blood vessels. Such a dilation takes place locally and leads to a bulging of the vessel wall. Aneurysms can rupture, leading to hemorrhage in the brain. Approximately half of the ruptures are fatal, and one-third of surviving patients suffer long-term from neurological or cognitive deficits [27]. Approximately 20% of patients carrying IAs have multiple intracranial aneurysms [20].
Localization, internal blood flow and geometry of an aneurysm are important indicators for the risk of rupture [12,25,28]. Different indices were developed to characterize the shape of the aneurysm sac [30]. Ratios between the aneurysm height or volume and its neck size are also calculated [34]. Quantitative descriptions of the size and shape of an aneurysm enable the comparison of different cases. In the best case, these descriptors can be used to differentiate sets of aneurysms with low and high risk of rupture. The latter may give an indication of the urgency of treatment. In contrast, only size indices like the circumference and area are determined for the neck curve. However, the shape of the neck curve can have a strong impact on the course of treatment and chance of recurrence [33]. There are multiple methods of treatment for IAs, some of which, such as coiling, require precise measurement of the aneurysm's neck curve to plan the intervention [10].

Fig. 1 In a the parent vessel (light red), the neck curve (green) and the ostium (blue) are depicted. In b the aneurysm is separated from the vessel at its neck curve. In c the reconstructed ostium is shown
By segmenting a 3D model of the vessel, the aneurysm can be extracted and the neck curve reconstructed [35]. The 3D triangulation of the neck curve forms the ostium, i.e., the area where the blood flows from the parent vessel into the pathologic dilatation. The calculation of indices using the 3D model is more reliable, since when viewing the 2D image slices or 2D angiographic projections, the perceived size of the neck curve may vary depending on the projection angle or interobserver variability [31,37].
Despite all efforts, existing features do not suffice to reliably differentiate between aneurysms with a tendency to rupture and safe ones. In this work, we want to propose descriptors for the shape of the ostium and compare quantitative size and shape indices of ruptured and unruptured IAs and their ostia.
Intracranial aneurysm selection and surface mesh extraction
For this study, we analyzed our intracranial aneurysm database comprising approximately 300 patient datasets acquired in daily clinical practice as well as the Aneurisk repository [2]. Due to the large influence of the aneurysm localization on rupture risk and other properties [13,19], we chose a subset of aneurysms at the middle cerebral artery (MCA) bifurcation with known rupture state. Intracranial aneurysms most often occur at the anterior communicating artery, the internal carotid artery and the MCA [24], whereas our database provided the largest subset for the last category. As a result, we prepared 84 MCA bifurcation aneurysms, of which 27 were ruptured and 57 were unruptured.
For the extraction of the 3D surface meshes of the aneurysm and the parent vessel including the aneurysm's neck curve, we used the previously described approach [35]. Hence, the vessel's centerline is employed for semi-automatic extraction of the neck curve that virtually separates the aneurysm from the parent vessel. The centerlines were extracted with the Vascular Modeling Toolkit [3]. Next, we used the neck curves to separate the aneurysm from the parent vessel. Afterwards, we triangulated the neck curve to reconstruct the ostium, see Fig. 1. Based on the extracted surface meshes of the aneurysm sacs and the ostia, we can compare them and automatically extract parameters that quantitatively describe them.
Parameter extraction based on intracranial aneurysm neck curves
We developed an application to automatically calculate and display different size- and shape-describing indices using MATLAB 2020a (MathWorks, Natick, USA). Size indices are defined as size-related descriptors of the morphology, while shape indices only refer to size-invariant parameters that focus on ellipticity and concavity. All indices are rotation-independent. For the separated aneurysm sacs, volume and surface were calculated as size indices. We used the Undulation Index (UI), Ellipticity Index (EI) and Nonsphericity Index (NSI) as defined in [30] as shape indices. The ostia were projected onto a 2D plane using a principal component analysis. Based on this, we calculated the area and circumference of each projection and its convex hull as size indices. We used the shape indices defined by Raghavan et al. [30] and adapted them to work with 2D shapes:

- the 2D Undulation Index UI 2D ,
- the 2D Ellipticity Index EI 2D , and
- the 2D Noncircularity Index NCI 2D .
They are explained in the following and presented in Table 1. The 2D Undulation Index (UI 2D ) calculates the concavity of the ostium border. It is calculated from the area of the ostium A and the area of the convex hull of the ostium A ch . An UI 2D of 0 represents a convex shape of the ostium. The larger the result, the greater the curvature and therefore also the concavity on the ostium border.
The 2D Ellipticity Index (EI 2D ) is a measure for how well the ostium may be fitted to an ellipse. A ch describes the area and C ch the circumference of the convex hull of the aneurysm. The convex hull is used for the calculation to avoid that undulations of the shape influence the index. The EI 2D varies from 0 to 1, being 0 for a perfect circle and increasing with growing ellipticity.
The 2D Noncircularity Index (NCI 2D ) is a measure of the deviation of the ostium's shape from a perfect circle. It is calculated similar to the EI 2D , but uses the original area A and circumference C of the ostium.
Ostia whose index values are close to zero have the approximate shape of a circle, while larger values indicate strongly elliptical or concave shapes.
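Since Table 1 with the closed-form definitions is not reproduced above, the following Python sketch assumes the natural 2D analogs of Raghavan's 3D indices: UI 2D = 1 − A/A ch , and isoperimetric-deficit forms for EI 2D and NCI 2D (both zero for a circle and growing with ellipticity or concavity, consistent with the descriptions above). The PCA projection and the polygon measurements follow the procedure described in the text; the exact formulas are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def project_to_plane(points3d):
    """Project 3D neck-curve points onto their best-fit plane via PCA (SVD)."""
    centered = points3d - points3d.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:2].T          # keep the two principal directions

def polygon_area_perimeter(pts):
    """Shoelace area and perimeter of a closed 2D polygon (ordered vertices)."""
    x, y = pts[:, 0], pts[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    per = np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum()
    return area, per

def shape_indices_2d(pts):
    """UI_2D, EI_2D, NCI_2D for an ordered 2D ostium contour.
    The closed forms below are assumed 2D analogs of Raghavan's indices."""
    A, C = polygon_area_perimeter(pts)
    hull = ConvexHull(pts)
    A_ch, C_ch = hull.volume, hull.area  # in 2D: .volume = area, .area = perimeter
    ui = 1.0 - A / A_ch                           # 0 for a convex contour
    ei = 1.0 - 2.0 * np.sqrt(np.pi * A_ch) / C_ch # 0 for a circle (hull-based)
    nci = 1.0 - 2.0 * np.sqrt(np.pi * A) / C      # 0 for a circle (raw contour)
    return ui, ei, nci
```

For an ordered neck-curve polygon in 3D, one would project first and then evaluate the indices on the resulting 2D contour.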
Statistical analysis
We performed a statistical analysis of the resulting shape index values to assess their predictive power regarding aneurysm rupture risk, based on a statistical comparison between the ruptured and unruptured aneurysm groups. Therefore, we used two-tailed independent Student t tests and the Mann–Whitney U test with a significance level of 0.05 [15]. The false discovery rate correction was applied to the significant p value [6].
In addition, the parameters of the ostium and aneurysm are tested regarding possible correlations between shape of the aneurysm sac and its ostium, using the Pearson correlation coefficient [1]. In our search for correlations, we aim to analyze how size and shape indices of the aneurysm sac and ostium influence each other.
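The statistical pipeline described above can be sketched with SciPy; the Benjamini–Hochberg false-discovery-rate step is written out by hand so no extra package is needed. Function names are illustrative, not taken from the authors' MATLAB tool.

```python
import numpy as np
from scipy import stats

def compare_groups(ruptured, unruptured):
    """Two-tailed Student t test and Mann-Whitney U test for one index."""
    t_p = stats.ttest_ind(ruptured, unruptured).pvalue
    u_p = stats.mannwhitneyu(ruptured, unruptured,
                             alternative="two-sided").pvalue
    return t_p, u_p

def benjamini_hochberg(pvals):
    """False-discovery-rate (Benjamini-Hochberg) adjusted p values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adj = p[order] * m / np.arange(1, m + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out

def correlate(x, y):
    """Pearson correlation coefficient and its p value."""
    r, p = stats.pearsonr(x, y)
    return r, p
```

Applied per index, `compare_groups` yields the raw p values, `benjamini_hochberg` the corrected ones, and `correlate` the entries of a correlation matrix such as the one in Fig. 3.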
Size and shape indices with respect to rupture state
In total, 84 MCA bifurcation aneurysms were used, including 27 ruptured and 57 unruptured cases. There are no significant results for the size and shape indices for the aneurysm sacs. Of our derived indices for the ostium, only the NCI 2D was significantly associated with the aneurysm's rupture status at first, see Table 2. However, the false discovery rate correction yields a p value for the NCI 2D of 0.2368, which is no longer below the significance level of 0.05.
More than 80% of the ostia, both ruptured and unruptured, had small values between 0 and 0.2. Eleven ostia had an NCI 2D above 0.5, of which only one belonged to a ruptured aneurysm, see Fig. 2a.
Correlation with aneurysm indices
Furthermore, we compared the parameters of the ostium with the parameters describing the aneurysm w.r.t. a possible correlation using the Pearson correlation coefficient, see Fig. 3. Even though several variables were statistically significantly correlated ( p ≤ 0.05), the only strong linear correlation can be seen between the area of the ostium and the volume and surface of the aneurysm. Therefore, ostia with increasing area indicate a corresponding increase in the size of the aneurysm sacs. Our data also show that aneurysms with a small ostium are more frequently ruptured, see examples in Fig. 2b. Thus, small aneurysms are more frequently ruptured in our dataset. However, neither aneurysm size nor ostium area was significantly related to aneurysm rupture status, see Table 2.
Discussion
The aneurysm size is an often used parameter for assessing the rupture risk of an aneurysm [14]. At the same time, many aneurysms that rupture are small [4,16]. Many other parameters, such as localization of the aneurysm and parameters describing the internal blood flow, also play important roles [13]. The PHASES score was developed to calculate the rupture risk of an aneurysm within the next five years based on easily obtainable parameters, like age, hypertension and population [17]. However, for multiple aneurysms, the PHASES score is not sufficient, since it might severely underestimate the rupture risk, as only the largest aneurysm contributes to rupture risk evaluation. Furthermore, an external evaluation has shown that the PHASES score results in low specificity for the classification of ruptured and unruptured as well as high-risk and low-risk aneurysms [7].
These data give valuable hints about rupture-prone aneurysms. Research is being conducted to find further simple and reliable parameters. Thus, many studies in the past years focused on 3D reconstructions of the aneurysm sac, its size and other shape indices. There are also various methods for extracting the ostium [21,26,35], but it is usually examined only in terms of its size, even though the additional calculation of the shape of an already extracted ostium does not represent a considerable additional effort.
We propose shape indices to reliably describe morphological features of the ostium that are difficult to obtain manually. These shape descriptors allow for a standardized specification and comparison of the ostia. To distinguish between different aspects of the shape, we presented three indices based on generally known indices for the description of the aneurysm sac. The UI 2D is a measure for the concavity of the ostium's border, while the EI 2D calculates the ellipticity of the ostium. The NCI 2D combines the two previously described indices and measures whether an ostium is rather circular and convex or elliptical and concave.
The neck shape of an aneurysm is considered important when deciding about treatment methods [18]. Currently, treatment decisions are mostly based on the size of the aneurysm and its neck. Large aneurysms (diameter > 10 mm) or wide-necked aneurysms (neck diameter ≥ 4 mm) are often said to be uncoilable [22]. Studies have shown that their treatment using coils has low occlusion rates and high recurrence rates [5]. Therefore, additional techniques like flow diversion and intrasaccular flow disruption are mainly used for the treatment of large and wide-necked aneurysms. However, these techniques are still assumed to be influenced by the morphology of the aneurysm neck. High neck ratios could lead to lower occlusion rates [29], while a large ostium might cause longer occlusion times [36]. The exact shape of the ostium is not considered in these studies. Therefore, further studies examining the relationship of the proposed indices and occlusion rates as well as recurrence rates should be performed. They might lead to more accurate predictions of the treatment outcome and thus support choosing the appropriate treatment methods.

Fig. 3 Correlation of the shape parameters of the ostia and the parameters of the aneurysm sacs. The Pearson correlation coefficient is provided at the top left corner of each diagram, and significant correlations are highlighted in red
Besides the ostium indices, we calculated the volume, surface, UI, EI and NSI for each aneurysm sac. In our statistical analysis, no ostium index was significantly associated with rupture status. Of the ostia with an NCI 2D above 0.5, only one belonged to a ruptured aneurysm. This may indicate a higher rupture probability of MCA bifurcation aneurysms with circular ostia, see Fig. 2a. Unlike in the study of Raghavan et al. [30], none of the shape-describing indices for the aneurysm sac have shown to be significant indicators for the rupture risk. One possible explanation for this is the localization of the aneurysms. Raghavan et al. used cerebral aneurysms of different localization, whereas in our work only MCA bifurcation aneurysms were considered.
Although the area and circumference of the ostium were not significantly associated with the aneurysm's rupture status, we observed that ruptured MCA aneurysms of our dataset exhibit smaller ostium areas than the unruptured ones, see also the examples in Fig. 2b. Given the fact that ostium area exhibits a strong linear correlation with the volume and surface, and the presence of smaller ostium areas of the ruptured aneurysms, we think the theory that larger aneurysms have a higher risk of rupture than smaller ones, an assumption that is also used in the calculation of the PHASES score, might not be the best solution for this complex relationship. As shown in Table 2, the analysis of the ostium area w.r.t. rupture status got smaller p values than the aneurysm volume and the aneurysm surface. We think this analysis could reveal an important trend, and we assume that the ostium area might have a larger influence on aneurysm rupture risk than the evaluation of the aneurysm size without considering the ostium area. This trend should be analyzed in future work for various aneurysm locations. The indices presented are dependent variables, but more independent variables could also be tested in the future. However, the problem of multiple comparisons should be counteracted in this case.
In addition to the morphology of the ostium, the blood flow in the aneurysm neck is also of interest for treatment methods, e.g., flow diverters [23]. However, it is complex to calculate and therefore not used in clinical practice outside of studies. Further studies may use the presented indices to investigate a relationship between ostium shapes and flow patterns. If the shape of the ostium can be used to infer certain flow patterns, this can be a great advantage for treatment decisions.

Fig. 4 On top, ostia from two ruptured aneurysms with a small shape indices and b large indices are shown. Below are ostia from two unruptured aneurysms with c small and d large shape indices
None of the presented indices were significantly associated with the aneurysm's rupture status in the presented dataset and therefore do not allow a clear distinction of ruptured from non-ruptured aneurysms. However, they do provide new clues about the nature of the aneurysms and allow a further quantitative description and comparison of different cases. It is difficult or even impossible to distinguish ostia of ruptured and unruptured aneurysms qualitatively, since both classes contain ostia of various shapes, see Fig. 4, and it is hard to estimate which one is more concave or elongated only by viewing the images of the 2D shapes. Therefore, the shape indices give a more reliable rating of concavity and ellipticity. Thus, they can be supportive in the evaluation of aneurysms.
Furthermore, the division into ruptured and unruptured aneurysms is not optimal, as theoretically any aneurysm could rupture at some point in the future. Also, the aneurysms might grow and change their morphological shape. Therefore, some studies used the terms stable and unstable aneurysms, where stable aneurysms are defined as unruptured aneurysms that have not grown in size by more than 1.0 mm for at least 12 months [9]. Longitudinal studies show that the shape of the aneurysm sac of many unstable aneurysms changes over time [8]. Particularly, the NSI often differs between stable and growing aneurysms [11]. Similar to our results, in the longitudinal study of Ramachandran et al. [32], the NSI was not significantly different between stable and unstable aneurysms. They presumed that the differences between the studies are due to a selection bias. However, the necessary follow-up information required to make a distinction between stable and unstable is not available for our data set, since it was acquired within the clinical routine. Nevertheless, the parameter can influence upcoming clinical decisions. As can be seen from the correlation analysis, an increase in the size of the aneurysm can also be used to infer a change in the ostium and vice versa, see Fig. 3.
The presented morphological analysis of the ostium is embedded into a MATLAB software prototype. Thus, it is easily accessible for clinical researchers and it does not require expensive resources, e.g., for blood flow simulations. Based on our discussions with clinical researchers, the usage of MATLAB-based software prototypes is acceptable for them. On the other hand, this analysis induces additional work load for the clinicians. This is not justified for easy cases or aneurysms with high rupture risk that should be treated immediately, but it might be beneficial for complex cases where a decision must be made as to whether and in what form treatment should be carried out. In addition, the shape parameters could be important for monitoring the aneurysm growth, since they allow for a quantitative comparison of the aneurysm neck over time.
Conclusion
In this work, we propose indices to quantitatively describe the shape of the ostium. We derived parameters describing the ostia based on commonly used 3D shape parameters: the 2D Undulation Index (UI2D), the 2D Ellipticity Index (EI2D), and the 2D Noncircularity Index (NCI2D). For the statistical evaluation of ostium shape and rupture risk, we evaluated 84 cerebral aneurysms. To account for the dependency of localization and rupture risk, we only considered aneurysms at the middle cerebral artery bifurcation. None of the parameters achieved statistical significance concerning the distinction into ruptured and non-ruptured aneurysms. However, they might have potential for longitudinal analysis since they allow for quantitatively characterizing the aneurysm neck. Based on our correlation analysis, the ostium's area correlates with the aneurysm sac's surface area and volume. This might be an indication of the importance of the ostium surface area for future analyses. Finally, the shape descriptors can be beneficial in terms of treatment decisions, since the outcome also depends on the aneurysm's neck shape.
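As an illustration of how a 2D shape descriptor for a polygonal ostium outline can be computed, the sketch below evaluates a simple noncircularity measure via the isoperimetric quotient 4πA/P². Note this is a generic proxy only: the paper's exact definitions of UI2D, EI2D, and NCI2D are not reproduced here, and the function names are our own.

```python
import math

def polygon_area_perimeter(pts):
    """Shoelace area and perimeter of a closed 2D polygon (vertices in order)."""
    area, perim = 0.0, 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        area += x1 * y2 - x2 * y1            # shoelace term
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def noncircularity(pts):
    """1 minus the isoperimetric quotient 4*pi*A/P^2: 0 for a circle,
    growing toward 1 for elongated or undulating outlines."""
    area, perim = polygon_area_perimeter(pts)
    return 1.0 - 4.0 * math.pi * area / perim ** 2
```

For a unit square this yields 1 − π/4 ≈ 0.215, and an elongated 4:1 rectangle scores noticeably higher, mirroring the intuition behind quantitative ellipticity and noncircularity ratings.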
In future work, the blood flow at the ostium is of interest when choosing a treatment method. Since flow simulations are in most cases too complex to be applied in daily clinical practice, a possible relationship between the shape of the ostium and certain flow patterns would be of interest. For this purpose, flow simulations for the aneurysms could be carried out in further studies. Furthermore, the relation between the presented indices and the outcome of different treatment methods with respect to occlusion and recurrence rates could be investigated in the future. This might support the selection of appropriate treatment methods.
Since this work was implemented in MATLAB, which is already used by clinical researchers, an extension of the functionalities is easy to implement.
Funding Open Access funding enabled and organized by Projekt DEAL.
Conflict of interest
The authors Sarah Mittenentzwei, Oliver Beuing, Belal Neyazi, I. Erol Sandalcioglu, Naomi Larsen, Bernhard Preim and Sylvia Saalfeld declare that they have no conflict of interest.
Ethical standards All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. For this type of study, formal consent is not required.
Informed consent For this type of study, formal consent is not required.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2021,
"sha1": "f7caf13eb4b532d749a2eb608258923279ca445d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11548-021-02469-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f7caf13eb4b532d749a2eb608258923279ca445d",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Exploring Motifs In Towe Songke, Manggaraian Ethnic Woven Fabric, In Mathematics Perspective
Received March 21, 2020; Revised June 25, 2020; Accepted July 13, 2020.
The connection between mathematical content and the cultures of learners in mathematics education should be acknowledged and explored. This research, conducted using a qualitative research approach with ethnographic methods, explored the relationship between formal mathematics, especially geometric patterns, and the motifs of the Manggarai ethnic woven fabric known as Towe Songke in Cibal, Manggarai Regency, a rural area in East Nusa Tenggara, Indonesia. A total of three weavers, aged 20 to 40, were selected based on their weaving knowledge and communication skills. Data were obtained through interviews, observations, field notes, and documentation. The research showed how mathematics learning on subjects such as geometry and geometric transformation can be associated with the local cultural context of Manggarai. This study identified line symmetry and the effect of geometric transformations (translation, reflection, rotation, and glide reflection) in several motifs of Towe Songke. Most of the motifs found in Towe Songke form Frieze Pattern F7, because these motifs exhibit translation, horizontal reflection, vertical reflection, and half-turn rotation symmetry.
INTRODUCTION
Mathematics, more than any other subject, was long considered to be culture-free; as a result, many educators held the perspective that mathematics education had no need to take the culture of students into account (Kusuma et al., 2017; Pablo-hoshino, 2009; Presmeg, 2014). Now, however, the need for a connection between mathematical content and the cultures of learners in mathematics education is being acknowledged and explored. Mathematics learning needs to provide a bridge between mathematics in the everyday world, based on local culture, and mathematics in school (Hanim et al., 2019). The connection between mathematics and culture is, according to experts, known as ethnomathematics (Abdullah, 2017; Biembengut, 2016; Rosa & Orey, 2013). Biembengut (2016) defined ethnomathematics as a way for a culture to understand and use mathematics in everyday life, usually related to interesting and informative cultural issues and mathematically rich information. Furthermore, Rosa & Orey (2013) stated that ethnomathematics comprises the mathematical practices of cultural groups that can be identified and considered as studies of mathematical ideas found in certain cultures. Meanwhile, Abdullah (2017) stated that ethnomathematics shifts mathematics from the places where it has been established and developed (universities and schools) and spreads it to the world, in its cultural diversity and daily activities. It can thus be concluded that ethnomathematics is the study of mathematical concepts and activities rooted in a people's cultural context, in which people try to understand and explore the culture that exists in everyday life in relation to formal mathematics, so that mathematics becomes contextual, interesting, and informative.
In order to accommodate the role of culture in mathematics learning, researchers need to find the link between teaching materials and students' culture to encourage students' development of conceptual learning materials. The research, conducted in Manggarai Regency, East Nusa Tenggara, was designed to explore the relationship between formal mathematics (e.g., geometric shapes, patterns, and area) and the Manggarai ethnic woven fabric known as Towe Songke. Towe Songke (as seen in Figure 1) is traditionally woven by mothers in certain areas of Manggarai, Flores, NTT, especially in Todo and Cibal. This study in particular discusses the motifs found in Towe Songke.
Figure 1 Towe Songke, Manggarai Ethnic Woven Fabric
Towe Songke, the Manggarai ethnic woven fabric, is deeply familiar in the lives of the Manggarai people. Manggarai people, men and women, wear Towe Songke at traditional rituals, at liturgical ceremonies in the church, at births and marriages, and to wrap people who have died. It is also used by dancers in the war dance known as Caci. In daily life, Towe Songke is also used by the Manggarai people when they bathe and sleep. Towe Songke is commonly used by the Manggarai community as a gift between families on various occasions, or given to honored guests who come to visit. Nowadays, it is also modified into a woven fabric used as material for making suits, shirts, and dresses. In short, it is one of the mandatory attributes used on various occasions and has become a hallmark of Manggarai culture. Towe Songke, as seen in Figure 1, has a black base color. Black is used because, according to Manggarai beliefs, it shows the greatness or majesty of the Manggarai people. Some weavers, however, have begun to use base colors such as maroon, blue, pink, and orange instead of black. This is tailored to market demands for bright colors. Usually, woven fabrics like this are used as material for making suits or dresses worn at certain formal events. Although the colors used differ, Towe Songke fundamentally has unique motifs that are full of meaning.
Towe Songke has various motifs and patterns; the motifs usually used are the Jok, Wela Kaweng, Ranggong, Su'i, Ntala, Wela Runu, and Mata Manuk motifs (Senita & Neno, 2018). The Jok motif is the basic motif that shows unity within the Manggarai community, namely unity with God as the ruler of the universe, with humans as the inhabitants of the universe, and of human beings with their natural surroundings. The Wela Kaweng motif shows the relationship of interdependence between humans and the surrounding environment, especially plants, in this case Kaweng plants, whose leaves and flowers are believed to be able to treat wounds of pets and livestock. The Ranggong motif resembles the image of a spider, a symbol of the honesty and hard work undertaken by the Manggarai community to earn a fortune in their lives. The Ntala motif is closely related to the sky-high hopes of reaching the stars (Ntala) that are often echoed in the prayers of the Manggarai people for health, long life, and success. The Wela Runu motif describes the humility of the Manggarai people, who want to be like a small flower that is beautiful and beneficial to its surroundings. The Su'i motif, which is always present in every Towe Songke, symbolizes the end of everything, namely that everything has an end and a limit.
ISSN: 2548-8163 (online) | ISSN: 2549-3639 (print) SJME Vol. 4, No. 2, July 2020: 1-12
The link between Towe Songke's motifs and mathematics seems important, since it can be used as a bridge for teaching mathematics from contextual situations that are close to students' lives. It is interesting to see the accuracy of the calculations involved in making Towe Songke with simple tools, a practice dating from ancient times, because it requires fairly good mathematical ability from the weavers. Learning mathematics in the context of Manggarai culture is intended to instill a sense of love for the culture, to make the educated and the younger generation willing to preserve and develop cultural values, and to make them realize that culture is the first and foremost place in which one's character and personality are shaped. The first focus of this research was to uncover mathematical facts in the motifs of Towe Songke. The knowledge gained from this study might produce a new approach to learning mathematics, from both a mathematics and a mathematics education perspective. The second focus sought to reveal the extent to which Songke motifs can be used to facilitate students' understanding in mathematics learning. It is hoped that this research provides an interesting perspective on how to explore mathematical concepts based on the situation of Manggarai culture. Specifically, the following research questions were addressed: (1) how are Songke motifs made in relation to geometric shapes? (2) how can Songke motifs be used to facilitate students' understanding in mathematics learning?
METHOD
Two concepts suited this research well. First, spontaneous mathematics, which means that each human being and each cultural group spontaneously develops certain mathematical methods (D'Ambrosio, 1985). In Manggarai culture, units of length are not standardized. For example, units of length in the daily life of students are the paga and the depa, and land is divided using the thumb (ponggo). Second, oral mathematics, as Carraher, cited in (Gerdes, 2000), puts it: "in all human societies there exists mathematical knowledge that is transmitted orally from one generation to the next. For example, the exact planting time was passed down from generation to generation by basing it on the full moon." It is important to study the relationship between the history of mathematics and the reality of students. This dimension directs students to examine the nature of mathematics in terms of understanding how mathematical knowledge is allocated in their individual and collective experiences. Thus, knowledge is built from the interpretation of the ways humans have analyzed and explained mathematical phenomena throughout history. This is why it is necessary to teach mathematics in a historical context, so that students can understand the evolution of mathematical knowledge and the contributions others have made to its ongoing development (Rosa & Orey, 2013). This research was conducted using a qualitative research approach, with ethnographic methods. According to Jerome Kirk and Marc Miller (Kirk & Miller, 1986), qualitative research is a social science approach that observes humans in their territory and interacts with them in their own language and terms. Ethnography was originally developed by anthropologists, but with corresponding adaptations researchers in other fields, including mathematicians, began using it.
Gall, Gall, and Borg (Gall et al., 2007) specifically state that the ethnographic method of research is a qualitative research procedure for describing, analyzing, and interpreting the patterns of behavior, beliefs, and language of a particular cultural group that develop over time. Ethnography involves intensive study of certain cultural features and patterns in these features.
Based on the definition above, researchers in this study used ethnographic methods because this study examined the relationship between Manggaraian culture and mathematics specifically to explore mathematical concepts and ideas contained in motifs of Towe Songke, weaving ethnic woven fabric in Manggarai. Furthermore, this research focused on the relationship between motifs of Towe Songke and the mathematical concepts, particularly the geometry. By using the ethnographic method of qualitative research methods, researchers will provide a cultural perspective in the contextual learning process.
Quantitative data collection can employ interview approaches using more closed-ended procedures in which the researcher identifies set response categories (Creswell, 2012). Data collection in this research was conducted using ethnographic principles, namely interviews, observations, field notes, and documentation. Data were collected through semi-structured interviews, since the interview is considered one of the most powerful ways in which researchers try to understand fellow human beings. The one-on-one interview approach was used: during the data collection process, the researcher asked questions to, and recorded answers from, only one participant at a time. Foster & Cresap (2012) state that in observations, researchers go directly to the field to observe the behavior and activities of individuals at the study location. The observations carried out in this study were frank observations, meaning that in collecting data the researchers told the data sources frankly that they were conducting research. While making these observations, the researchers kept field notes to record all the information. At this stage, the researchers observed and took notes on all the information related to the Manggarai ethnic woven fabric.
Once the data were collected, they were analyzed through the following steps: reducing the data, presenting the data in a short description and a table, and drawing conclusions. The validity of the data was tested using credibility, transferability, dependability, and confirmability tests.
RESULTS AND DISCUSSION
The primary purpose of the present study was to explore the mathematical elements found in the Manggarai ethnic woven fabric, Towe Songke. Songke motifs can be used to facilitate students' understanding in mathematics learning. The research was carried out in Compang Cibal Village and Lenda Village, Cibal Barat District, Manggarai Regency, East Nusa Tenggara. To obtain research data, the researchers targeted three informants who worked as weavers and were able to explain the meaning of each motif printed on Towe Songke. In the course of the research, however, another informant emerged who was able to explain the meaning of each motif found in Towe Songke. This informant had not been targeted as a data source and was not a weaver, but had knowledge of Manggarai culture. The information provided by this additional informant was quite accurate in answering the questions given and was confirmed by the first informant, so the researcher treated this person as an informant as well. There were three phases in this research, namely: (1) the research planning phase, (2) the data collecting phase, and (3) the data analysis phase. In phase 1, the research planning phase, the researchers formulated the problems and purpose of the research and specifically determined its focus and scope. While formulating the problems and purposes of the research, the researchers conducted several interviews with cultural experts and reviewed the literature to strengthen their knowledge of Manggarai culture. At this stage, the researchers conducted this analysis to determine the focus of the research, narrowing the area of research so that it became more focused.
In phase 2, the researchers determined the subjects of the research, conducted interviews with the research subjects and asked several descriptive questions, made observations of the process of weaving Towe Songke, kept field notes, and took pictures supporting the data. The research was carried out in Desa Compang Cibal and Desa Lenda, Kecamatan Cibal Barat, Manggarai Regency. Site selection was based on the consideration that, for generations, women in this area have woven Towe Songke. To obtain research data, the researchers targeted three informants who worked as weavers and were able to explain the meaning of each motif printed on Towe Songke. In the course of the research, however, another informant (the brother-in-law of the first informant) emerged who was able to explain the meaning of each motif found in Towe Songke. This informant had not been targeted as a data source and was not a weaver, but had knowledge of Manggarai culture. The information provided by this additional informant was quite accurate in answering the questions given and was confirmed by the first informant, so the researcher treated him as an informant as well. The presence of this additional informant was not planned; he happened to be present when the researcher conducted the interview. During the interviews, the researchers asked questions, kept field notes, and took pictures to support the research data.
In phase 3, after reducing the data, the researchers analyzed the results of the ethnographic interviews and the observations made during the research, conducted the analysis, drew conclusions, including the link between mathematics, especially the frieze patterns, and the motifs of Towe Songke, and then wrote the research reports. The researchers also interviewed customary figures to confirm the data mentioned by the weavers. Reduced interview transcripts, together with pictures, were analyzed and elaborated to explain the motifs of Towe Songke. At this stage, the data are presented as narrative text combining the information that had been obtained. In this study, however, the researchers did not use narrative text alone in presenting the research data: it is supported by a matrix containing the research questions, the informants' answers, theories that are likely to be related, and the researchers' interpretations of the informants' answers and the theories obtained.
Data in this study were obtained through interviews conducted openly: the researchers observed the songke cloth (Towe Songke) and the weaving process, and then asked about the meaning of each motif printed on the Towe Songke as well as about the weaving process used to form motifs with quite neat patterns. The results of the interview with the first informant were reinforced by the second and third informants when the researchers asked, "Why is Towe Songke colored black?" The three informants explained that black as the base color symbolizes greatness, majesty, strength, and submission. Through this black color the Manggarai people acknowledge the greatness, majesty, and power of God the Creator (Mori Jari Dedek). The Manggarai people consider themselves to be only a small part of the vast world.
To the second question, "Does the motif found in Towe Songke have its own meaning, or is it just a design to make it look attractive?", the second and third informants explained that the motifs found in Towe Songke are not just motifs meant to look beautiful but carry meaning within the culture of the Manggarai community. The Towe Songke motif is also a symbol.
Figure 3 Motifs that represent Lingko
The results of this interview were justified and reinforced by additional informants who said that in the local culture there was a term that said that "mbaru one, uma peang, and towen use". Mbaru one is a house of residence, uma peang is a garden (lingko) where farming is planted, while Towe Songke is a body armor. In addition, Towe Songke also represented Lodok which is a symbol that describes the process of how the Manggarai community divides land into several parts based on the number of community members who have the right to obtain land ownership. The Manggarai people believe that their life is a network of relations where every aspect of their lives is always related to the other 4 components in the circle of life, with every element of nature, animals, plants, spirits, and God as the center of everything (Sutam, 2012).
Towe Songke has many motifs; the motifs discussed in this paper are the Wela Runu, Kali, Impung, Rempa Teke, 8-symbol, Letik, and Kapal motifs. We will discuss the philosophical meaning and geometric transformations of each motif. These geometric transformations will be viewed mathematically as two-dimensional patterns known as frieze patterns. There are seven isomorphic groups of frieze patterns that emerge when the symmetries in the frieze patterns are identified. Mathematicians have shown that these are the only possible combinations of symmetry for frieze patterns.
The frieze group is part of a symmetry group built by one-way translation, forming a linear pattern that repeats itself in one direction. A frieze group, commonly referred to as a frieze pattern, has the special characteristic that it is always built by translation. There are seven different patterns that may be formed from the existing isometric combinations. The isometries that can build frieze patterns are horizontal reflection, vertical reflection, rotation, and glide reflection. The seven frieze patterns can be classified as cyclic or dihedral groups, as illustrated in the following pictures.
1. Frieze pattern with translation symmetry only (Frieze Pattern I, F1). The F1 pattern has no isometry other than translation. The symmetry group of this pattern is an infinite cyclic group formed by the composition of the translation function. An illustration of an F1 pattern using "footprints" is shown in Figure 4.
2. Frieze pattern with translation and glide reflection symmetry (Frieze Pattern II, F2). An illustration of an F2 pattern using the "footprints" is shown in Figure 5.
3. Frieze pattern with translation and horizontal reflection symmetry (Frieze Pattern III, F3). This pattern has a reflection whose symmetry axis is parallel to the translation direction. An illustration of an F3 pattern using the "footprints" is shown in Figure 6.
4. Frieze pattern with translation and half-turn rotation symmetry (Frieze Pattern IV, F4). An illustration of an F4 pattern is shown in Figure 7.
5. Frieze pattern with translation and vertical reflection symmetry (Frieze Pattern V, F5). An illustration of an F5 pattern using the "footprints" is shown in Figure 8.
6. Frieze pattern with translation, vertical reflection, glide reflection, and half-turn symmetry (Frieze Pattern VI, F6). An illustration of an F6 pattern using the "footprints" is shown in Figure 9.
7. Frieze pattern with translation, horizontal reflection, vertical reflection, and half-turn rotation symmetry (Frieze Pattern VII, F7).
The first motif discussed is the Wela Runu motif. Wela Runu is a kind of small flowering plant. This motif means that even if it seems insignificant, every life in this world has benefits. There is no need to be discouraged if one goes unnoticed, because at a certain moment one's existence will give great meaning to others. The Wela Runu motif belongs to the group of geometric motifs because it is composed of three basic geometric shapes: 1 rhombus, 1 regular hexagon, and 6 equilateral triangles.
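Because the seven frieze groups are determined entirely by which symmetries a pattern exhibits beyond the mandatory translation, the classification can be expressed as a simple decision rule. The sketch below encodes this mapping using the F1–F7 numbering of the list above; the function name and boolean interface are our own simplification (for instance, a horizontal reflection combined with translation also implies a glide reflection, which these independent flags do not enforce).

```python
def classify_frieze(h_reflect, v_reflect, half_turn, glide):
    """Classify a frieze pattern from its symmetries beyond the mandatory
    translation, using the F1..F7 numbering of the list above."""
    if h_reflect and v_reflect and half_turn:
        return "F7"  # translation + both reflections + half turn
    if v_reflect and glide and half_turn:
        return "F6"  # translation + vertical reflection + glide + half turn
    if v_reflect:
        return "F5"  # translation + vertical reflection only
    if half_turn:
        return "F4"  # translation + half-turn rotation only
    if h_reflect:
        return "F3"  # translation + horizontal reflection
    if glide:
        return "F2"  # translation + glide reflection
    return "F1"      # translation only
```

Under this rule, a motif with horizontal reflection, vertical reflection, and half-turn symmetry, like those reported for Towe Songke, classifies as F7.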
Assuming the arrangement of the Wela Runu motif shown in Figure 1, it can be concluded that the Wela Runu motif follows Frieze Pattern F7, because this motif has translation, horizontal reflection, vertical reflection, and half-turn rotation symmetry. The Kali, Impung, Rempa Teke, Letik, and Kapal motifs found in Towe Songke also form Frieze Pattern F7, since these motifs likewise exhibit translation, horizontal reflection, vertical reflection, and half-turn rotation symmetry. All motifs can be seen in Figure 11.
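A claimed symmetry of a motif can be checked computationally by applying the transformation to the motif and testing whether the point set maps onto itself. The sketch below uses a rhombus centred at the origin as a simplified stand-in for one unit of the Wela Runu motif; the actual motif geometry is assumed for illustration, not taken from the fabric.

```python
def is_symmetric(points, transform):
    """True if the finite point set maps onto itself under the transform
    (coordinates compared after rounding to absorb floating-point error)."""
    original = {(round(x, 6), round(y, 6)) for x, y in points}
    mapped = {(round(x, 6), round(y, 6)) for x, y in map(transform, points)}
    return original == mapped

# Isometries about the origin
v_reflect = lambda p: (-p[0], p[1])    # reflection in the vertical axis
h_reflect = lambda p: (p[0], -p[1])    # reflection in the horizontal axis
half_turn = lambda p: (-p[0], -p[1])   # 180-degree rotation

# A rhombus centred at the origin: a simplified stand-in for one motif unit
rhombus = [(1, 0), (0, 2), (-1, 0), (0, -2)]
```

The rhombus passes all three checks (vertical reflection, horizontal reflection, and half turn), consistent with a motif unit of an F7 frieze.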
Figure 11 Motifs in Towe Songke, Manggarai Ethnic Woven Fabric: the Wela Runu, Kali, Impung, Rempa Teke, Letik, and Kapal motifs.
By studying the existing motifs and combining them with the frieze patterns that have been learned, students can create new motifs that are interesting and can serve as a reference for weavers when weaving modern Towe Songke.
CONCLUSION
The results of this study imply that ethnomathematics has a significant role in providing the contextual meaning necessary for abstract mathematical concepts. In order to accommodate the role of ethnomathematics in mathematics learning, researchers and teachers need to find the link between teaching materials and students' culture to encourage students' development of conceptual learning materials. Most of the motifs found in Towe Songke form Frieze Pattern F7, because these motifs exhibit translation, horizontal reflection, vertical reflection, and half-turn rotation symmetry. The findings lead to the recommendation that mathematics educators design a new approach that engages students' culture in the mathematics learning process.
"year": 2020,
"sha1": "fb9e9949d83f11503a235bc4cabe4b904300d736",
"oa_license": "CCBYSA",
"oa_url": "https://journal.unsika.ac.id/index.php/supremum/article/download/3457/2275",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a198a709d7b5c0b3e7c9e3ec1c6bec5f5da99619",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
Monitoring Annual Urban Changes in a Rapidly Growing Portion of Northwest Arkansas with a 20-Year Landsat Record
Northwest Arkansas has undergone a significant urban transformation in the past several decades and is considered to be one of the fastest growing regions in the United States. The urban area expansion and the associated demographic increases bring unprecedented pressure to the environment and natural resources. To better understand the consequences of urbanization, an accurate and long-term depiction of urban dynamics is critical. Although urban mapping activities using remote sensing have been widely conducted, long-term urban growth mapping at an annual pace is rare and the low accuracy of change detection remains a challenge. In this study, a time series Landsat stack covering the period from 1995 to 2015 was employed to detect the urban dynamics in Northwest Arkansas via a two-stage classification approach. A set of spectral indices that have been proven useful in urban area extraction, together with the original Landsat spectral bands, were used in the maximum likelihood classifier and random forest classifier to distinguish urban from non-urban pixels for each year. A temporal trajectory polishing method, involving temporal filtering and heuristic reasoning, was then applied to the sequence of classified urban maps for further improvement. Based on a set of validation samples selected for five distinct years, the average overall accuracy of the final polished maps was 91%, which improved the preliminary classifications by over 10%. Moreover, results from this study also indicated that the temporal trajectory polishing method was most effective with initially low-accuracy classifications. The resulting urban dynamic map is expected to provide unprecedented details about the area, spatial configuration, and growing trends of urban land-cover in Northwest Arkansas.
Introduction
More than half of the world's population currently resides in urban areas, and the coming decades are predicted to bring further profound changes to the size and spatial distribution of the global population [1]. From their earliest beginnings, city landscapes have left an indelible mark on the Earth [2]. Even though they are considered the engines of economic development and social transformations, rapid urbanization is creating tremendous stresses on the environment, natural resources, and public health not only within the city boundaries, but also in areas extending well beyond them [3][4][5]. Urban growth is a major indicator of environmental quality and ecosystem health [6], by driving long-lasting wildlife habitat degradation [7], altering genetic diversity [8], influencing changes in biogeochemistry [9], exacerbating urban heat island effects [10], stressing the disease burden [4], increasing air pollution, and also increasing greenhouse gas emissions [11].
As the size and number of cities continues to grow, their impacts on the coupled natural and human systems will become even more apparent [12]. The rate and trends of these changes in the urbanized areas and their consequences warrant careful consideration and planning by managers and policy makers to promote informed decisions that balance the positive economic effects of urban expansion with their degrading effects. Thus, accurate, consistent, and timely data on trends in urban growth are needed for assessing current and future needs. Unfortunately, for large parts of the world, this type of information is unavailable. Although traditional field-based approaches can provide detailed and spatially disaggregated information on urban change, they are not labor and cost effective, and thus have limited spatial coverage and temporal frequency [13]. Remote sensing provides us with an opportunity to systematically track the magnitude and trends of urbanization. Several land-use/land-cover (LULC) datasets that contain urban layers have been developed using remote sensing, such as the 500-m Moderate Resolution Imaging Spectroradiometer global urban extent map [14], the 30-m National Land Cover Database Percent Developed Imperviousness [15], and the 30-m Finer Resolution Observation and Monitoring-Global Land Cover [16]. However, large complex national databases are most accurate when used to support regional and national analysis rather than local applications [17]. In addition, most products lack sufficient temporal resolution and are not useful in characterizing the urban spatial dynamics over a long-term period.
Traditional techniques designed for multi-temporal change analysis can be broadly categorized into temporal trajectory analyses and post-classification comparisons. Temporal trajectory analysis involves the detection of landscape or ecological processes from time series curves that are constructed from spectral values or vegetation indices of high-temporal-frequency satellite data [18-20]. This method is especially useful in discriminating slowly evolving changes from abrupt events. However, the change detection results are sensitive to the site-specific control parameters used to fine-tune the trajectory, and thus require additional calibration efforts. Moreover, updating maps when new images become available requires reanalysis of the whole temporal trajectory, which is not only computationally intensive, but may also alter the original results because the trajectory trend changes with the inclusion of new values in the time series. Post-classification comparison is another multi-temporal change detection technique, which classifies the images at each date independently to retrieve land-cover change information. It is easy to implement and flexible to update, but the change detection accuracy is highly dependent on the classification performance at each date [21]. Thus, errors arising from single-date image classification due to clouds, speckle noise in images, and classification uncertainties can accumulate and become more critical as longer time series of images are involved. In this study, a recently developed change detection method [22] that integrates the robust change detection strength of the temporal trajectory approach with the flexible updating of post-classification comparison was applied to characterize urban growth from multi-temporal Landsat images in Northwest Arkansas (NWA) for the past two decades.
NWA is diverse in landscape types and is the fastest growing metropolitan area in Arkansas [22], which makes it a perfect choice for engineering and implementing this new methodology to detect urban growth. This methodology was previously applied to urban areas in Beijing, and the results from that study demonstrated its reliability and usefulness in modeling and monitoring the effects of urban planning [23]. However, because of the different cultural and settlement patterns, along with the variations in urban morphology between Beijing and NWA, the applicability of this approach in a U.S. metropolitan statistical area remains uncertain. Urban landscapes in Beijing are characterized by highly concentrated hard surfaces, and most built-up patches are separated from green spaces, which makes it possible to identify them as pure urban or non-urban pixels on a 30 m resolution image. In contrast, cities in NWA often have buildings interspersed with open green areas. As a result, their composite spectral responses can introduce many uncertainties as well as add confusion in long-term change detection. Thus, the objectives of this study were to:
1. Test our algorithm in a highly heterogeneous urban landscape to map the urban extent on a yearly basis;
2. Quantify the patterns and trends of urban growth in NWA based on the generated urban extent maps.
Study Area
Northwest Arkansas (NWA) is one of the most dynamic metropolitan statistical areas (MSA) in the U.S. [24]. While NWA is comprised of four counties (Benton, Carroll, Madison, and Washington), this study focuses on the two most heavily populated counties of NWA (Benton and Washington). Within these two counties are four of the state's fastest growing cities, specifically Bentonville, Rogers, Springdale, and Fayetteville, oriented in a north to south linear direction (Figure 1). These cities are bounded by the rugged Boston Mountains to the east and south, while the gently rolling Springfield Plateau characterizes the western portion of both counties.
Since the mid-1990s, considerable urban expansion within NWA has been driven primarily by the influences of Wal-Mart Stores, Inc. (the world's largest retailer), Tyson Foods (the nation's largest processor of chicken, beef, and pork products), J.B. Hunt Transport Services, Inc. (one of the nation's largest freight shipping companies), and over 1300 suppliers and vendors drawn to the region by these large businesses, NWA's economic climate, and its geographical location [25]. These three companies alone boasted estimated net sales of over 500 billion dollars in 2014 [26-28]. In addition to these three economic drivers, the University of Arkansas, located in Fayetteville, has contributed to the continual urban expansion within NWA through regular increases in student enrollment and is considered the seventh fastest growing public university in the nation [29]. During the period between 1980 and 2015, the total population of the state of Arkansas increased by 30%, while the population doubled in Washington County and tripled in Benton County (Figure S1).
Remote Sens. 2017, 9, 71
Image Acquisition and Preprocessing
Changes in land-cover were examined using Landsat data records from 1995 to 2015 (Table 1). NWA lies entirely within Landsat path 26, row 35. Images were primarily collected for leaf-on season conditions, from mid-March through mid-October, to ensure the strongest spectral contrast between urban areas and green vegetation. Some images were chosen outside of the ideal time of year due to excessive cloud cover. For each year, one cloud-free or mildly contaminated image was selected to build the time series. In this study, both Landsat Top of Atmosphere products and Landsat Surface Reflectance Climate Data Records were acquired from the U.S. Geological Survey (except for the year 2012). All images had been processed to L1T level (i.e., they had passed topographic and radiometric corrections). For the atmospheric correction of the Surface Reflectance datasets, Landsat 4-5 TM and Landsat 7 ETM+ images were processed uniformly with the Landsat Ecosystem Disturbance Adaptive Processing System [30], and Landsat 8 Operational Land Imager (OLI) images were processed with the Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes algorithm [31]. Landsat 5 Thematic Mapper (TM) imagery was used for the years 1995 to 2011 and Landsat 8 OLI imagery was used for the years 2013 to 2015. The year 2012 was omitted from the study due to the Landsat 7 Scan Line Corrector failure in 2003, the decommissioning of Landsat 5 after multiple mechanical failures in 2012, and the unavailability of Landsat 8 data before its 2013 launch. The time series of Landsat Surface Reflectance images was used to calculate spectral indices that have proven useful in improving classification performance, including: the Normalized Difference Vegetation Index (NDVI) to help distinguish between vegetation and non-vegetation [32], the Normalized Difference Built-up Index (NDBI) to better visualize built-up impervious surfaces usually associated with urban areas [33], the Built-up Index (BUI) [33], and Principal Component Analysis (PCA). The first three principal components from PCA were used. In addition, Landsat Top of Atmosphere reflectance images were used to calculate the Tasseled Cap Transformation for Landsat TM [34] and Landsat 8 OLI [35]. The brightness, greenness, and wetness components from the Tasseled Cap Transformation were included. The nine new transformations, in conjunction with the original six bands, were compiled together and used in the classifications:

NDVI = (ρNIR − ρRed)/(ρNIR + ρRed) (1)

NDBI = (ρSWIR − ρNIR)/(ρSWIR + ρNIR) (2)

BUI = NDBI − NDVI (3)

where ρRed, ρNIR, and ρSWIR are the surface reflectance values of the red band (600-690 nm), near-infrared band (760-900 nm), and shortwave-infrared band (1550-1750 nm), respectively.
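These band-ratio indices can be computed directly from the reflectance arrays. A minimal numpy sketch, assuming `red`, `nir`, and `swir` hold surface reflectance values; the BUI form used here (NDBI minus NDVI, a common formulation) and the epsilon guard against zero denominators are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def spectral_indices(red, nir, swir):
    """Compute NDVI, NDBI, and BUI from surface-reflectance bands.

    red, nir, swir: float arrays of reflectance (same shape).
    A small epsilon guards against division by zero over dark pixels.
    """
    eps = 1e-6
    ndvi = (nir - red) / (nir + red + eps)
    ndbi = (swir - nir) / (swir + nir + eps)
    bui = ndbi - ndvi  # built-up areas: high NDBI, low NDVI (assumed form)
    return ndvi, ndbi, bui

# Toy pixels: a vegetated pixel and a built-up pixel
red = np.array([0.05, 0.20])
nir = np.array([0.45, 0.25])
swir = np.array([0.15, 0.35])
ndvi, ndbi, bui = spectral_indices(red, nir, swir)
```

In practice the same function would be applied band-wise to the full Landsat raster before stacking the indices with the six raw bands.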
Classification System and Classification Sample Selection
Five land-cover types encompassing the majority of the NWA landscape were considered in the classification, as follows:
1. High-intensity urban: a highly developed area where impervious surfaces account for 80% to 100% of the total cover, mostly containing commercial and industrial property that appears to have a higher reflectance value than low-intensity urban [36,37];
2. Low-intensity urban: a less developed area where impervious surfaces account for 50% to 80% of the total cover, mostly containing residential areas such as neighborhoods, apartments, and roadways [36,37]. Dirt and gravel roads were not included in either of the urban classes due to their spectral signatures being similar to bare soils, harvested croplands, and water coastlines. To minimize confusion, only paved roads were included in the classification scheme;
3. Agriculture/Pasture/Bare Lands: open areas of crops planted by farmers, grass, or other short vegetative growth in fields, and bare patches of land that lack intense vegetation. In NWA, due to Tyson Foods Inc., there are many pasture lands dedicated to the cultivation and harvest of poultry and beef. Agriculture in NWA is not as common as in the rest of Arkansas, but is present in the region. Bare lands are areas that have not been used for agriculture or pasture, and are usually land stocks for urban expansion;
4. Forest: an area dominated mostly by dense tree cover. NWA contains a number of national forests and parks, including the Ozark National Forest. These forested areas are, for the most part, very homogeneous in nature, containing about 80% to 100% tree cover with very few gaps. Such gaps would be classified into the Agriculture/Pasture/Bare Lands class;
5. Water: includes lakes, ponds, rivers, streams, and creeks that are visually identifiable on the Landsat imagery.
Training samples were collected through visual interpretation of the 15-band stacked Landsat imagery. The locations of those training samples were consistent throughout the study time period and their labels were interpreted for each year. Very high resolution Google Earth imagery, Bing Maps, and the 1-m resolution U.S. Department of Agriculture National Agriculture Imagery Program (NAIP) [38] were used to verify homogeneity in training sample locations. It should be noted that a critical prerequisite for good classification performance is that the training samples be representative, in that they cover the whole spectral range of the land-cover types. To achieve this goal, the following criteria were used:
1. More than 30 samples were required for each class, which is the standard minimum number of samples necessary for accurate classification based on the sample size formula recommended by [39];
2. Samples needed to be greater than four pixels in size;
3. Samples needed to be homogeneous in nature.
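The first two criteria lend themselves to a simple bookkeeping check before classification. A hypothetical sketch (the `samples` structure and function name are illustrative, not from the paper; homogeneity, criterion 3, is assumed to have been verified visually):

```python
from collections import Counter

MIN_SAMPLES_PER_CLASS = 30  # criterion 1: more than 30 samples per class
MIN_PIXELS_PER_SAMPLE = 4   # criterion 2: greater than four pixels in size

def failing_classes(samples):
    """samples: list of (class_label, pixel_count) tuples.

    Counts only samples large enough to satisfy criterion 2, then
    returns the set of classes that do not yet satisfy criterion 1.
    """
    valid = Counter(label for label, n in samples if n > MIN_PIXELS_PER_SAMPLE)
    classes = {label for label, _ in samples}
    return {c for c in classes if valid[c] <= MIN_SAMPLES_PER_CLASS}
```

Running such a check per class and per year makes it easy to see where additional interpretation effort is still needed.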
Classification Framework
The biggest obstacle with post-classification change detection techniques is the propagated error that arises from independent single-date classifications. For better change detection results, refinements are needed to stabilize year-to-year variations in classification labels that are not associated with actual land-cover changes. This issue was addressed via the Temporal Trajectory Polishing method, which involves two steps: anomaly detection and label adjustment in the trajectories of classified LULC change, followed by logic reasoning ([23]; Figure 2).
Preliminary Classification
For each year, preliminary classifications were first performed with the Maximum Likelihood Classifier (MLC) and the Random Forest (RF) classifier using the combined six raw spectral bands and nine spectral indices. The MLC is a parametric method widely used as a reference for its computational simplicity and robustness. All classes were assigned equal prior probability and no pixel was rejected for class assignment. The RF classifier is a machine-learning algorithm that combines the results of many (thousands of) decision trees [40], and is often praised for its overall performance as well as its robustness to noise [41]. For the RF parameter setting, 500 classification trees were chosen and the number of prediction variables was set to the square root of the number of input features. The land-cover files were then regrouped from the original five classes into binary maps containing only urban (High and Low Intensity) and non-urban (Water, Forest, Agriculture/Pasture/Bare Lands) types.
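As a concrete illustration of the MLC step, a minimal numpy sketch with equal priors and no pixel rejection (function names, the toy two-band data, and the small covariance regularization term are illustrative assumptions; a production run would use the 15-band pixel vectors):

```python
import numpy as np

def mlc_fit(X, y):
    """Fit a Gaussian (mean, covariance) per class for Maximum Likelihood
    Classification with equal priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # Tiny ridge keeps the covariance invertible (assumed safeguard)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[c] = (Xc.mean(axis=0), np.linalg.inv(cov),
                     np.linalg.slogdet(cov)[1])
    return params

def mlc_predict(X, params):
    """Assign every pixel to the class with the highest Gaussian
    log-likelihood; no pixel is rejected."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, inv_cov, logdet = params[c]
        d = X - mu
        mahal = np.einsum("ij,jk,ik->i", d, inv_cov, d)
        scores.append(-0.5 * (mahal + logdet))
    return np.array(classes)[np.argmax(np.stack(scores), axis=0)]

# Toy example: two well-separated "spectral" clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.1, 0.02, (30, 2)),
               rng.normal(0.6, 0.02, (30, 2))])
y = np.array([0] * 30 + [1] * 30)  # e.g., 0 = non-urban, 1 = urban
labels = mlc_predict(X, mlc_fit(X, y))
```

The subsequent binary regrouping then simply maps the two urban classes to 1 and the three non-urban classes to 0 before trajectory extraction.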
Anomaly Detection and Temporal Filtering
Based on the preliminary binarized urban map, the land-use trajectory was extracted at the pixel level. A land-use trajectory is the temporal sequence of LULC classes described through classified images assembled in a time series [42]. Under most circumstances, the transition from non-urban to urban is irreversible. However, due to the various factors affecting image acquisition, data preprocessing, and classification, errors are unavoidable and can result in a noisy time series trajectory. For example, if a pixel is labeled as urban in one year, but the adjacent previous and following years are both classified as non-urban, it is likely that this pixel is misclassified for that urban-labeled year. Generally speaking, the frequency of land-cover changes should not be high, and an anomaly in the time series trajectory can be an indicator of potential misclassification.
To detect and remove anomalies in the LULC trajectories, an iterative temporal filtering method was used. Specifically, a temporal consistency probability (Pi) was calculated for each year in the trajectory:

Pi = (1/(2 × Tw)) × Σ_{j = i − Tw, j ≠ i}^{i + Tw} Con(Li, Lj)

where Li is the class label of the target year, Lj is the label of a neighboring year within the window, and Tw is the time window size that starts from 1. Con is a conditional expression that returns 1 when the class labels of the target year and its neighboring year are the same, and 0 otherwise. As indicated in Figure 3, a low Pi is more likely to reflect an erroneously classified pixel in the target year, and its associated label should be replaced based on the majority of its contiguous neighboring years. Thus, if the probability of the label's occurrence is less than 0.5, the label differs from the designated type inferred from the dominant types in its temporal neighborhood and is considered anomalous. The label value is then revised to its opposite accordingly. The window size Tw, which defines the number of temporal neighbors, increases gradually from 1, and the whole sequence is updated after each iteration until all the processed Pi in the temporal trajectory are larger than 0.5. However, it should be noted that because the first and last years of the study (the head year and the tail year) lack complete temporal neighborhood information, the filtering may not be as complete as for the central years. The implementation of this post-classification step results in a more consistent and accurate filtered trajectory with fewer spikes caused by impulse noise.
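The filtering loop can be sketched as follows. This is a simplified sketch: the window cap `max_tw` and the handling of ties at Pi = 0.5 (labels are left unchanged) are assumptions, and the head and tail years simply use whatever truncated neighborhood exists:

```python
import numpy as np

def temporal_filter(traj, max_tw=3):
    """Iteratively filter a binary urban (1) / non-urban (0) trajectory.

    A label whose temporal consistency probability Pi falls below 0.5
    is flipped to match the majority of its neighbors; the window Tw
    grows from 1 and each pass repeats until the sequence is stable.
    """
    traj = np.asarray(traj).copy()
    n = len(traj)
    for tw in range(1, max_tw + 1):
        while True:
            changed = False
            for i in range(n):
                lo, hi = max(0, i - tw), min(n, i + tw + 1)
                neigh = np.delete(traj[lo:hi], i - lo)  # exclude target year
                if neigh.size == 0:
                    continue
                p = np.mean(neigh == traj[i])  # temporal consistency Pi
                if p < 0.5:
                    traj[i] = 1 - traj[i]  # flip to the majority label
                    changed = True
            if not changed:
                break
    return traj

# The isolated urban spike in year 3 is removed, while the sustained
# urban run at the end of the series is preserved:
print(temporal_filter([0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1]))
```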
Urban Change Logic Rule
The temporal filtering in Section 2.4.2 can only ensure consistent classification (urban or non-urban) over temporally consecutive years, whose length was determined iteratively with increasing window length. However, it cannot make whole sequences follow the conversion from non-urban to urban consistently. Therefore, additional logical reasoning was needed to achieve this goal. For urban environments, the land-use trajectory often follows the temporal non-reversal rule, i.e., land-cover dynamics occur in an irreversible order and urban impervious pixels cannot revert to non-urban pixels through the remainder of the time series. This leads to the design of change logic rules with the purpose of modifying the temporal trajectory into a logical order (Figure 4): if non-urban labels occur before urban labels in the original sequence, then the sequence obeys the change logic rule and remains unchanged. If non-urban labels occur after urban labels, the rule is broken. Different strategies were applied depending on the dominant land-cover type of the sequence, which was determined by counting the number of urban and non-urban years within the sequence. If the processed sequence is urban dominated, the illogical non-urban labels are modified to urban. Otherwise, years labeled urban that occur before non-urban labels are relabeled to the opposite status.
To achieve the best performance, the urban change logic rules were not directly applied to the whole trajectory, because the partial loss of temporal context for the head and tail years may impede the deduction of proper logic. Instead, the whole LULC trajectory was divided into three segments, namely the prior segment (Seg 1), the main segment (Seg 2), and the posterior segment (Seg 3). The rules were applied to the main segment first because of its complete neighborhood information. The class labels of the first and last years of the main segment after logic reasoning were then composited into the prior and posterior segments, respectively, to aid their logic reasoning. This step examines the LULC change history at the pixel level and minimizes the degree of uncertainty.
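The non-reversal rule can be sketched for a single trajectory. This is a simplified sketch applied to one whole sequence rather than per segment, and the tie-breaking when urban and non-urban years are equally frequent (treated here as urban dominated) is an assumption:

```python
import numpy as np

def enforce_change_logic(traj):
    """Apply the temporal non-reversal rule to a binary trajectory
    (0 = non-urban, 1 = urban).

    A sequence of the form 0...0 1...1 is left unchanged; otherwise
    the dominant class decides which labels are treated as errors.
    """
    traj = np.asarray(traj)
    first_urban = np.argmax(traj) if traj.any() else len(traj)
    # Rule holds if no non-urban label follows the first urban label
    if not (traj[first_urban:] == 0).any():
        return traj.copy()
    fixed = traj.copy()
    if traj.sum() * 2 >= len(traj):
        # Urban dominated: fill the illogical non-urban gaps with urban
        fixed[first_urban:] = 1
    else:
        # Non-urban dominated: drop urban labels that precede the last
        # non-urban year (premature urban labels)
        last_nonurban = len(traj) - 1 - np.argmin(traj[::-1])
        fixed[:last_nonurban + 1] = 0
    return fixed
```

For example, an urban-dominated sequence with a dip reverts the dip to urban, while a mostly non-urban sequence with an early urban spike drops the spike.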
Accuracy Assessment
The accuracy assessments were conducted in both quantitative and qualitative ways. First, validation samples were collected through a stratified random sampling scheme on the preliminary classification results at roughly five-year intervals, including the starting and ending years (1995, 2001, 2006, 2011, and 2015). The years 2001, 2006, and 2011 were selected because of the availability of the National Land Cover Database (NLCD) in those years. The years 2006 and 2015 are also the NAIP-available years in this region. There were 50 random points generated in the classified urban area and 100 random points in the non-urban areas for each of the test years (Figure S2). The total sample number was calculated based on a well-accepted sample size formula [43] with an expected accuracy of 90% at an allowable error of 5% for a binary-class classification. Considering the importance of the urban category within the objectives of this project, we did not follow the area proportion assignment rule, which would make the sample size for urban very small. Instead, we adjusted the number of urban samples to 50 according to the rule of thumb proposed by [44,45]. For the five validation years, a total of 750 points were generated, which were then visually interpreted and labeled one at a time against a combination of NAIP (for years 2006 and 2015), Google Earth (for all validation years), and Bing Maps imagery (for year 2015). All of the validation samples were cross-checked by team members for quality control. When samples were not easy to interpret, quality controllers flagged them and the points were visually revisited for interpretation until a consensus was achieved. A confusion matrix was then created for each test year and the associated accuracy measures were generated, including the overall accuracy (OA), Kappa, producer's accuracy (PA), and user's accuracy (UA) [46]. These accuracy measures use and summarize different information contained in the confusion matrix. Overall accuracy accounts for the overall proportion of area correctly classified. Producer's accuracy measures errors of omission, while user's accuracy measures errors of commission.
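These measures fall directly out of the confusion matrix. A minimal sketch for the binary urban/non-urban case (Kappa is omitted for brevity, and the toy validation labels are illustrative):

```python
import numpy as np

def accuracy_measures(ref, pred):
    """Overall (OA), producer's (PA), and user's (UA) accuracy from a
    binary confusion matrix (0 = non-urban, 1 = urban).

    PA = 1 - omission error (per reference class);
    UA = 1 - commission error (per mapped class).
    """
    cm = np.zeros((2, 2), dtype=int)
    for r, p in zip(ref, pred):
        cm[r, p] += 1  # rows: reference label, cols: mapped label
    oa = np.trace(cm) / cm.sum()
    pa = np.diag(cm) / cm.sum(axis=1)  # per-class producer's accuracy
    ua = np.diag(cm) / cm.sum(axis=0)  # per-class user's accuracy
    return oa, pa, ua

# Toy validation set: 4 urban and 6 non-urban reference points
ref = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
oa, pa, ua = accuracy_measures(ref, pred)
```

With one omitted urban point and one committed urban point, OA is 0.8 and both PA and UA for the urban class are 0.75.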
The results were then visually evaluated by comparing the mapped spatio-temporal pattern of the urban area with the developed area layer included in the NLCD. The NLCD is a Landsat-based, 30 m resolution national land-cover dataset whose 2001, 2006, and 2011 editions are directly comparable, and these were used for comparison with this product. NLCD 2001, NLCD 2006, and NLCD 2011 were selected because they cover the study period. The NLCD classification scheme was adapted from the Anderson Land Cover Classification System, with eight Level I classes and 16 Level II classes. Both NLCD 2001 and NLCD 2006 were produced using decision tree classifiers, and their accuracy assessment was completed through a set of fixed-location, stratified, randomly selected samples that were interpreted based on multi-temporal high resolution digital imagery. The reported Level I class overall accuracies were 85% in 2001 and 84% in 2006 [47]. For the two urban classes (class 23: developed medium intensity; class 24: developed high intensity) that match our urban definition, the producer's accuracies were 75% for class 23 and 81% for class 24 in NLCD 2001, and 80% for class 23 and 26% for class 24 in NLCD 2006. Their user's accuracies were 67% and 87% in NLCD 2001 and 69% and 83% in NLCD 2006 [47]. NLCD 2011 was completed by spatially integrating NLCD 2006 with the 2006-2011 spectral change map obtained from the Multi-Index Integrated Change Analysis model [48]. Four Level II classes that fall into the developed area category were grouped to generate the developed area layer for comparison with our maps.
Accuracy Assessment
The overall accuracies achieved by the benchmark MLC approach for the test years were 82%, 77%, 84%, 81%, and 88% (mean: 82%), and these increased to 88%, 94%, 92%, 95%, and 88% (mean: 91%) after temporal filtering (Figure 5; Supplementary File 3). The year 2015 reported the same accuracies throughout the temporal filtering process because no subsequent year existed to determine whether growth of urban areas was logical. Except for the year 2015, the average overall accuracy improvement achieved by the post-classification method was over 10%. The producer's accuracies ranged from 87% to 98% for the urban class and from 68% to 84% for the non-urban class. After temporal filtering, the PAs for the urban class dropped about 10% on average, but the improvements in user's accuracies were over 24%. For the nonparametric classifications based on RF, the average overall accuracy, producer's accuracy, and user's accuracy were 88%, 83%, and 91%, and those after filtering were 91%, 76%, and 98%, respectively (Figure 5; Supplementary File 3).
Overall, RF outperformed MLC in the preliminary classification, and because of its already high accuracies, the improvements from the temporal filtering approach were less significant. Visual examination showed that the temporally filtered maps depicted substantially more of the spatial heterogeneity of impervious surfaces within urban areas, as opposed to the overestimation of urban lands in the preliminary classification and the NLCD developed area layer (Figure 6). In terms of change detection, the initial MLC classification had the worst performance in providing reliable information, as many pixels showed illogical land-cover transitions from urban to non-urban. This can severely impair the true change detection accuracy. The NLCD data show more consistent results for urban expansion, as the latest map was generated through the integration of an older land-cover map with the spectral change map. The temporal trajectory polishing method achieved the best results in keeping both spatial contiguity and temporal consistency.
Urban Expansion in Northwestern Arkansas (NWA)
Urban sprawl is very prominent in all three cities of NWA, as can be observed from the remote sensing-derived maps (Figure 7; Figure S3).
Bentonville experienced noticeable growth from 1995 to 2000, mainly on the western side of the city in areas lying to the west and south of the major highway. The period between 2000 and 2005 was also one of rapid urban expansion in the Bentonville area, mostly in a westward direction. The period between 2005 and 2010 did not see as much growth in the Bentonville area, save for some developments in southern Bentonville and a major east-west highway added in the north, constructed between 2010 and 2015.
Springdale's most notable urban expansion took place between 1995 and 2000 in the southeastern and western sections of the city limits area. The growth in Springdale between 2000 and 2010 was very spatially diverse across the city, with the most notable concentration of urban expansion in the western section. The urban growth in Springdale between 2010 and 2015 can be seen in the far southeastern parts of the city, as well as in some construction of major roads in the northwest and southwest, presumably to allow easier access to the local airport.
The urban expansion in Fayetteville is, overall, very spatially diverse and dynamic during the study period, 1995 to 2015. Between 1995 and 2000, a major north-south highway connection allowing access to the region was built just south of Fayetteville's city limits. The greatest urban expansion occurred in the first half of the study period, between 1995 and 2005, with less intense growth taking place in the latter half, between 2005 and 2015.
Pros and Cons of the Temporal Trajectory Polishing Algorithm
The feasibility of using remote sensing for monitoring long-term urban expansion is largely limited by change detection accuracy. Urban landscapes are highly spatially and spectrally heterogeneous due to complicated structures, materials, density, and spatial forms, which makes them easy to confuse with other land-cover types based only on their spectral signatures [49]. While it is not easy to accurately map single-date images, it is even more difficult to keep the change detection accuracy high. For the conventional single-date image classification, RF achieved better overall accuracy than MLC in both the initial classifications and the post-filtering results, by around 5%. This can be mainly attributed to two reasons: (1) MLC relies on assumptions about the data distribution (e.g., normally distributed), whereas the ensemble learning techniques employed by RF do not [50]; (2) MLC is affected less by the selection of input layers than RF. The performance of RF usually improves with a larger number of input features because classification uncertainties are usually negated by the ensemble of results from many individual trees built upon random selections of input layers [51].
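The contrast drawn here between MLC's distributional assumption and RF's ensemble voting can be illustrated on synthetic "spectral" feature vectors. In this sketch, scikit-learn's QuadraticDiscriminantAnalysis stands in for a maximum likelihood classifier (both fit a Gaussian per class); the data and parameter choices are illustrative, not those of the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for pixel feature vectors (spectral bands + texture layers)
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

mlc = QuadraticDiscriminantAnalysis().fit(Xtr, ytr)   # one Gaussian per class
rf = RandomForestClassifier(n_estimators=200,         # ensemble of trees built
                            max_features="sqrt",      # on random feature subsets
                            random_state=0).fit(Xtr, ytr)
print(f"MLC-like OA: {mlc.score(Xte, yte):.2f}, RF OA: {rf.score(Xte, yte):.2f}")
```

The `max_features="sqrt"` setting is what makes each tree see only a random subset of input layers, which is the mechanism the paragraph above credits for RF's robustness to the choice of input features.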
With the temporal trajectory polishing method proposed in this study, the classification results were improved at all dates (except for the last year) for both the parametric MLC and the non-parametric RF, which implies the usefulness of this approach. Improvements were also more substantial over the MLC maps, which reported lower accuracies in the initial classifications. This suggests that the method works more effectively with multi-date classifications containing larger amounts of mapping errors. In addition, the comparison with NLCD data proved the superiority of this approach in characterizing urban landscape details, especially in areas where impervious surfaces and urban vegetation are highly mixed.
Although the temporal trajectory polishing approach is promising in many ways, one shortcoming is revealed by this case study. Despite the improved OA and UA for urban classification, the omission errors for the urban class were elevated after the post-classification process. The higher omission rate was partly due to the finer texture measurements of Landsat 8 OLI imagery and the inconsistency in land-cover characterization between the two sensor types we included: Landsat 5 TM and Landsat 8 OLI [52]. Although both sensors share the same spatial resolution, Landsat 8 imagery has several new features: (1) an enhanced radiometric resolution of 12 bits (16 bits when processed into Level-1 data products) compared to the 8-bit resolution of its predecessor, which improves spectral record precision and avoids spectral saturation [53]; (2) enhanced signal-to-noise ratios, almost twice as good as Landsat 7 [54]; (3) a narrowed NIR band to avoid the effect of water vapor absorption at 0.825 µm. These changes in the instrument's design allow for a much finer texture and a refined spectral range of the OLI bands, especially in the near-infrared [52]. Consequently, more land-cover details with slight differences in energy can be characterized by Landsat 8 images. The improved data performance usually results in more satisfactory land-cover classification results, as proven by a previous study [52]. For this case study, the OLI image-derived classifications help to better separate the green spaces or bare lands intermixed among the built-up pixels when compared with the relatively blurry look of the TM images. However, the test samples interpreted based upon Landsat 5 can hardly reflect the complex urban composition. For instance, a pixel containing a mixture of a high percentage of green vegetation and a low percentage of paved roads could be interpreted as urban on TM images, due to its higher spectral reflectance value relative to pure vegetation pixels, but would be labeled as non-urban on an OLI image.
Thus, without better validation samples, it can hardly be concluded that the temporal filtering method will result in higher omission error. For instance, in the previous Beijing case study, consistent improvements were observed over almost all accuracy metrics after the temporal polishing [23]. This is because in cities like Beijing, where built-up pixels are relatively pure with little intermixed green space, the improvement in classification relies upon the enhanced image sensor and is not as obvious as in cities where mixed urban/green-space pixels are common.
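The temporal filtering at the core of the trajectory polishing (described with Figure 3: a stack of per-year labels, a temporal window, the probability of the current label within that window, and iteration until all probabilities exceed 0.5) can be approximated per pixel as a moving-window majority vote. This is a simplified illustrative reading of that description, not the authors' exact implementation:

```python
def temporal_filter(labels, window=1, max_iter=10):
    """Smooth a per-pixel binary trajectory (1 = urban, 0 = non-urban).
    A year's label is flipped when its probability within the temporal
    neighbourhood does not exceed 0.5, iterating until stable."""
    labels = list(labels)
    for _ in range(max_iter):
        changed = False
        for t in range(len(labels)):
            lo, hi = max(0, t - window), min(len(labels), t + window + 1)
            neigh = labels[lo:hi]
            p = neigh.count(labels[t]) / len(neigh)  # prob. of current label
            if p <= 0.5:                             # likely misclassified year
                labels[t] = 1 - labels[t]
                changed = True
        if not changed:
            break
    return labels

# A trajectory with one spurious urban flip before the true 2005 conversion
print(temporal_filter([0, 0, 1, 0, 1, 1, 1]))  # -> [0, 0, 0, 0, 1, 1, 1]
```

The isolated urban label in year three disagrees with both neighbours, so it is corrected, while the sustained urban run at the end is preserved; this is the behaviour that removes the "illogical" single-year urban/non-urban flickers discussed above.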
Socio-Economic and Environmental Explanations of the Urban Sprawl Pattern in NWA
The successful depiction of the dynamic urban expansion in NWA reveals some interesting patterns. The examination of the urban expansion for NWA appears to suggest a reinforcing loop that is being driven by economic and topographic factors.
First of all, Northwest Arkansas saw an annual population increase of approximately 11,328 from 2000 to 2010, indicating an annual growth rate of 3.15% [55]. This annual influx of people into the region may be related to the economic opportunities that are offered due to the location of the four central cities that comprise the NWA. As noted by Gascon and Varley [56], the economic growth of the NWA is unique because it is centered on all four cities, whereas other metropolitan statistical areas typically revolve around one major city. This is supported by a report prepared by the Northwest Arkansas Regional Planning Committee [55], which shows that eight companies within these four cities employ over 1000 individuals each, with Wal-Mart leading the group with over 11,000 employees. A result of this has been the rapid development of subdivisions and transportation networks to accommodate the regular influx of people into the region.
Secondly, all four principal cities exhibited a general trend of westward expansion, with noticeable developments occurring in close proximity to the primary roadways passing through the region. The construction of the new roadways also corresponds to noticeable increases in urban development (i.e., the appearance of subdivisions and strip malls). This is not surprising, given that roadways are generally viewed as a driving force influencing urbanization [57]. The other factor influencing the westward expansion of this region seems to be associated with the general topography. That is, while the Boston Mountains, located in the eastern half of both counties and the southern half of Washington County, inhibit large-scale development, the regional topography of the Springfield Plateau provides a conducive setting for urban expansion. For example, the directionality of the urban expansion examined from the classification maps agrees with the current status of development within all four principal cities.
Conclusions
Reconstructed urbanization histories for fast growing regions such as Northwest Arkansas (NWA) are needed for dealing with a wide range of challenges in terms of the environment, climate, urban planning, population health, and natural resources. This article presents a readily implemented and effective approach to generate high frequency, high accuracy, and consistent trajectories of urban land-cover change in NWA. This temporal trajectory polishing algorithm was designed to take advantage of the two most widely applied change detection methods and achieved satisfying results in this case study. The site-specific accuracy assessment in the five independent validation years indicated that the mean overall accuracy improved by over 10%. This method can substantially improve time series classifications with relatively low accuracies, which could benefit mapping efforts that lack adequate budget, time, and/or labor. Considering the higher availability of temporally dense images, this post-classification change detection approach will have wider implications for land change science and other relevant disciplines.
Figure 1. The Northwest Arkansas study area.
Figure 2. The classification workflow. VHR-very high resolution; NAIP-National Agriculture Imagery Program; MLC-Maximum Likelihood Classifier; RF-Random Forest classifier; LULC-land use and land cover.
Figure 3. Process of temporal filtering. The left figure shows a standard workflow diagram of temporal filtering. The process starts with a stack of classified images and a temporal window size of one. The probability of urban occurrence was calculated for each year. After adjusting the potentially misclassified labels, the iteration stops when the probabilities for all years are larger than 0.5. The right diagram uses one simple example to illustrate how the workflow works. The gray boxes in the spectral curves extracted from the image stack indicate the potentially misclassified years.
Figure 4. Illustration of the urban change logic rule. The sequential process begins with an initially classified land-use trajectory and is then followed by the logical reasoning. Two examples of logic rules are given in the grey box.
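One plausible reading of the urban change logic rule in Figure 4, consistent with the observation above that urban-to-non-urban transitions are illogical, is that urban development is treated as effectively irreversible over the study period. A hypothetical sketch of enforcing such a rule on a per-pixel trajectory (this is an illustration of the idea, not the authors' actual rule set):

```python
def enforce_urban_monotonicity(labels):
    """Apply a one-way transition rule: once a pixel turns urban (1),
    later non-urban labels (0) are treated as misclassifications
    and corrected to urban."""
    out, seen_urban = [], False
    for v in labels:
        seen_urban = seen_urban or v == 1
        out.append(1 if seen_urban else v)
    return out

print(enforce_urban_monotonicity([0, 0, 1, 0, 1, 1]))  # -> [0, 0, 1, 1, 1, 1]
```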
Figure 6. An example in south Bentonville showing the comparison of maximum likelihood classification results, trajectory polished results, and NLCD maps in years 2001, 2006, and 2011.
Figure 7. Urban expansion in NWA from 1995 to 2015, and enlarged views of Bentonville, Springdale, and Fayetteville.
Supplementary Materials:
The following are available online at www.mdpi.com/2072-4292/9/1/71/s1. Figure S1: Total population changes from 1980 to 2015 in Arkansas State, Benton County and Washington County. Data was acquired from: Minnesota Population Center, National Historical Geographic Information System: Version 11.0 [Database]. Minneapolis: University of Minnesota, 2016.
Figure S2: Validation point locations overlaid on sampling strata for test years. Supplementary File 3: Confusion matrices of MLC and RF classification.
Figure S3: Urban area changes in the Northwest Arkansas area from 1995 to 2015.
Table 1. Landsat acquisition date and sensor type.
Model of the adaptive immune response system against HCV infection reveals potential immunomodulatory agents for combination therapy
A regulated immune system employs multiple cell types, a diverse variety of cytokines and interacting signalling networks against infections. Systems biology offers a promising solution to model and simulate such large populations of interacting immune-system components holistically. This study focuses on the distinct components of the adaptive immune system and analyses them both individually and in association with HCV infection. Effective and failed adaptive immune response models were developed, followed by interventions/perturbations of various treatment strategies to better assess the treatment responses under varying stimuli. Based on the model predictions, NK cells, T regulatory cells, IL-10, IL-21, IL-12 and IL-2 are found to be the most critical determinants of treatment response. The proposed potential immunomodulatory therapeutic interventions include IL-21 treatment, blocking of inhibitory receptors on T-cells and exogenous anti-IL-10 antibody treatment. The relative results showed that these interventions have differential effects on the expression levels of the cellular and cytokine entities of the immune response. Notably, IL-21 enhances the expression of NK cells, cytotoxic T lymphocytes and CD4+ T cells and hence restores the host immune potential. The models presented here provide a starting point for cost-effective analysis and more comprehensive modelling of biological phenomena.
immune responses against the viral infection is mainly because of evolving viral escape strategies, which include mutations and changes in the effector functions 2 .
Up till now, several studies have proposed probable mechanisms leading towards the failure of the host adaptive immune response. However, it remains difficult to disentangle the exact causes and consequences of viral persistence. We believe a holistic model of the biological adaptive immune signalling mechanism is essential for deciphering HCV disease pathology and designing alternative, and possibly new, multi-drug therapies. However, the plethora of signalling pathways involved in HCV infection comprises a multifaceted dynamical system whose complexity and wide interacting network make it difficult to study via conventional experimentation approaches. Similarly, there are limitations in the existing methodologies, as they can only interpret a limited number of proteins and their interactions with other proteins and immunomodulatory agents, and thus may not be able to cover the whole system at a time. Systems biology approaches offer a good alternative to existing strategies for modelling and analysing large networks 8,9 . Mechanistic hypotheses related to biological problems can easily be tested by applying appropriate mathematical models. In this context, several mathematical models have been employed successfully to analyse and investigate integrated signalling networks and the dynamic behaviours of the entities (genes, RNAs and proteins) involved 10,11 .
Biological systems are modelled using several mathematical frameworks, including stochastic or differential equations (PDEs, ODEs, PLDEs, DDEs) or networks based on graph theory (logical, Boolean, Bayesian) 12 . Usually, biological networks are highly complex and dynamic in nature, and experimental data such as enzyme kinetics or logical parameters are not available for all entities, making it infeasible to manually construct ODE models of larger networks. Moreover, it also remains a challenge for these mathematical models to handle large networks, which are subject to the state space explosion phenomenon 13 . Therefore, alternative approaches can be adopted which approximately model the dynamic behaviours of biological systems and give insight into the signal flow within the associated biological networks 11 . One such approach is the use of Petri net (PN) theory 14 . PNs are based on graph theory and have the potential to model different types of frameworks, including biochemical processes, chemical reactions, biological networks (cellular or molecular), industrial models, etc., with a flexible and simple representation 14 . PN models are usually used to describe generic principles and can be applied efficiently to abstracted models. The ease of modelling and interpretation makes PN theory a suitable approach for modelling large networks where kinetic knowledge is lacking for a few or most of the entities involved. The PN approach used in this study emphasizes the structure of the biological signalling pathway, as it is believed that the molecular interactions within a network have evolved to such an extent that they have a stabilizing effect on the signalling network. Thus, it can be assumed that the network connections are the foremost and most critical factor for signal propagation through the signalling pathway 11,15 .
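A Petri net, as described above, is a bipartite graph of places (entities) and transitions (interactions) whose "token game" propagates signals without requiring kinetic parameters. A toy sketch of the standard firing rule on a minimal activation chain — the place and transition names here are illustrative stand-ins, not entities from the study's actual model:

```python
# Each transition consumes one token from every input place and produces
# one token in every output place, and is enabled only when all of its
# input places are marked (the standard Petri net firing rule).
transitions = {
    "IFN_release":   (["HCV_RNA"], ["IFN_I"]),
    "NK_activation": (["IFN_I"], ["NK_active"]),
    "IFNg_release":  (["NK_active"], ["IFN_gamma"]),
}

def fire(marking, name):
    ins, outs = transitions[name]
    if all(marking.get(p, 0) > 0 for p in ins):  # transition enabled?
        for p in ins:
            marking[p] -= 1
        for p in outs:
            marking[p] = marking.get(p, 0) + 1
        return True
    return False

marking = {"HCV_RNA": 1}  # initial marking: infection present
for t in ["IFN_release", "NK_activation", "IFNg_release"]:
    fire(marking, t)
print(marking)  # -> {'HCV_RNA': 0, 'IFN_I': 0, 'NK_active': 0, 'IFN_gamma': 1}
```

Running the same net from different initial markings (e.g., with or without a token on the infection place) is the mechanism by which baseline, effective-response and failed-response scenarios can be compared in such models.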
We have employed computational systems biology methods coupled with mathematical modelling to comprehensively analyse the integrated HCV-induced adaptive immune signalling pathway. The unavailability of a comprehensive mathematical model explaining the adaptive immune responses against HCV led us to design a PN model that enabled us to characterize the host immunogenic entities playing a significant role in the clearance of viral infection. Since the selected network is quite complex, with a large number of entities, PN modelling is considered the best-suited approach. The proposed model represents the wide-ranging HCV-induced adaptive immune system while preserving the behaviours of the signalling proteins established experimentally in previous studies. The model is able to capture properties and behaviours which are not apparently evident in the individual experimental studies, and it enhanced our understanding of the adaptive immune response system during HCV infection. Once the model was verified, a few perturbation experiments were applied to propose immunomodulatory therapies to augment the existing IFN-α/RBV therapy. The ease of in silico experimentation provided by PN models could assist in the development of new immunomodulatory treatment regimens, along with the discovery of therapeutic vaccines against chronic HCV and other viral infections. The modelling approach applied in this study is straightforward and can be extended to other biological systems to better explore specific behaviours.
Results
Logic based diagram depicting the essential features of the adaptive immune signalling during HCV infection. The logic based diagram of the adaptive immune response (Fig. 1) is a comprehensive and detailed visual representation of the complex molecular regulation orchestrated by a series of signalling pathways, based on experimental data. This illustration highlights the complexity of the model and the extent of interactions amongst the diverse cell populations. As demonstrated in Fig. 1, significant signalling cascades are triggered in the adaptive response by the activation of interferons, several immune-related cytokines, DCs, NK cells, CD4+ and CD8+ T cells, and B-cells, which cooperate to defend hepatic cells against HCV infection. These signalling cascades are generally observed and studied as distinct entities instead of as an integrated complex of molecular interactions. Thus, in order to study the system holistically, a comprehensive network is formulated by assimilating significant interactions recognized in various earlier experimental studies. The arrangement of the components/entities and their interactions in the designed pathway facilitates the visualization of the paths and events followed by the system from the preliminary cause to the ultimate outcome.
Typically, the immune signalling pathways are triggered by the HCV particle, which targets the hepatocytes and releases its RNA. The primary line of defence is the generalized innate immune response against the viral infection, which afterwards activates the more specialized elements of the adaptive immune response 16 . The intercellular signalling (systematized by chemokines, cytokines and cell surface receptors) and intracellular signalling (achieved by the network of signalling pathways) are induced through several vital regulatory immune elements 17 . Once the HCV RNA is released, the type I and III interferons are produced through the innate response 4 . Subsequently, it stimulates the adaptive responses via activating the NK cells and DCs residing in the liver 18 . DCs are crucial for recognizing pathogens in addition to triggering adaptive immunity. They further activate NK cells (reciprocal activation) and T-cells (CD4+ cells, CD8+ cells, T-regulatory cells and cytotoxic T lymphocytes (CTLs)) by releasing interferon γ and λ along with several other cytokines, as shown in Fig. 1 18,19 .
Figure 1. HCV infection and key adaptive immune responses: As soon as HCV RNA is recognized by host cells in the liver (hepatocytes), the innate immune system produces Type I and II interferons to trigger an antiviral state 16,18 , while the adaptive immune response is initiated by several main entities of the immune system: (a) memory dendritic cells (mDC) activate CD8+ cells, CD4+ cells and natural killer (NK) cells by releasing the cytokines IL-15, IL-4 and IL-12 18 ; (b) NK cells produce interferon-γ to mediate antiviral effects; (c) CD8+ cells and CD4+ cells control the T-helper cells (Th1 and Th2), which in turn regulate macrophage functionality and induce cytotoxic T cells (CTLs) and T-regulatory cells (Tregs) 7 ; (d) the CD81 NK receptor is blocked by the HCV E2 protein, reducing the release of interferon-γ and cytotoxic particles by NK cells 23 ; (e) MHC class I expression is increased on affected hepatocytes by the HCV core protein, hence reducing NK cell activity against affected cells 88 ; HCV also raises the number of regulatory T cells in the liver 24 ; (f) NK cell activity is reduced by regulatory T cells secreting IL-10 and transforming growth factor-β (TGF-β) 24 ; (g) humoral responses are activated via CD4+ cells by the release of IL-21, IL-4, IL-6 and IL-5.
Crosstalk between innate and adaptive immunity mediated by NK cells. Initially, NK cells establish the host's innate defence to counter viral infections. In the case of HCV infection, NK cells are triggered through type I IFN (α, β) produced by affected hepatocytes 20 . In addition, NK cells can also be activated via IL-12 secreted by DCs, and are hence sanctioned to eradicate infected cells 21 . Although NK cells are classically viewed as components of the innate immune system, several studies have clearly shown that they also have a substantial part to play during the adaptive immune response 22 . They are a major source of IFN-γ and tumour necrosis factor-alpha (TNF-α), which hinder viral replication without destroying the hepatocytes 22 . Moreover, they may bring about partial or complete DC maturation. NK cell activity is rigorously directed by the stimulating and inhibitory NK receptors (NKRs), which mainly consist of inhibitory killer Ig-like receptors (KIRs) [20][21][22][23][24] . Furthermore, NK cells also exhibit reciprocal and regulatory interactions with DCs, macrophages, T and B cells, thus operating to intensify or diminish immune response reactions 25,26 .
Virus-specific T-cell mediated responses. The T-cell-specific antiviral response is the main component of the immune system to control viremia 7 . Many T cell subsets have been characterized depending upon the expression of distinct cell surface markers and/or the effector molecules produced by a particular T-cell population. HCV-specific CD8+ T cells are able to control the virus by two effector approaches: (a) they can cytotoxically destroy affected target cells (mainly liver cells) that use HLA class I molecules to present viral antigens on their surface, and (b) they also play a role in controlling viruses by non-cytolytic mechanisms, including cytokine secretion (such as TNF-α and IFN-γ) 2,3,5,16,17,20,27 . Helper CD4+ T cells support the functional mechanisms of cytotoxic CD8+ T cells 28 . Along with that, the initiation of co-stimulatory signalling pathways facilitates the cytotoxic potential through the release of cytokines including IFN-γ and IL-2 29 . Various regulatory T cells (Tregs) also participate in HCV immunology. Tregs have been associated with the HCV-specific exhausted T cell phenotype during the early infection phase, causing T-cell failure and chronic infection. Tregs act as a shield from the immunopathological effects of chronic inflammation 7 . Induction of Tregs can be enhanced by elevated levels of TGF-β, mainly produced by affected liver cells 24,30 .
Humoral immune responses during HCV infection. The majority of patients chronically infected with HCV have HCV-specific neutralizing antibodies, but these are limited in functionality. Several mechanisms are involved in the evasion of viral particles from the humoral immune response 2,4,16 , such as viral quasispecies evolution that exhibits alterations within target epitopes, leading to viral escape from neutralizing antibodies 31 . Thus, their ability to control and clear the infection is limited in the case of HCV.
Adaptive immune response failure in controlling HCV. Several mechanisms that exhibit an HCV-specific anomaly in immunity have been stated in previous studies 2,16,27 . Amongst them, the failure of sustained antiviral T cell responses is a major determinant of persistent/chronic HCV infection. The important mechanisms in T-cell response failure include (a) the incapability of effector T cells to migrate towards the site of infection in the liver, as well as reduced antigenic presentation, and (b) the expression of inhibitory receptors, another mechanism contributing to T cell dysfunction. The key factors in the failure of the immune process include constant antigen activation and reduced CD4+ T cell help, along with the activation of Tregs 2,3,27 . HCV-specific T cell responses (CD4+ and CD8+) are noticeable in persistent infection but are ineffective against HCV. Moreover, CD4+ and CD8+ T cells obtained from chronically HCV-infected patients exhibit maturation anomalies consisting of decreased cytotoxic capacities, lower secretion of Th1-type cytokines, and minimal proliferative potential upon ex vivo antigenic stimulation 16 . T-helper cells prompt DCs to prime CD8+ T cells; the recognition of antigens by CD8+ and CD4+ T cells on the same APC (antigen-presenting cell) is probably the major characteristic of antigen-specific T cell help. Therefore, CD4+ T cell failure may restrict the chances of CD8+ T cell priming via fully stimulated, HCV antigen-loaded DCs 2,3,27 .
Model based insights into the adaptive immune system during HCV infection. Several studies demonstrate that the consequences of a viral infection are governed by the host's potential to elicit strong antiviral responses as well as by viral preventive mechanisms [2][3][4]7,16,17,20,21,32. The current study attempts to model these diverse mechanisms and explores the critical balance and limiting factors associated with the elimination of the virus. The HCV adaptive immune response model comprises almost all the known players of the adaptive immune system. The cells mainly include CD4+ and CD8+ T cells, NK cells, macrophages, Tregs, exhausted T cells, B cells and antibodies, along with various cytokines (IFN-γ, TNF-α, TGF-β, IL-10, IL-12, IL-21, IL-15, IL-2) mediating the cellular signalling (Fig. 1). All of these key players have been seen to be involved in viral clearance, as reported in various studies 3,5,27. Therefore, it is quite plausible that a well-synchronized interaction of diverse kinds of immune cells is required for an effective immune response against HCV; nevertheless, very limited information about the exact interactions within these cross-talks is available.
We have attempted to generate PN models based on the knowledge-based logic diagram (Fig. 1), depicting various states during HCV infection. First, a baseline model (Supplementary File 1) was constructed exhibiting the basal levels of all the entities in the absence of any kind of infection; we refer to it as the Baseline Model. The Baseline Model efficiently demonstrates the normal behaviour of a control system. It was then extrapolated to include the effects of HCV infection and its proteins (Core, E2) on the system, which resulted in two additional models. One represents the successful immune response leading towards clearance of infection, referred to as the Effective Adaptive Immune Response Model, while the second signifies that the system is moving towards chronicity and persistence of infection, referred to as the Failed Adaptive Immune Response Model. Subsequently, a treatment response model was created to analyse the effect of IFN-α/RBV therapy on the immune system. Later, the predictive ability of the PN models was used to examine various immunomodulatory conditions and to predict and propose the best immunomodulatory therapeutic possibilities.
The model assumes that once acute infection is established, the host machinery reacts to eliminate the virus by triggering immune responses. Alongside, the virus continues propagation by utilizing the available nucleotides and amino acids for replication and translation within the host. Viral and host proteins/cells and subtypes in the model are represented by continuous places, whose input transitions continuously add tokens to them with time. The immune cells and various subtypes, based on the expression of markers, are each given a separate "place" in the model in order to easily differentiate and study their behaviour. Initially a token is transferred to each place by input transitions, which represents the existence of a protein and does not correspond to the definite expression level of a protein. Furthermore, the processes occurring in the cells are represented by transitions. Here, the groupings of input and output edges (arcs) depend entirely upon the kind of cellular mechanisms and molecular interactions involved. Thus, input and output edges are used in combinations to model various biological mechanisms that facilitate the flow of cellular signals. Time units are denoted by specific time blocks, in which each transition fires once. Mass action kinetics (1) is used to fire the transitions, and the signal flow is primarily based on the interconnecting arcs of the network.
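The token-flow scheme described above can be sketched as a minimal continuous Petri net step with mass-action firing. This is an illustrative simplification, not the paper's actual model: the place names, the single rate constant and the toy network are all assumptions made for the example.

```python
# Minimal sketch of one time block of a continuous Petri net with
# mass-action firing. Places hold continuous token levels; each
# transition fires at a rate proportional to the product of its
# input token levels (mass-action kinetics).

def step(marking, transitions, dt=0.1, k=1.0):
    """Advance the marking by one time block.

    marking     -- dict: place name -> continuous token level
    transitions -- list of (inputs, outputs); each is a list of place names
    """
    delta = {p: 0.0 for p in marking}
    for inputs, outputs in transitions:
        # Mass-action rate: product of the input places' token levels.
        rate = k
        for p in inputs:
            rate *= marking[p]
        flow = rate * dt
        for p in inputs:
            delta[p] -= flow   # tokens dissipate from input places
        for p in outputs:
            delta[p] += flow   # tokens accumulate in output places
    # Token levels stay non-negative.
    return {p: max(0.0, marking[p] + delta[p]) for p in marking}

# Hypothetical toy network: an antigen place and a T-cell place jointly
# drive cytokine production; the T cell is both input and output, so
# its level is conserved while antigen is converted into IFN-γ signal.
marking = {"antigen": 5.0, "Tcell": 1.0, "IFNg": 0.0}
transitions = [(["antigen", "Tcell"], ["Tcell", "IFNg"])]
for _ in range(10):
    marking = step(marking, transitions)
```

After a few time blocks the antigen tokens drain into the IFN-γ place while the T-cell level is conserved, mirroring how signal flow in the PN is driven purely by the arc structure rather than by detailed kinetic parameters.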
To verify the correctness of the model, simulation runs were used to obtain relative expression values for all the cellular and cytokine responses. These values were then compared with the Baseline Model (Supplementary File 1) to gain insights into the differential system behaviour during each state. These results emphasize the multifaceted dynamics that trigger the adaptive immune response against HCV and also indicate variable levels of biological implications at several stages of infection. For clarity in the PN illustrations, cytokines are highlighted in green while immune cells are represented in blue.
Model of the Effective Adaptive Immune Response system: NK cells as mediators of innate and adaptive immune response crosstalk. Cytokine dynamics is thought to be the main regulatory factor in the entire immune system 6,33,34. Cytokines have generally been divided into various categories depending upon their function: (a) pro-inflammatory cytokines, including TNF-α and IL-6; (b) Th1 cytokines (IL-2, IL-12, IFN-γ), which are produced by Th1-activated lymphocytes; and (c) Th2-type cytokines (IL-5, IL-4, IL-10). Th2 cytokines play a significant role in the downregulation of the Th1 response; this leads to the inhibition of antigen presentation on macrophages and promotes B-cell proliferation, resulting in specific antibody release.
The PN model of the Effective Adaptive Immune Response is illustrated in Fig. 2. The behaviour of significant entities and the plausible relationships amongst various cytokine concentrations are observed via simulations of the model and are represented in Figs 3 and 4. These and many other biological observations are reproduced efficiently by the models generated in this study, which verifies the correctness and soundness of the models.
As observed in the model simulations of cellular response (Fig. 3), there is a surge in the HCV CD4+ response (Fig. 3A), which helps increase the percentage and functional effects of CD8+ T cells (Fig. 3B). HCV-specific CD4+ T cells are vital to the adaptive response as they trigger both cytotoxic and humoral responses 35. They are able to produce Th1 cytokines like IFN-γ that aid in the recruitment of neutrophils and macrophages and cause a robust inflammatory response. A multi-specific, robust, continuous, CD4+ T-cell-specific Th1 response during HCV infection results in the clearance of infection. CTLs, being the key effector cells, facilitate viral clearance using apoptosis-related cytolytic mechanisms and by releasing type I cytokines, IFN-γ and TNF-α. The model clearly depicts that CTLs are substantially activated (Fig. 3C), leading to strong cytotoxicity and the ability to produce a strong IFN-γ response. Smyk-Pearson et al. observed the relative effect of helper CD4+ cells in the spontaneous clearance of acute HCV infection and established that although HCV-specific CTLs are present during infection and are able to produce IFN-γ, proliferate and display cytotoxic behaviour, they still did not ensure resolution of infection; whether these CTLs were initially primed with CD4+ T cell help was the vital factor 36. An initial increase in exhausted T cells (Fig. 3D) is observed, but later the relative expression tapers off as the system moves towards resolution of infection. Amongst other cellular responses, a noticeable increase in NK cells (Fig. 3E) is detected during early infection. NK cells exert their antiviral activity via direct, non-MHC-restricted cytotoxic processes and the production of IFN-γ 21. NK cells also regulate other adaptive immune mechanisms either directly or indirectly. The crosstalk of NK cells, particularly with DCs, macrophages and T cells, is important 21.
NK cells seem to be the chief cellular players mediating the crosstalk between innate and adaptive immune components. A plausible role of NK cells in HCV immune biology is also reinforced by the fact that they are stimulated during the acute infection phase, as shown by an amplified expression of NK cell subsets 37. This coincides with a strong induction of IFN-γ and associated cytotoxicity. NK cells are crucial to the resolution of infection because cytokine-stimulated NK cell lines, as well as primary NK cells obtained from naive individuals, can lyse HCV-replicating cells, predominantly at elevated effector-to-target ratios 38, while simultaneously secreting IFN-γ, which inhibits HCV replication 39. A small amount of Tregs (Fig. 3F) is also observed to be expressed during the adaptive immune response. Although Tregs are negative regulators of T cells and promote the infection towards chronicity, their regulatory effect is required to protect the liver from the damaging effects of inflammation 40. The inhibitory effect is negligible under these circumstances.
The notable cytokines whose variations strongly affect the outcome of infection are shown in Fig. 4. Among cytokine responses, IFN-γ is the main mediator of adaptive immunity, sets in motion various cellular responses and appears highly expressed as a response to infection. However, higher amounts of IFN-γ alone do not ensure recovery; a very high amount of IFN-γ is present in all the models' simulations. A surge in the amount of TNF-α (Fig. 4A) signifies clearance of infection, as it is an antiviral cytokine that effects viral clearance as well as limiting tissue damage. TGF-β (Fig. 4B) promotes infection-related tumour development and tissue injury. In the resolved infection, a comparatively lower amount of TGF-β is observed. Importantly, it is the ratio of TGF-β to TNF-α that matters rather than the absolute standalone quantities: in resolved infection a higher TNF-α to TGF-β ratio is present (Fig. 4). As Tregs are activated in small numbers, IL-10 (Fig. 4C) is also noted to be low compared to chronic infection. IL-12 (Fig. 4D) is a highly important cytokine regulating CD8+ cells and enabling their differentiation. On the other hand, IL-2 (Fig. 4E) is important for the CD4+-mediated immune response and the survival of CD8+ cells. An increase in the levels of IL-21 (Fig. 4F) is noted, which is also a critical determinant of the CD4+ T-cell response 41. It helps in the proliferation of NK cells and also mediates the crosstalk between B cells and T cells leading towards humoral responses. Thus, it induces both adequate isotype switching in B cells and an ADCC-specialized NK cell subset response (Fig. 1) 41,42. The main effects of a successful adaptive immune response include, but are not limited to, an increase in cytotoxicity via CTLs, increased TNF-α and IFN-γ production, and lower levels of Tregs and IL-10. These effects have been reproduced by our model, making it a good base for further experimentation.
Model of the Failed Adaptive Immune Response system. It is observed in Fig. 6 that, in the absence of viral clearance, there is a decrease in the CD4+ cellular response (Fig. 6A) as compared to a resolved infection (Fig. 3A). This results in scarce production of Th1-type cytokines along with decreased proliferation in response to antigenic stimulation. CTLs are lower in concentration (Fig. 6C), with impaired proliferation and cytokine production as a result of comparatively low IFN-γ production. Evidence also suggests that during chronic HCV infection, HCV-specific CTLs are few and exhibit decreased performance; they also display anergic features with reduced induction of type I IFNs 6. The CD8+ T cells experience the exhaustion phenomenon (Fig. 6D) due to chronic antigenic stimulation, leading to increased expression of inhibitory receptors such as Programmed cell death protein 1 (PD-1), Cytotoxic T-lymphocyte-associated protein 4 (CTLA4) and T cell immunoglobulin and mucin domain 3 (TIM3). T-cell exhaustion is believed to be a critical determinant in chronic infection, leading towards reduced cytotoxicity and persistence of infection 43. Convincing evidence exists for exhausted and anergic T cells during chronic HCV infection 44,45. Elevated levels of PD-1, TIM3 and CTLA4 expression are related to CTL dysfunction, resulting in lesser quantities of TNF-α and IFN-γ compared to their counterparts 44,45. The core protein of HCV also disrupts the host adaptive immune response by affecting the expression of PD-L1, which supports T cell dysfunction, thus leading towards lower CD8+ cell functionality. The NK cell count is lowered (Fig. 6E) as a result of the HCV E2 protein inhibiting NK cell function by crosslinking the CD81 receptor 46. Furthermore, a relatively high level of Tregs (Fig. 6F) is observed during chronic infection, which downregulates the effector function of immune cells.
The analysis of the cytokine response revealed that the TNF-α to TGF-β ratio is low compared to resolved infection. It is observed that TNF-α (Fig. 7A) is relatively reduced while TGF-β (Fig. 7B) production is increased. IL-10 (Fig. 7C) is highly increased, which strongly attenuates the proliferation of CD8+ and CD4+ T cells. TGF-β, being a regulator of thymic T-cell development and differentiation, maintains T-cell homeostasis 47. These immunomodulatory cytokines are believed to be negative repressors of inflammation and regulators of hepatic immunity. An alternative probable mechanism of aberrantly regulated cytokines in chronic infection results from Treg-mediated immune regulation. These cells release both IL-10 and TGF-β in high amounts, thus inhibiting proliferation as well as cytokine release by T cells, either directly or through other related mechanisms 48. Suppression of IL-12 (Fig. 7D) and IL-2 (Fig. 7E) occurs during chronic infection, leading to low production of IFN-γ via T cells. It also leads towards more Th2-mediated mechanisms. IL-21 expression is also lowered (Fig. 7F), which might lead to impaired humoral responses. IL-21 is negatively correlated with HCV RNA 49; thus IL-21-producing CD4+ cells might help in the rescue of HCV-specific CD8+ T cells for the control of viremia.
Treatment response model. Once it was established that the model satisfies all the major features of the various stages of infection, we further explored how the system is supposed to behave upon the introduction of a treatment. Hence, the current immunomodulatory treatment of HCV, which comprises PEGylated interferon-α (IFN-α) and ribavirin (RBV), has been employed in the model. PegIFN-α/RBV has been regarded as the standard of care (SOC) for a long time, with sustained virological response (SVR) rates estimated at 40-50% in patients infected with genotypes 1 and 4 and up to 80% in individuals with genotypes 2 and 3 50.
SVR is defined as HCV RNA that remains undetectable after completion of treatment and even after six months 50. Although direct antiviral agents have shown promising results in terms of lowering viral infection, these treatments are reportedly hindered by resistance and relapse. The prime goal of DAAs is to inhibit the specific viral proteins which help in viral replication. In contrast, immunomodulatory treatments enhance the host's own immune response mechanisms to eliminate the virus from the body. Therefore, immunomodulatory treatments cannot be ruled out of HCV treatment regimens.
Consequently, we extended the model by introducing the entities such as PEGylated-IFN-α and RBV to explore the performance of various cellular and cytokine responses during treatment stage (Fig. 8). Accordingly, a Treatment response model was generated and verified that it can accurately recapitulate the basic mechanisms and adaptive responses during IFN-α/RBV treatment of chronic HCV infection. The effects observed during the treatment response model are analysed according to the mechanism of action of both agents mentioned in literature.
Several theories regarding the mechanism of action of ribavirin have been developed. RBV is known to be a guanosine analogue supposed to elevate the effector function of IFN 51 and avert relapse by raising the mutation rate of HCV 52. RBV is also known to deplete the intracellular guanosine triphosphate (GTP) reservoir by acting as an inhibitor of inosine monophosphate dehydrogenase (IMPDH) 53. HCV RNA-dependent RNA polymerase is directly affected by antiviral RBV [51][52][53][54][55]. Moreover, RBV greatly affects adaptive immune responses, as it is involved in tipping the T helper balance from a Th2 cytokine profile to an effective antiviral Th1 cytokine profile (Fig. 8).
Besides, IFN-α therapy induces high rates of SVR by preserving substantial multi-specific HCV-specific CD4+ T cell responses (Fig. 8C) 56. It is predicted that the treatment restores IFN-γ production (Fig. 8H) via NK cells, as they induce cytotoxicity that is related to the virologic response 57. Therefore, NK cell activation indicates responsiveness to IFN-α-based treatment and suggests a connection between innate immune cells and viral clearance. However, in the model there is no significant elevation in NK cell levels, which might explain treatment failure or a reduced response to treatment. IFN-α has several stimulating effects on the immune system; however, some mechanisms of IFN-α could have negligible effects, or more significant ones, as treatment lingers on. It was observed through the model simulations that HCV replication is halted, downregulating the production of HCV proteins. However, we are interested in demonstrating the important cytokine and cellular regulators of adaptive immunity which are differentially affected during the course of HCV treatment. Several cellular responses and cytokine levels were altered by the treatment perturbations introduced in the model. Amongst them, a noteworthy rise in CD4+ cells (Fig. 8C) and in cytokines such as IL-21 (Fig. 8I) and IL-4 (Fig. 8J), and a strong reduction in IL-10 (Fig. 8G), are observed. It is also known that higher IL-10 concentration levels are present in chronic infection [58][59][60]. After the therapy response, the IL-10 levels were observed to be downregulated (Fig. 8G). Moreover, the IL-12 levels are found to be low in response to treatment compared to chronic infection (Fig. 8L); however, high expression levels of IL-12 are determinants of resolved infection 33,55. Both IFN-α and RBV differentially regulate the Th1 and Th2 cytokines, as shown in several studies 51,55,61.
Their combined effects result in the suppression of IL-10 production while maintaining relatively good expression levels of IL-12, which eventually favours effector T cells for viral clearance. However, this is not always the case during patients' treatment response; this balance can be toppled very easily by the various other partners involved, which might result in failure of the treatment. In the case of treatment failure, a chronic increase in PD-1 and other inhibitory receptor expression on T lymphocytes is observed, and hence an increase in exhausted T cells and Tregs. The most critical factors highlighted during the analysis of the Treatment response model include IL-10, IL-21, IL-12 and IL-2, which determine the treatment outcome in terms of clearance of infection by modulating the pro-inflammatory response. These determinants were selected for further analysis to propose combination therapies of immunomodulatory agents in conjunction with PEGylated-IFN-α/RBV treatment.
In summary, IFN-α/RBV acts to enhance the immune system and regulate the negative effects of immune activation. The discrepancies observed in responders and non-responder patients seem to be mediated by the inherent defects or differences in baseline activation of several cytokines in various individuals. It is primarily based on the immune signatures unique to every individual. It is noteworthy that the microenvironment of each cell is different and it might affect the cytokine balance very precisely. Pinpointing those limiting factors during treatment response is of great importance for immunomodulatory therapies of HCV.
Proposed therapeutic interventions: Effect of perturbations by inhibition of specific targets.
In the context of PNs, we had the liberty to showcase various types of inhibitions and knockout experiments. Therefore, we studied various host immune regulatory mechanisms which affect the proliferation and survival of immune cells (CD8+ and CD4+ T cells, CTLs, Tregs, exhausted T cells, and NK cells). Such regulatory mechanisms are necessary for maintaining normal physiology and help maintain a balance amongst immune-related responses by attenuating them and limiting tissue injury due to increased inflammation 24,34,[62][63][64]. As discussed earlier, in the context of viral infections, and specifically HCV, similar mechanisms are activated to help the survival and propagation of the virus 58,64,65. Based on these facts, it is assumed that HCV regulates hepatic adaptive immunity by activating such regulatory mechanisms. These mechanisms are correspondingly potential therapeutic targets for recovering the host immune responses. The critical factors selected during the study of the treatment response model (IL-12, IL-21, IL-10, exhausted T cells) were perturbed in further in silico experiments by increasing or decreasing the levels of various cytokines in the model. IL-12 was not included in further analysis as the literature shows that IL-12 is not an effective treatment option for HCV 66. The simulation analysis enabled us to propose three therapeutic options which might be helpful for immunomodulation during HCV infection treatment, alongside classical therapy.
Reversal of T cell exhaustion. Exhausted HCV-specific T cells displaying anergy are present in chronic HCV infection. This could be exploited through a possible therapeutic approach which could effectively reverse the T-cell exhaustion and enable the immune cells to control the virus. T-cell dysfunction/exhaustion is promoted by several immune regulatory mechanisms, including but not limited to those presented in Fig. 9. The proposed therapeutic options are discussed below, where the varying levels of CD4+, CD8+, CTLs, exhausted T cells, NK cells and Tregs can be exploited for the proposed treatments. The proposed immunomodulatory treatments include IL-21 therapeutics, blocking of inhibitory receptors and the introduction of anti-IL-10 antibodies to help achieve a sustained adaptive immune response.
IL-21 therapeutics can partially restore cytolytic activity. As discussed earlier, IL-21 is a crucial factor in the CD4+ T-cell response; it also helps in NK cell proliferation and mediates the crosstalk between T cells and B cells 41. IL-21 may exert a controlling function on Tregs while positively contributing to CD8+ T-cell responses 49. In the first perturbation experiment, a hypothetical recombinant IL-21 was studied as a co-stimulating immunomodulatory agent in HCV treatment. The levels of IL-21 were increased by introducing a continuous source place in the model, such that its effect on the various cells involved in HCV clearance could be analysed simultaneously. Analysis of the simulation results (Fig. 9C(i)) revealed that not only is the CD4+ T cell response improved (black line, Fig. 9C(i)) but there is also a marked increase in CTL activity (blue line). This observation reveals that the hypothetical recombinant IL-21 treatment recovers the CD4+ T cells in the system and also significantly stimulates cytolytic mechanisms. It aids the control of viremia and facilitates HCV clearance. As a co-stimulatory agent, IL-21 appears to be quite an effective cytokine that can sustain T-cell responses and thus has the potential to be considered in combination therapy to augment current therapies in controlling the viral infection 49,67,68.
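The perturbation above amounts to adding a continuous source place that keeps injecting tokens into the IL-21 place and then comparing the run against an unperturbed control. The following sketch illustrates that idea with hypothetical place names, rates and interactions chosen purely for the example; it is not the paper's actual network.

```python
# Sketch of an in-silico perturbation experiment: a continuous source
# injects tokens into a cytokine place ("IL21", hypothetical) so its
# downstream effects can be compared against an unperturbed control run.

def simulate(il21_source=0.0, steps=50, dt=0.1):
    # Token levels for three hypothetical places.
    m = {"IL21": 0.1, "CD4": 0.5, "CTL": 0.1}
    history = []
    for _ in range(steps):
        # Source transition: constant-rate token input (the perturbation).
        m["IL21"] += il21_source * dt
        # Illustrative arcs: IL-21 sustains the CD4+ response, and CD4+
        # help boosts CTL activity; each place also decays slowly.
        m["CD4"] += (0.2 * m["IL21"] - 0.05 * m["CD4"]) * dt
        m["CTL"] += (0.3 * m["CD4"] - 0.05 * m["CTL"]) * dt
        history.append(dict(m))
    return history

control = simulate(il21_source=0.0)     # no source place
perturbed = simulate(il21_source=1.0)   # continuous IL-21 source active
# The perturbed trajectories for CD4+ and CTL end up above the control,
# qualitatively mirroring the effect reported for Fig. 9C(i).
```

The same pattern generalizes to the other perturbations: blocking a receptor or introducing an antibody corresponds to removing or damping the relevant transition rather than adding a source.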
Blocking of inhibitory receptors lowers exhausted T-cells count.
As discussed earlier, the loss of CD4+ T cell help in the proliferation of CD8+ T cells results in CD8+ T cell exhaustion and persistence of infection. Helper CD4+ T cells are significant in the activation of DCs to prime CD8+ T cells; however, sustained up-regulation of the inhibitory receptors PD-1, CTLA4 and Tim3 drives this exhaustion. The HCV core protein strongly upregulates the expression of PD-L1, a ligand for the PD-1 receptor, which is known to promote T cell dysfunction 69. Blocking these inhibitory receptors poses a significant therapeutic option, as several studies on various viruses have shown a significant degree of therapeutic promise 70. Subsequently, in the next perturbation experiment, we blocked the T cells expressing inhibitory receptors to measure the effects on cellular responses. PD-1 blockade does not seem a clinically wise decision given its importance in maintaining the normal physiology of the liver; thus, blockade of Tim-3 71 or CTLA-4 may hold some immunotherapeutic promise. Upon inhibition of cells expressing the inhibitory receptors, a considerable reduction in viremia along with expression of a Th1 profile is observed. From Fig. 9C(ii), it can be observed that the exhausted T cells are comparatively low in proportion (pink line) and the CD8+ T cells (red line) are slightly more highly expressed.
Anti-IL-10 antibody helps in a sustained NK cell response. Another plausible immunomodulatory strategy could be counteracting IL-10 to reduce its regulatory effect on T-cell maturation and proliferation. Enhanced production of the IL-10 cytokine has been noted in earlier studies due to the effects of the core protein of HCV 72. IL-10 is an immunomodulatory cytokine which is primarily considered to reduce the cytotoxic potential of T cells as well as NK cells 58. However, the immunomodulatory potential of IL-10 is also essential for regulating inflammatory responses and helps to reduce immune-related tissue injury 59. IL-10 also inhibits IL-12 production 65,73, even though IL-12 helps in the activation of NK cells via DCs. The inhibitory effect of IL-10 on activated macrophages is also evidenced by decreased TNF-α 74. Furthermore, it is known that IFN-γ suppresses IL-10 concentration levels 58,60; thus, the ratio of IL-10 to IFN-γ is critical in deciding the fate of infection 6. The hypothesis that blocking/inhibiting IL-10 production would result in an improvement of the HCV-specific T-cell response was tested further. Hence, we designed an in silico experiment to check the effect of decreased concentrations of IL-10 in the system by introducing an anti-IL-10 antibody. Figure 9C(ii) shows the simulation results of this perturbation experiment. Apart from the rescue of HCV-specific CD8+ T cell responses, an increase in the CD4+ response is also observed. Anti-IL-10 antibody treatment results in a sustained NK cell response, which is quite necessary for the upregulation of innate and adaptive immunity.
Hence, it is suggested that inhibition of various co-inhibitory pathways via hypothesized inhibitors, IL-21 treatment and anti-IL-10 antibody may differentially enhance CTL effector functions and CD4+ cells, and are likely to improve the therapeutic response when used in combination with other conventional therapies available, such as IFN-α/RBV.
Discussion
Adaptive immune responses take weeks or sometimes months to initiate after an established viral infection. The role of liver-specific aspects as well as viral proteins in the attenuation of adaptive immune responses during HCV infection is still unclear. The immunomodulatory activity of HCV proteins, mediated by the envelope protein E2 and the core protein, is reinforced by in vitro cell culture experiments 46,75. NK cells are vital direct mediators of immune responses and may be suppressed by the HCV E2 protein. Similarly, HCV core-mediated DC dysfunction occurs via attenuation of IL-12; thus, inhibition of antigen presentation on DCs might be further mediated by the effects of viral proteins. Furthermore, T-cell exhaustion is a critical determinant in chronic infection. Similarly, it is observed that patients who clear the infection show mature CD8+ memory cells maintained by an increased number of HCV-specific CD8+ cells. Failure to clear the infection results in the persistent display of HCV peptides on the hepatocyte surface; immune-mediated liver injury is the consequence of the chronic activation or presence of CTLs. Additionally, the increased production of TGF-β and other related pro-inflammatory cytokines activates stellate cells, the primary cause of fibrosis, which results in significant liver damage. A strong antibody response to HCV infection is detected quite early in the infection phase; however, the functional capability of the neutralizing antibodies is quite low. Clearance of infection requires a vigorous, robust and multi-specific antiviral host immune response 76.
However, despite such an adverse scenario, a considerable percentage of individuals clear the virus without any treatment, a phenomenon termed spontaneous clearance. This offers hope that fine-tuning the system towards T cells may provide a plausible direction for the resolution of viral infection in infected individuals. The treatment options available for now are the classical IFN-α/RBV immunomodulatory therapy and the recently introduced DAAs, which work by blocking specific viral proteins. Immunomodulatory therapy of HCV involving IFN-α is still considered SOC therapy for most patients. The introduction of direct-acting antivirals (DAAs) has significantly improved treatment outcomes, but it is important to note that these treatments have associated limitations: resistance, toxicity and premature cessation of therapy are major concerns for these targeted therapies. This suggests that the host immune response is an essential element of therapy for HCV elimination. Thus, immune modulation might prove to be an effective regimen when considering combination therapies. The strongest evidence that immune modulation is a key component of DAA-based therapies is the difficulty of removing ribavirin from the treatment regimen 77. Also, it is important to note that it is not enough to stop RNA replication (the goal of DAAs); complete eradication of the virus from the host is also quite significant in maintaining SVR and preventing relapse. Intrinsic immune signatures of the individual host may also determine the outcome of treatment. That is why it is quite significant to overcome immune failure in order to clear the virus from the host body. Also, patients who have already failed to respond to IFN-α/RBV therapy need new therapies, which are a long way off.
Although both treatments have resulted in good responses to therapy in various individuals, a large proportion of patients are still null responders, and many patients suffer from toxicity and negative side effects of these treatments. Consequently, the development of new therapies and treatments remains quite a challenging task ahead. In this regard, new immunomodulatory agents which can tweak the system towards more specific T-cell responses with a Th1 profile are the need of the hour. However, wet-lab studies do not have the liberty to study all the related immune parameters in a single experiment. Similarly, the high rate of data being generated by various individual studies also needs to be analysed as a single system.
In this regard, we have successfully applied the PN approach to model a biological signalling network which can be used to study various dynamics of the system in the presence of internal or external stimuli. Most mathematical/computational models require detailed parameters describing the kinetic characteristics of the network, which are typically quite difficult to obtain for all the entities present in the highly interconnected signalling network of proteins. In contrast, the method used in this study does not necessarily require detailed quantitative data; rather, it models signal flow in the PN by token accumulation and dissipation within places (proteins) over time. The tokenized activity levels computed by this kind of method are abstract quantities whose changes over time correlate with changes that occur in the relative quantities of active proteins present in the cell. Furthermore, it can be assumed that the in silico experiments performed compared the changing levels of proteins relative to the "control". Moreover, as several researchers have observed, the connectivity of a biological network commands, to a great degree, the network's dynamics 15,78,79. Many have postulated that biological network connectivity has evolved to have a stabilizing effect on the overall network dynamics, making the network more robust to local fluctuations. Thus, quantitative data such as network parameters, kinetic rates and protein binding affinities are not necessarily required to qualitatively model the network. After verifying and validating the models against reference published data, we utilized the predictive ability of the model to narrow down and test various immunomodulatory agents in combination with IFN-α/RBV.
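Because the tokenized activity levels are abstract quantities, results of a perturbed run are interpreted relative to the control rather than as absolute concentrations. A minimal sketch of that comparison, using hypothetical entity names and final token levels, could look like this:

```python
# Sketch of a "relative to control" analysis: final token levels from a
# perturbed run are expressed as fold-changes over the baseline run.
# Entity names and values here are hypothetical, for illustration only.

def fold_change(perturbed, baseline, eps=1e-9):
    """Fold-change of each entity's final token level over baseline."""
    return {name: perturbed[name] / (baseline[name] + eps)
            for name in baseline}

# Hypothetical final token levels from two simulation runs.
baseline_levels = {"IFNg": 2.0, "IL10": 1.5, "Tregs": 0.8}
treated_levels = {"IFNg": 3.0, "IL10": 0.5, "Tregs": 0.4}

fc = fold_change(treated_levels, baseline_levels)
# fc values above 1 indicate upregulation relative to control
# (e.g. IFN-γ here), values below 1 indicate downregulation
# (e.g. IL-10 and Tregs here).
```

Reading simulation output this way keeps the analysis qualitative, which is exactly why kinetic rates and binding affinities are not required by the approach.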
These included IL-10 antibody, IL-21 treatment and blockade of inhibitory receptors of T cells, which produced promising results in terms of improving CD8+ and CD4+ T-cell responses, including the reversal of exhausted T cells, increased cytotoxic potential of CTLs and an enhanced NK-cell response. Thus, in our opinion, current treatments should be used in conjunction with immunomodulatory agents which can remove the repressive effects of those cytokines responsible for failure of the adaptive responses. IL-10 antibody therapy showed promising results in the PN model analysis; this finding suggests that it might be usefully applied to improve disease prognosis, leading towards a reduction in chronicity. This observation is in close agreement with experiments carried out on HBV by Brooks et al. 80 . Although IL-10 helps to decrease disease activity by reducing inflammation-mediated tissue damage 81,82 , it is also known that IL-10 inhibits CD8+ priming and cytolytic mechanisms in HCV infection 83 . Nevertheless, the blockade of IL-10 via antibody therapy showed promising simulation results (Fig. 9). The IL-21 perturbation experiment also demonstrated decent outcomes in terms of NK-cell, CD4+ T-cell and CTL responses. IL-21 has previously been shown to regulate the effector function of CD8+ T cells 49 . The constructed therapeutic PN model also highlighted the importance of the IL-21 cytokine in relation to improvement in treatment response.
Thus, in our opinion these treatment options should be considered for combination therapy regimens together with direct-acting antivirals (DAAs) and other immunomodulatory agents; in this way HCV replication could be better controlled by the DAA while the immune responses are simultaneously improved. These and other potential combinations can be widely tested through our PN models, and the best outcomes can then be subjected to in vitro experimentation. We believe that these approaches will contribute immensely to complementing other methods in making biological predictions regarding immune control of other infections as well.
Methodology
The modelling approach employed in this study is demonstrated in Fig. 10. The approach is adapted from our previous pilot study 32 to systematically build an adaptive immune signalling PN model.
Overview of the modelling approach. An in-depth literature survey of the experimental studies facilitated the generation of a logic-based diagram of the comprehensive adaptive immune signalling pathways in response to HCV infection. It signifies the imperative signalling pathways triggered during HCV infection in the form of a comprehensive integrated network. This constructed logical network was then subjected to PN modelling, a mathematical formalism for network construction, analysis and simulation 13,14,84 . After model generation, various dynamic behaviours were studied via simulation runs to check for uniformity and agreement with the published data. The modelling framework efficiently represents the constructed immune signalling network and computes the eminence of various components and processes occurring within the network. The basic model was then extended to perform in silico experiments to predict various outcomes under altered stimuli.
Conversion of the pathway into computable format. The PN model of the integrated signalling pathway for adaptive immune responses against HCV was constructed employing the continuous Petri net approach using the tool Snoopy 2.0 85 . It was a step-wise process to build the baseline model and later extend it to include other related parameters of treatment response. The model was then checked and verified for correctness, completeness and consistency according to PN theory.
Continuous Petri net (CPN). A CPN is an extension of a PN 84 in which the marking is given by positive real numbers represented by tokens. The token value represents concentration. The semantics of a continuous PN is given by the corresponding set of ordinary differential equations (ODEs) describing the continuous change over time of the token value of a given place, where the pre-transition flow results in a continuous increase and the post-transition flow results in a continuous decrease. CPNs are presented as a single holistic system to analyse the biological behaviour of each entity, especially for in silico experiments. Biological regulatory networks, physiochemical networks, gene regulation, transcriptional, epigenetic, protein-protein interactions and signal transduction can all be readily modelled using CPNs 86 .
In a typical PN model, circles represent places and boxes represent transitions, which here are continuous in nature, depicting their true nature within complex biological processes. Transitions in a PN represent interactions among proteins, exhibiting the effects of a source entity on a target entity. Transition firing triggers the source place to release the assigned tokens, called the token count, which in turn influences the target place. Token flow in this way enables signal propagation through directed interactions within a cellular signalling pathway. This token flow also depends upon the rates of transitions, which correspond to the relative concentration levels of reactants and can thus be used to model biological interactions and related enzyme kinetics. PNs can easily create models having all continuous transitions, whose rates are differential equations depending on the place markings or tokens (represented by dots or numbers within the places). The constructed PN models include places (representing genes/proteins) and transitions (representing processes such as activation and inactivation) connected via arcs (edges). Inhibitory arcs are also supported in CPNs, where an inhibitory arc inhibits the token flow from an input place to a transition. This feature is very useful for modelling gene/protein repression in regulatory networks and for performing various in silico knockout/inhibitory experiments. The network dynamics of the designed model were obtained by executing a simulation run for each model. The transitions are fired randomly to simulate the signalling rates through random interaction occurrences. The averaged token counts are comparable with experimentally measured variations in the relative expression concentrations of distinct entities in the signalling pathway, which reduces the need for kinetic parameters 11 .
Thus, the exact kinetic parameters for each enzymatic reaction are not used in the model; rather, the relative activity change is determined by simulation runs. The model is based on the assumption that the main protagonists in signal propagation through a network are the connections amongst the various entities involved 15 . These connections (positive, negative) determine the effector functions of the signalling network. The model uses this interconnectivity of the entities and forms a dynamic system which evolves with time.
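As an illustration of these continuous token-flow semantics, the sketch below Euler-integrates a toy three-place net: a constant source place, a target place, and an inhibitor place acting through an inhibitory arc on the activating transition. This is not the authors' Snoopy model; the places, rate constants and the IL-10-like inhibitor are invented for illustration, and comparing markings of 0 versus 100 tokens mimics the in silico knockout experiments described above.

```python
def simulate(inhibitor_tokens, t_end=20.0, dt=0.01):
    """Euler integration of a toy continuous Petri net:
    source --t1--> target, with an inhibitory arc from the
    inhibitor place damping transition t1 (rates are invented)."""
    source = 100.0                     # boundary place, constant marking
    target = 0.0                       # token count ~ relative activity
    inhibitor = float(inhibitor_tokens)
    k_act, k_decay = 0.05, 0.1         # made-up transition rates
    for _ in range(int(t_end / dt)):
        rate_in = k_act * source / (1.0 + inhibitor)   # pre-transition flow
        rate_out = k_decay * target                    # post-transition flow
        target += dt * (rate_in - rate_out)
    return target

active_normal = simulate(inhibitor_tokens=100)   # inhibitor "expressed"
active_knockout = simulate(inhibitor_tokens=0)   # in silico knockout
print(active_knockout > active_normal)  # → True: knockout lifts repression
```

The averaged token count of the target place rises sharply once the inhibitory arc is removed, which is the qualitative comparison (perturbed run versus control) that this kind of model supports without kinetic parameters.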
Formal definitions of Petri nets are as follows. Directed Bipartite Graph. A directed bipartite graph is a special case in graph theory having two distinct subsets of vertices such that the subsets do not have any common elements, and edges always link members from different subsets. Verification of the model. The initial verification of the model representing HCV-induced adaptive immune signalling was performed using theoretical assumptions, where token numbers represent the relative expression level of an entity, corresponding to a gene inhibitory/knockout model. Simulation of the system is carried out using the PN algorithm in which the initial marking is either 0 (null expression) or 100 (expressed gene) for entry nodes (HCV). It demonstrated that the model suitably imitates the observations derived from multiple experimental analyses. In order to verify and validate the generated models, appropriate simulations were used to measure the key parameters of immune function, which presented an integrated view of the entities involved in acute, chronic and resolved HCV infection under the stress of various internal and external stimuli. Comparing the results of these simulations with the inferences from published data led to the verification of the model. Subsequently, perturbations were introduced in the model to gain insights into the trends in molecular activity in response to external stimuli such as therapy (IFN-α/RBV) or immune modulation. In this study, we only analysed the qualitative aspects of the data and behaviours, and not the absolute quantitative values, in order to present the system in a convenient and manageable way and to provide simplifications over the in vivo conditions for studying the phenomenon of cytokine dynamics and treatment responses against HCV infection.
Conclusion
In addition to expanding the knowledge of signalling pathways of the immune system, a major challenge for future studies is to pinpoint the exact functions of the entities involved in these pathways in the context of infectious diseases. The roles of several cytokines in varying conditions and/or in conjunction with other inflammatory responses are determining factors of the outcome of infection, and hence critical to understanding the dynamics of specific immune responses. This will also be essential for progressing towards the design of an effective vaccine against HCV, as one of the major challenges in this area has been the lack of understanding of the requirements for the induction of protective immunity. Though challenging, we were able to construct a comprehensive model of the adaptive immune system provoked in response to HCV infection. The model provided significant insights into the immune dynamics in response to viral infections, as well as effective perturbation experiments. We strongly believe that similar prior-knowledge-based modelling approaches could be exploited to test several hypotheses and would pose a great benefit in terms of analysis time, cost and labour. Such model-based studies can be extended to other infection systems to test specific hypotheses and introduce real-time intervention experiments; the best scenarios could later be verified and confirmed for promising outcomes through in vitro/in vivo experiments.
SOME ASPECTS OF THE ECOLOGY OF FRESHWATER ALGAE IN THE DENSU RIVER AND TWO TRIBUTARIES IN SOUTHERN GHANA
Studies on the composition and abundance of the phytoplankton in River Densu and two of its tributaries, Rivers Adeiso and Nsakir, were carried out at ten sampling sites, including seven from different regions of the river basin, namely Afuaman, Akwadum, Densuso, Manhean, Machigeni, Nsawam and Weija, as well as Adeiso and Pokuase. Sampling was done monthly at each sampling site from January to December, 2006. Physical and chemical parameters of the river water were studied; these parameters varied with the sampling sites and the time of the year. Water samples for phytoplankton identification and enumeration were collected at each sampling site. Physico-chemical conditions of the river were assessed during sample collection or in the laboratory. Correlation analysis showed that there was a positive correlation between algal genera and the measured physical and chemical parameters of the river water.
Introduction
Ecologically, the algae occur in all types of habitats, where they are major primary organic producers and therefore a fundamental part of the food chain, especially in most aquatic environments, with a profound influence on life on earth. The major freshwater bodies have characteristic features that influence their algal flora.
Rivers provide habitats which are very different from those of ponds and lakes because they are subject to changes along their course as well as complications from seasonal changes. The problems of maintaining a floating population in a river are enormous, since the products of division are continuously being transported downstream. Thus, it would seem that a true phytoplanktonic community maintaining itself by active reproduction of the cells is only possible in rivers under conditions of reduced flow. Indeed, Rzóska et al. (1955) observed that the phytoplankton of the River Nile is reduced considerably during flood periods of fast water flow. Eaton (1965) recorded about 1000 planktonic algal cells per ml in the River Niger at the time of lowest river level in July. The number continually declined as the water level rose. The fall in algal numbers was attributed mainly to the dilution of the river by an increasing volume of plankton-poor water entering from its tributaries.
The hydrology and phytoplankton of River Sokoto, also in Nigeria, studied earlier in 1956 and 1957 by Holden (1960), were influenced by the water level during the flood period from April to October.
Phytoplankton production in River Oshun, lying between the Ibadan and Ile-Ife cities in Nigeria, showed a good positive correlation with dissolved nutrients, conductivity and water transparency, and an inverse relationship with water level and current velocity (Egborge, 1974). Iltis (1982, 1984) found that Rivers Bagoe, Comoe and Leraba in the north of Cote d'Ivoire, with a tropical transitional flood regime, had lower mean algal numbers of 27.8, 33.9 and 15.5 planktonic algae per ml, respectively. The annual peaks were dominated by euglenophytes and chlorophytes. Rivers Maraoue, N'zi and White Buandana in central Cote d'Ivoire, with an attenuated equatorial transitional flood regime, studied together with the northern rivers in 1977, had higher mean algal numbers of 351.6, 138.2 and 216.0 per ml, respectively. The lowest densities were recorded over the flood period from September to December, and the cyanophytes and chrysophytes generally formed an insignificant proportion of the phytoplankton. Biswas (1968) also observed two peak phytoplankton populations in the Black Volta, in February and April 1964. The February peak was dominated by the centric diatom Aulacoseira granulata, and the April peak by the diatom Synedra acus. However, the populations of Synedra acus as the dominant species were eventually replaced by Anabaena aphanizomenoides by the third week of May 1964. Biswas (1968) found the diatoms and chlorophytes to be the most diverse groups of the Black Volta, recording 35 taxa for each of these two groups. Of the remainder, there were seven taxa of cyanophytes, two each of cryptophytes and dinoflagellates and one euglenophyte taxon, making 82 taxa in all. Egborge (1974) reported 60 taxa for River Oshun, with diatoms as the most diverse group with 31 taxa, followed by chlorophytes with 20 taxa, then five cyanophytes and lastly a single dinoflagellate taxon. According to Livingstone (1963), the dominance of the diatoms in West African rivers is perhaps due to the presence of silica as the most abundant oxide. Amuzu (1976) found appreciable levels of dissolved oxygen in River Densu, while Kpekata and Biney (1979) reported a pH range of 6.9 to 7.7 and conductivity values ranging from 360 to 2300 µS cm-1.
River Densu forms part of the coastal river basins and is one of the most important water sources for the Eastern and Greater Accra Regions of Ghana. This river has been studied in this investigation to provide much-needed pertinent information on the identification and enumeration of the phytoplankton population at 7 stations along the river and 2 tributaries over a year.
Study site (a) River Densu
This river takes its source from the Atewa range of hills at an altitude of 0.64 km above mean sea level, lying between Latitudes 6º 04'N and 6º 10'N, and between Longitudes 0º 40'W and 0º 03'W, near Kibi, in the Akim Abuakwa District of the Eastern Region of Ghana. It flows in a south-easterly direction till it reaches Mangoase, where it changes course and flows generally southwards till it enters the Gulf of Guinea, covering a distance of 116 km (Fig. 1). The sampling sites and their localities are Adeiso, Akwadum, Ashalaja, Densuso and Nsawam in the Moist Semi-Deciduous Forest; and Afuaman, Machigeni, Manhean, Pokuase and Weija in the Coastal Savanna Zone, as shown in Fig. 1. Members of the algal groups Bacillariophyta, Chlorophyta and Cyanophyta were studied at ten sites, eight along the course of River Densu situated in 2 different ecological zones, and two tributaries, Rivers Adeiso and Nsakir (Fig. 1). (i) Collection and Preservation of Phytoplankton samples. Water samples from the photic zone for phytoplankton assessment were collected at each indicated site of the Densu River and the tributaries monthly, for 24 months from January 2010 to December 2012. Where the water was relatively shallow, the sample was taken from just off the bottom to the surface. About 1 ml of Acid-Lugol's solution (Lugol's iodine) was added to each 30 ml water sample from each site at the time of collection (Prescott, 1970). The samples were then transported to the laboratory for later identification and enumeration of algae. (ii) Enumeration and Identification of Phytoplankton species. The phytoplankton was enumerated using 25 ml of each of the iodized samples, following the procedure of Biswas (1966). Phytoplankton density, measured as numbers of individuals per ml, was estimated by counting individuals, whether single cells, colonies or filaments, in a counting cell after about an hour of sedimentation. Identification of the species of planktonic flora was done according to the works
of Huber-Pestalozzi (1938) for Cyanophyta, Kramer and Lange-Bertalot (1988) for Bacillariophyta, and those of Ettl (1983) and Ettl and Gartner (1988) for Chlorophyta. (c) Studies on the river chemistry and physical factors. All determinations were done following established pertinent methods (APHA, AWWA and WEF, 1995; UNESCO/WHO, 1978; WHO, 1987).
Results
Mean monthly Temperature and pH of the water of River Densu and the tributaries, River Adeiso (Adeiso) and River Nsakir (Pokuase), over a 12-month period (January to December 2009) are shown in Fig. 2. Mean monthly concentrations of Ammonia, Nitrate and Phosphate (mg l-1) of the water of River Densu and the tributaries over the 12-month sampling period are shown in Table 1. Similarly, mean monthly concentrations of Silica (mg l-1) over the sampling period are shown in Fig. 3. The pH ranged from 7.21 to 8.57. The ranges of the concentrations of the principal nutrients, in mg l-1, were: Ammonia, 0.38-0.69; Nitrate, 0.43-0.87; Phosphate, 0.27-0.53. Silica concentrations at the ten sampling stations ranged from 13.0 to 24.1 mg l-1 (Table 2). The data in Table 2 illustrate the total phytoplankton cell numbers at the ten sampling sites in one year, from January to December. The Chlorophyta (green algae) dominated the flora, with 23 genera identified. The second largest group, the Bacillariophyta (diatoms), was represented by 22 genera, and the smallest number of 12 genera represented the Cyanophyta (blue-green algae). On the other hand, the diatoms constituted the smallest percentage of the total number of cells, and the blue-greens the largest percentage. The algae were generally very low in numbers at seven of the ten sampling sites. Some genera of these groups were consistent and were encountered throughout the sampling period. These included the blue-green algae Anabaena, Anacystis and Oscillatoria; the green algae Ankistrodesmus, Pediastrum, Scenedesmus and Ulothrix; and the diatoms Asterionella, Cyclotella, Fragilaria, Navicula and Synedra. The genera varied with the sampling sites. With regard to the diatoms, temperature, pH, ammonia, phosphate and silicon had p-values of less than 0.05. Table 2 shows that the diatoms constituted the smallest percentage of the total number of cells; this also held over the whole period of the investigations.
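The correlation analysis relating algal genera to the physico-chemical parameters can be sketched with a plain Pearson product-moment coefficient, computed here from scratch. The monthly series below are fabricated for illustration only; they are not the measured Densu data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated 12-month series for illustration only (not the paper's data):
silica = [13.0, 14.2, 15.1, 16.0, 17.3, 18.1, 19.0, 20.2, 21.5, 22.4, 23.0, 24.1]
diatoms = [40, 44, 47, 52, 55, 60, 63, 70, 74, 78, 83, 90]

r = pearson_r(silica, diatoms)
print(round(r, 3))  # close to +1 for this monotonically paired pair of series
```

A significance test of r against zero (e.g. a t-test with n − 2 degrees of freedom) is what yields the p-values quoted above.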
Discussion
Livingstone (1963) concluded that the dominance of diatoms in West African rivers is due to the presence of silica as the most abundant cation. Silica values recorded for River Densu are similar to those reported by Biswas (1968), Egborge (1971) and Rai (1974) for River Volta, Ghana (18.9 mg l-1), River Oshun, Nigeria (10.0-26.9 mg l-1) and River Bandama, Cote d'Ivoire (11.1 mg l-1), respectively. The concentration in River Densu and the two tributaries was, therefore, adequate to support growth of diatoms. There was an initial increase in diatoms from May to July 1986, followed by a gradual decline to September 1986, and the population thereafter increased steadily to its maximum in mid-December 1986. The Chlorophyta were the second most abundant group, constituting 20 per cent of the phytoplankton population (Egborge, 1974). The population of Desmidiaceae increased from October 1968 to February 1969, when they attained a peak of 320 per 0.1 m3, followed by a decline in numbers from May 1969. The Cyanophyta virtually disappeared from May to September 1968 and then reappeared in October, attaining a peak of 165 per 0.1 m3 in February 1969. The desmids and Cyanophyta combined accounted for less than 10 per cent of the phytoplankton population (Egborge, 1974).
The Cyanophyta genera were dominated by Anabaena, Anacystis and Oscillatoria, and the Chlorophyta by Ankistrodesmus, Spirogyra and Ulothrix. These genera contributed substantially to the high numbers of cells of the blue-greens and greens at Manhean, Machigeni and Weija.
Comparing the sizes of the water body at the different sections of River Densu, it is reasonable to conclude that the large size of the water body at Manhean, Machigeni and Weija, and the drastic reduction of unidirectional flow there, led to a sudden increase in the density of the phytoplankton algae. Indeed, the Machigeni and Weija collecting points were located in Weija Lake, and the Manhean sampling point close to the lake, and these sites thereby had more standing water than other parts of the rivers.
The physico-chemical factors of the river may have, individually or collectively, also caused the uneven phytoplankton distribution, bearing in mind that it is often impossible to explain changes in algal abundance on the basis of the actions of a single ecological factor.
The mean temperatures of the water of the tributaries of River Densu determined by Akpabli and Drah (2001) (26.1 °C for River Adaiso, 28.3 °C for River Doblo, 20.5 °C for River Kuia and 27.0 °C for River Obopeko) were almost within the range obtained during this work. The water temperature would be expected to remain favourable for phytoplankton growth throughout the year at all locations. Also, their pH range of 7.12 to 8.57 is ideal for phytoplankton growth. Photosynthetic activity of the high algal cell population might have been responsible for the increased values of pH.
Phosphate concentrations along River Densu and the two tributaries ranged from 0.2 to 0.5 mg l-1, lower than the concentrations of 1.04, 0.6 and 1.185 mg l-1 recorded for River Bandama, Cote d'Ivoire (Rai, 1974), River Black Volta, Ghana (Biswas, 1968) and the River Niger, Kainji area, Nigeria (Imevbore and Visser, 1969), respectively. Rivers Densu, Adaiso and Nsakir therefore had sufficient amounts of nitrogen and phosphorus to support phytoplankton growth. On the basis of the concentrations of nitrate and phosphate encountered in Rivers Adaiso and Nsakir at Adeiso and Pokuase, it appears that the tributaries of River Densu do not unduly enrich it.
It is reasonable to expect changes in the concentrations of these nutrients from time to time. The use of different amounts of fertilizers in the catchment area and the oxidation of ammonia will affect the nitrogen and phosphate levels. Thus, the nitrogen supplied to River Densu will be derived partly from fertilizers and partly from nitrogen-fixing systems in the soil. The four tributaries studied by Akpabli and Drah (2001) showed very high concentrations of phosphate, with an average of 3.76 mg l-1. Rivers Kuia and Obopeko registered the highest levels of 6.40 and 6.39 mg l-1, respectively. Their nitrate levels were lower than those of phosphate. The nitrate concentrations of Rivers Adaiso, Doblo, Kuia and Obopeko were 0.14, 0.30, 0.57 and 1.32 mg l-1, respectively. They acquired their nitrate and phosphate loads from agricultural runoff from the numerous commercial farms along their banks.
Many algae utilize ammonia nitrogen, and its presence in River Densu and its two tributaries is valuable. In fact, Moss (1973), in an experimental study, demonstrated that many freshwater algae, including Cosmarium botrytis, Haemotococcus droebakensis, Pandorina morum, Pediastrum duplex and Volvox aureus, grew at the same rate in nitrate and ammonium media. In exceptional cases, some, for example Chlamydomonas reinhardii and Euglena gracilis, which used ammonium for growth, did not grow at all in the nitrate medium.
The Machigeni and Weija sampling sites, located in the Weija Lake, provided quite a different habitat, which explains the relatively high concentrations of nutrients. The lake is inhabited by numerous flowering plants and ferns. Decomposition of parts or entire dead plants of the aquatic vegetation will contribute substantial nutrients to the water. Furthermore, the report of Gaudet (1974) that standing macrophytes excrete organic matter into the water is also relevant here. It was not possible during this investigation to analyse the water for all the ions often included in studies of this sort, for lack of the requisite chemicals for this extended work.
It can be suggested that the factors which control the occurrence of freshwater free-floating algae generally occur in the Weija Lake, where the Machigeni and Weija sampling sites were located. One very significant ecological observation of the present investigations was the distribution of the phytoplankton population along the River Densu. Table 3 provides the data to show how the course of the river was demarcated into two very contrasting sections, viz., Akwadum to Ashalaja with very low mean monthly phytoplankton populations, and Manhean with very high populations. The massive population of phytoplankton recorded at Manhean, Machigeni and Weija is a source of feed for fish, which is naturally supplemented by the abundant organic matter of the macrophytes. Indeed, Manhean and Machigeni are important fish landing sites for the thriving Tilapia fishing industry of the inhabitants of the villages in the vicinity of the three sampling sites.
Conclusion
This paper is mainly about the population density and productivity of phytoplankton in River Densu and the tributaries, Rivers Adaiso and Nsakir, as well as the factors that determine the rise and fall of the population. The study has shown that seasonal occurrence is not determined by a single factor, and it was thus difficult to evaluate the effects of each environmental factor separately. As more knowledge on the physical, chemical and biological factors accumulates, it will be possible to understand the combined effects of the numerous factors. Those aspects which could not be covered in the present studies could be considered for future investigations.
Fig. 1: Map of the Densu River and two tributaries showing the sampling sites (legend: zones of the Densu Basin and sampling sites)
Fig. 2: Mean monthly Temperature and pH of the water of River Densu and the tributaries, River Adeiso (Adeiso) and River Nsakir (Pokuase), over a 12-month period
Fig. 3: Mean monthly concentrations of Silica (mg l-1) of the water of River Densu and the tributaries Adeiso (Adeiso) and Nsakir (Pokuase) over the sampling period
Universality of Generalized Parton Distributions in Light-Front Holographic QCD
The structure of generalized parton distributions is determined from light-front holographic QCD up to a universal reparametrization function $w(x)$ which incorporates Regge behavior at small $x$ and inclusive counting rules at $x \to 1$. A simple ansatz for $w(x)$ which fulfills these physics constraints with a single parameter results in precise descriptions of both the nucleon and the pion quark distribution functions in comparison with global fits. The analytic structure of the amplitudes leads to a connection with the Veneziano model and hence to a nontrivial connection with Regge theory and the hadron spectrum.
INTRODUCTION
Generalized parton distributions (GPDs) [1][2][3] have emerged as a comprehensive tool to describe the nucleon structure as probed in hard scattering processes. GPDs link nucleon form factors (FFs) to longitudinal parton distributions (PDFs), and their first moment provides the angular momentum contribution of the nucleon constituents to its total spin through Ji's sum rule [2]. The GPDs also encode information on the three-dimensional spatial structure of hadrons: the Fourier transform of the GPDs gives the transverse spatial distribution of partons in correlation with their longitudinal momentum fraction x [4].
Since a precise knowledge of PDFs is required for the analysis and interpretation of the scattering experiments in the LHC era, considerable efforts have been made to determine PDFs and their uncertainties by global fitting collaborations such as MMHT [5], CT [6], NNPDF [7], and HERAPDF [8]. Lattice QCD calculations use different methods, such as the path-integral formulation of the deep-inelastic scattering hadronic tensor [9][10][11], the inversion method [12,13], quasi-PDFs [14][15][16][17][18], pseudo-PDFs [19,20] and lattice cross-sections [21], to obtain the x-dependence of the PDFs. The current status and challenges for a meaningful comparison of lattice calculations with the global fits of PDFs can be found in [22].
There has been recent interest in the study of parton distributions using the framework of light-front holographic QCD (LFHQCD), an approach to hadron structure based on the holographic embedding of light-front dynamics in a higher dimensional gravity theory, with the constraints imposed by the underlying superconformal algebraic structure [23][24][25][26][27][28][29]. This effective semiclassical approach to relativistic bound-state equations in QCD captures essential aspects of the confinement dynamics which are not apparent from the QCD Lagrangian, such as the emergence of a mass scale λ = κ 2 , a unique form of the confinement potential, a zero mass state in the chiral limit: the pion, and universal Regge trajectories for mesons and baryons.
Various models of parton distributions based on LFHQCD use as a starting point the analytic form of GPDs found in Ref. [53]. This simple analytic form incorporates the correct high-energy counting rules of FFs [54,55] and the GPD's t-momentum transfer dependence. One can also obtain effective light-front wave functions (LFWFs) [28,56] which are relevant for the computation of FFs and PDFs, including polarization dependent distributions [44,45,48]. LFWFs are also used to study the skewness ξ-dependence of the GPDs [42,46,49,51,52], and other parton distributions such as the Wigner distribution functions [36,39,44]. The downside of the above phenomenological extensions of the holographic model is the large number of parameters required to describe simultaneously PDFs and FFs for each flavor.
Motivated by our recent analysis of the nucleon FFs in LFHQCD [57], we extend here our previous results for GPDs and LFWFs [53,56]. Shifting the FF poles to their physical location [57] does not modify the exclusive counting rules but modifies the slope and intercept of the Regge trajectory, and hence the analytic structure of the GPDs which incorporates the Regge behavior. As a result, the x-dependence of PDFs and LFWFs is modified. Furthermore, the GPDs are defined in the present context up to a universal reparametrization function; therefore, imposing further physically motivated constraints is necessary.
GPDs IN LFHQCD
In LFHQCD the FF for arbitrary twist τ is expressed in terms of Gamma functions [28,53], an expression which can be recast in terms of the Euler Beta function B(u, v) = Γ(u)Γ(v)/Γ(u + v) as F_τ(t) = (1/N_τ) B(τ − 1, 1/2 − t/(4λ)), where N_τ = B(τ − 1, 1/2) enforces the normalization F_τ(0) = 1. For fixed u and large v the Beta function behaves as B(u, v) ∼ Γ(u) v^{−u}; we thus recover the hard-scattering scaling behavior F_τ(t) ∼ (−t)^{−(τ−1)} [54,55]. The time-like poles of the FF are located at t = M²_n = 4λ(n + 1/2), n = 0, 1, 2, · · · , τ − 2, corresponding to the ρ vector meson and its radial excitations. Notice that the Beta function in (1) can be rewritten as B(τ − 1, 1 − α(t)) in terms of the ρ Regge trajectory α(t) = α(0) + α′ t, with slope α′ = 1/(4λ) and intercept α(0) = 1/2. This expression is identical to the Veneziano amplitude [58] in the t-channel. In the s-channel it leads to a fixed pole, 1 − α(s) → τ − 1, since no resonances are formed [59]. The shift in the pole structure [28] incorporated in Eq. (1) thus yields the leading Regge trajectory for the ρ meson (5).
Writing the flavor FF in terms of the valence GPD at zero skewness, F^q(t) = ∫₀¹ dx H^q_v(x, t), with H^q_v(x, t) = q_τ(x) exp[t f(x)] (7), Eqs. (1) and (2) imply that the twist-τ PDF, q_τ(x), and the profile function f(x) are q_τ(x) = (1/N_τ) [1 − w(x)]^{τ−2} w(x)^{−1/2} w′(x) (8) and f(x) = (1/(4λ)) log(1/w(x)) (9). Therefore, q(x) and f(x) in (7) are both determined from (8) and (9) in terms of the arbitrary reparametrization function y = w(x), which satisfies w(0) = 0, w(1) = 1, w′(x) ≥ 0 (10), and is monotonically increasing in the interval 0 ≤ x ≤ 1. The simplest choice for w(x) with conditions (10) is w(x) = x, which yields q_τ(x) ∼ x^{−1/2} at small x, the Regge theory motivated ansatz for small-x given in Ref. [60]. We therefore impose the constraint w′(0) ≠ 0 (12) to incorporate the small-x Regge behavior in the GPDs.
To study the behavior of w(x) at large-x we perform a Taylor expansion near x = 1, w(x) = 1 − w′(1)(1 − x) + (1/2) w″(1)(1 − x)² + · · · (13). Upon substitution of (13) in (8) we find that the leading term in the expansion behaves as (1 − x)^{τ−2} unless the derivative w′(1) vanishes. Imposing w′(1) = 0 and w″(1) ≠ 0 (14), we find q_τ(x) ∼ (1 − x)^{2τ−3}, which is precisely the perturbative QCD (pQCD) inclusive hard counting rule for large-x [61][62][63]. From Eq. (9) it follows that the conditions (14) are equivalent to f(1) = 0 and f′(1) = 0. Since log(1/x) ∼ 1 − x for x ∼ 1, the simplest ansatz for f(x) consistent with (10), (12) and (14) is f(x) = (1/(4λ)) [(1 − x) log(1/x) + a (1 − x)²] (16), with a being a flavor independent parameter. From (9) this corresponds to w(x) = x^{1−x} e^{−a(1−x)²} (17), an expression which incorporates Regge behavior at small-x and inclusive counting rules at large-x.
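The reparametrization property can be probed numerically: for any admissible w(x), the change of variables u = w(x) reduces the normalization integral of the twist-τ PDF to B(τ − 1, 1/2)/N_τ = 1. The sketch below checks this for the w(x) ansatz quoted above; the substitution x = s², used to tame the x^{−1/2} endpoint, and the sample values of τ and a are implementation choices, not values from the text.

```python
import math

def w(x: float, a: float) -> float:
    """Reparametrization ansatz w(x) = x^(1-x) * exp(-a (1-x)^2)."""
    return x ** (1.0 - x) * math.exp(-a * (1.0 - x) ** 2)

def w_prime(x: float, a: float) -> float:
    """d/dx of w(x), via the logarithmic derivative of log w."""
    return w(x, a) * (-math.log(x) + (1.0 - x) / x + 2.0 * a * (1.0 - x))

def q_tau(x: float, tau: float, a: float) -> float:
    """Twist-τ PDF q_τ(x) = (1/N_τ) [1-w]^(τ-2) w^(-1/2) w'."""
    n_tau = math.gamma(tau - 1) * math.gamma(0.5) / math.gamma(tau - 0.5)
    return (1.0 - w(x, a)) ** (tau - 2) * w(x, a) ** -0.5 * w_prime(x, a) / n_tau

def norm(tau: float, a: float, n: int = 20000) -> float:
    """∫₀¹ q_τ(x) dx, with x = s² to regularize the x^(-1/2) endpoint."""
    total, h = 0.0, 1.0 / n
    for i in range(n):                      # midpoint rule in s
        s = (i + 0.5) * h
        total += 2.0 * s * q_tau(s * s, tau, a) * h
    return total

print(norm(3.0, 0.5))   # ≈ 1: w(x) only reparametrizes the integral
print(norm(4.0, 2.0))   # ≈ 1 again, independent of a and τ
```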
Nucleon GPDs
The nucleon GPDs are extracted from nucleon FF data [64][65][66][67][68] choosing specific x- and t-dependences of the GPDs for each flavor. One then finds the best fit reproducing the measured FFs and the valence PDFs. In our analysis of nucleon FFs [57], three free parameters are required: r, interpreted as an SU(6) breaking effect for the Dirac neutron FF, and γ_p and γ_n, which account for the probabilities of higher Fock components (meson cloud) and are significant only for the Pauli FFs. The hadronic scale λ is fixed by the ρ-Regge trajectory [28], whereas the Pauli FFs are normalized to the experimental values of the anomalous magnetic moments.
Helicity Non-Flip Distributions
Using the results from [57] for the Dirac flavor FFs, we write the spin non-flip valence GPDs for the u and d PDFs normalized to the valence content of the proton: ∫₀¹ dx u_v(x) = 2 and ∫₀¹ dx d_v(x) = 1. The PDF q_τ(x) and the profile function f(x) are given by (8) and (9), and w(x) is given by (17). Positivity of the PDFs implies that r ≤ 3/2, which is smaller than the value r = 2.08 found in [57]. We shall use the maximum value r = 3/2, which does not change significantly our results in [57]. The PDFs (18) and (19) are evolved to a higher scale µ with the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation [69][70][71] in the MS scheme using the HOPPET toolkit [72]. The initial scale is chosen at the matching scale between LFHQCD and pQCD as µ0 = 1.06 ± 0.15 GeV [73] in the MS scheme at next-to-next-to-leading order (NNLO). The strong coupling constant α_s at the scale of the Z-boson mass is set to 0.1182 [74], and the heavy quark thresholds are set with MS quark masses as m_c = 1.28 GeV and m_b = 4.18 GeV [74]. The PDFs are evolved to µ² = 10 GeV² at NNLO to compare with the global fit by the NNPDF Collaboration [75], as shown in Fig. 1. The value a = 0.507 ± 0.034 is determined from the first moment of the GPD, ∫₀¹ dx x H^q_v(x, t = 0) = A_q(0), from NNPDF3.0 [75]. The model uncertainty (red band) includes the uncertainties in a and µ0. The t-dependence of H^q(x, t) is illustrated in Fig. 2. Since our PDFs scale as q(x) ∼ x^{−1/2} for small-x, the Kuti-Weisskopf behavior for the non-singlet structure functions is satisfied.
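For orientation, the quoted α_s(M_Z) and heavy-quark thresholds can be run down to the matching region with a simple one-loop approximation. The text's actual evolution is NNLO via HOPPET; the one-loop running and the trivial threshold matching below are deliberate simplifications for illustration only.

```python
import math

def run_alpha_s_1loop(alpha, mu_from, mu_to, nf):
    """One-loop running: 1/αs(μ) = 1/αs(μ0) + (β0/2π) ln(μ/μ0), β0 = 11 - 2nf/3."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 1.0 / (1.0 / alpha + beta0 / (2.0 * math.pi) * math.log(mu_to / mu_from))

def alpha_s(mu, alpha_mz=0.1182, mz=91.19, mc=1.28, mb=4.18):
    """Run αs from M_Z down to μ < m_c, switching nf at the b and c thresholds."""
    a = run_alpha_s_1loop(alpha_mz, mz, mb, nf=5)   # 5 flavors above m_b
    a = run_alpha_s_1loop(a, mb, mc, nf=4)          # 4 flavors between thresholds
    return run_alpha_s_1loop(a, mc, mu, nf=3)       # 3 flavors below m_c

# The coupling grows toward the nonperturbative matching scale ~1 GeV.
print(alpha_s(1.06))
```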
Helicity-Flip Distributions
The spin-flip GPDs E^q(x, t) = e_q(x) exp[t f(x)] follow from the flavor Pauli FFs in [57], given in terms of twist-4 and twist-6 contributions normalized to the flavor anomalous magnetic moment, ∫₀¹ dx e_q(x) = χ_q, with χ_u = 2χ_p + χ_n = 1.673 and χ_d = 2χ_n + χ_p = −2.033. The factors γ_u and γ_d are fixed by the higher Fock probabilities γ_p,n, which represent the large distance pion contribution and have the values γ_p = 0.27 and γ_n = 0.38 [57]. Our results for E^q(x, t) are displayed in Fig. 2. We use Ji's sum rule [2], J^q = (1/2) ∫₀¹ dx x [H^q_v(x, t = 0) + E^q_v(x, t = 0)], to compute the nonperturbative contribution to the total spin of the nucleon. We compare our results for J^q in TABLE I, at the initial scale µ0 = 1.06 ± 0.15 GeV, with model fits constrained by nucleon FFs [66,68] and lattice simulations [77][78][79].
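Ji's sum rule, as quoted above, is a simple first-moment integral, so a numerical sketch is straightforward. The polynomial profiles below are toy stand-ins for H and E, chosen only so the moments are known in closed form; they are not the model distributions of the text.

```python
def trapezoid(f, n=10000):
    """∫₀¹ f(x) dx by the composite trapezoid rule."""
    h = 1.0 / n
    total = 0.5 * (f(0.0) + f(1.0))
    for i in range(1, n):
        total += f(i * h)
    return total * h

# Toy stand-ins (placeholders, not the LFHQCD distributions):
H = lambda x: 30.0 * x**2 * (1.0 - x)**2
E = lambda x: 12.0 * x * (1.0 - x)**2

# Ji's sum rule: J_q = (1/2) ∫₀¹ dx x [H_q(x, t=0) + E_q(x, t=0)]
J = 0.5 * trapezoid(lambda x: x * (H(x) + E(x)))
print(J)  # analytically 0.5 * (0.5 + 0.4) = 0.45 for these toy profiles
```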
The pion PDFs are evolved to µ² = 27 GeV² at next-to-leading order (NLO) to compare with the NLO global analysis in [81,82] of the data [83]. The initial scale is set at µ0 = 1.1 ± 0.2 GeV from the matching procedure in Ref. [73] at NLO. The result is shown in Fig. 3, and the t-dependence of H^q(x, t) is illustrated in Fig. 4. We have also included the NNLO results in Fig. 3, to compare with future data analyses.
Our results are in good agreement with the data analysis in Ref. [81] and consistent with the NNPDF results through the GPD universality described here. There is, however, a tension with the data analysis in [82] for x ≥ 0.6 and with the Dyson-Schwinger results in [84], which have a (1 − x)² falloff at large-x. Our nonperturbative results fall off as 1 − x from the leading twist-2 term in (22).
CONCLUSION AND OUTLOOK
The results presented here for the GPDs provide a new structural framework for the exclusive-inclusive connection which is fully consistent with the LFHQCD results for the hadron spectrum. The PDFs are flavor-dependent and expressed as a superposition of PDFs q_τ(x) of different twist. In contrast, the GPD profile function f(x) is universal. Both q(x) and f(x) can be expressed in terms of a universal reparametrization function w(x), which incorporates Regge behavior at small-x and inclusive counting rules at large-x. A simple ansatz for w(x), which satisfies all the physics constraints, leads to a precise description of parton distributions and form factors for the pion and nucleons in terms of a single physically constrained parameter. In contrast with the eigenfunctions of the holographic LF Hamiltonian [28], the effective LFWFs obtained here incorporate the nonperturbative pole structure of the amplitudes, Regge behavior and exclusive and inclusive counting rules. The analytic structure of FFs and GPDs leads to a connection with the Veneziano amplitude (6) which could give further insights into the quark-hadron duality and hadron structure. The falloff of the pion PDF at large-x is an unresolved issue [85].

Form factors in light-front quantization can be written in terms of an effective single-particle density [86], F(Q²) = ∫₀¹ dx ρ(x, Q) (A.23), where ρ(x, Q) = 2π ∫₀^∞ db b J₀(bQ(1 − x)) |ψ_eff(x, b)|², with transverse separation b = |b⊥|. From (7) we find the effective LFWF (A.24) in the transverse impact space representation, with q_τ(x) and f(x) given by (8) and (9). The normalization is ∫₀¹ dx ∫ d²b⊥ |ψ_eff(x, b⊥)|² = 1, provided that ∫₀¹ dx q_τ(x) = 1. In the transverse momentum space | 2018-04-06T22:49:57.000Z | 2018-01-28T00:00:00.000 | {
"year": 2018,
"sha1": "61b1d94e92cda4ebbacf245f4b220329a395ed22",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.120.182001",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "61b1d94e92cda4ebbacf245f4b220329a395ed22",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
259482163 | pes2o/s2orc | v3-fos-license | A thin section micromorphology photomicrographs dataset of the infilling of the Sennacherib Assyrian canal system (Kurdistan Region of Iraq)
Here we present a compendium of 212 photographs of archaeological soils and sediments thin sections (micrographs) from the backfill of the Sennacherib Assyrian canal system of Northern Mesopotamia. The micrographs were produced using an optical petrographic microscope (Olympus BX41) mounting a digital camera (Olympus E420) for image acquisition. The dataset is composed of two folders containing (1) every micrograph in full resolution JPEG, and (2) a PDF file with scale bars and brief captions for each one. The dataset represents a photographic comparison collection for individuals working on similar geoarchaeological contexts and can be used for the composition of figures in novel publications, as well as being the first example of a published large compendium for shared use in the field of archaeology.
© 2023 The Author(s)
Value of the Data
• The dataset is useful because it is a large photographic compendium of microscopic natural and anthropogenic diagnostic micro-pedofeatures from a geographic and archaeological context that has received limited geoarchaeological attention despite the prolific archaeological endeavors in the region.
• Other geoarchaeologists and archaeological scientists may benefit from the dataset, as it provides graphic reference for many identified, explained and dated [1] natural and anthropogenic features and processes connected with the use, abandonment and repurposing of the Assyrian canals of Northern Mesopotamia.
• Data can be reused as a mere study reference or as a source of material for the composition of novel images in original manuscripts, textbooks, and teaching courses.
Objective
This dataset accompanies research that explores the complexity of the socioeconomic transformations of the human communities in Northern Mesopotamia. Within the project, geoarchaeological fieldwork is carried out to understand the processes involved in the natural and anthropogenic transformations of the landscape [1] . Among these, the creation of extensive canal systems is the most prominent [2] .
In this context, the published research [1] tackles the paleoenvironmental significance of the infilling of some stretches of the Sennacherib canal system. Therein, it is shown how every stratigraphic feature is the outcome of specific processes tied both ways with climatic/environmental changes and major shifts in land use. Among the analyses employed for the study, thin section soil micromorphology had a prominent role in disclosing pedogenetic peculiarities such as, but not limited to, hydromorphism, bioturbation, colluviation, and traces of pastoralism. Contextually, a large micrographs dataset was created. Here we present it, in order to provide graphic reference for individuals working on similar topics. We suggest that the sharing of photomicrograph datasets of archaeological soils and sediments will positively support archaeological micromorphological research, because existing atlases do not cover the great variability of observed pedofeatures.
Data Description
Data is organized in two main files, plus an additional .txt file containing instructions for navigating them.
File 1: Micrographs.zip
This contains the primary data. It is a .zip compressed folder containing full-resolution photographic JPEG files (micrographs) ( Fig. 1 A), with folder names corresponding to the name of each subset as they appear in the .pdf file (File 2) containing the interpretation of the data. Each micrograph measures 2560 × 1920 pixels with a resolution of 314 dpi and occupies approximately 1 MB of storage space. Every shot is presented with a Plane Polarized Light (PPL) and Cross-Polarized Light (XPL) version. PPL micrographs are characterized by a slightly yellow hue caused by the microscope's light source; we present them unmodified in order to avoid compression alteration.
File 2: Micrograph captions.pdf
This is a .pdf file containing scaled-down versions of each micrograph with name and scale bar, subdivided by Site and Stratigraphic Unit (SU) ( Fig. 1 B). Micrographs are presented in two columns, with each line representing the PPL and XPL version of the same shot. At the end of each group of micrographs belonging to a certain Site and SU, brief descriptions are provided to report highlights and notable features.
File 3: Read me.txt
This is a .txt file containing a plain text brief guide explaining how the two main files are organized and interlaced.
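Since every shot is stored as a PPL/XPL pair of JPEGs, a user of the dataset will typically want to group the files by shot before composing figures. The sketch below pairs files by a shared base name; the `<shot>_PPL.jpg` / `<shot>_XPL.jpg` naming convention used here is a hypothetical example, so the suffix parsing should be adapted to the actual file names in `Micrographs.zip`.

```python
import os
import tempfile

def pair_micrographs(folder):
    """Group micrograph JPEGs into (PPL, XPL) pairs keyed by shot name.

    Assumes names like '<shot>_PPL.jpg' / '<shot>_XPL.jpg' (an assumed
    convention; the real dataset's naming may differ)."""
    shots = {}
    for name in sorted(os.listdir(folder)):
        base, ext = os.path.splitext(name)
        if ext.lower() != ".jpg" or "_" not in base:
            continue
        shot, light = base.rsplit("_", 1)
        if light in ("PPL", "XPL"):
            shots.setdefault(shot, {})[light] = name
    return {s: (d.get("PPL"), d.get("XPL")) for s, d in shots.items()}

# Demo on a throwaway folder with mock file names.
with tempfile.TemporaryDirectory() as tmp:
    for fname in ("TellA_SU3_01_PPL.jpg", "TellA_SU3_01_XPL.jpg",
                  "TellA_SU3_02_PPL.jpg", "notes.txt"):
        open(os.path.join(tmp, fname), "w").close()
    pairs = pair_micrographs(tmp)
    print(pairs["TellA_SU3_01"])  # ('TellA_SU3_01_PPL.jpg', 'TellA_SU3_01_XPL.jpg')
    print(pairs["TellA_SU3_02"])  # ('TellA_SU3_02_PPL.jpg', None)
```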
Experimental Design, Materials and Methods
Soil samples destined for thin section micromorphology were collected during archaeological fieldwork carried out in the Kurdistan Region of Iraq. Stratigraphic sections cleared during archaeological excavations ( Fig. 2 ) were sampled for micromorphology according to the recorded stratigraphy, addressing notable features that required further investigation. Sampling was carried out by carving the stratigraphic sections to obtain undisturbed and oriented blocks of soil that were later destined for thin section manufacturing. This was carried out by Dr. Massimo Sbrana's "Servizi per la Geologia" laboratory (Piombino, Italy), following the resin consolidation, slicing, mounting and thinning procedure described by Murphy [3]. The finished thin section product is a 30 μm thick, 55 × 95 mm slice of consolidated soil mounted on a glass support and covered with a thin glass protection. The thin sections were observed employing an optical petrographic microscope (Olympus BX41) mounting a digital camera (Olympus E420) for image acquisition. Observation was carried out at various magnifications (20x, 40x, 100x, 400x) under Plane Polarized Light (PPL) and Cross-Polarized Light (XPL). Micrographs ( Fig. 1 A) were taken both as PPL and XPL shots whenever features such as mineral/organic macroscopic components and pedological/sedimentological figures were deemed potentially diagnostic of formation processes for each stratigraphic context that the thin section samples represented.
Captions contained in the "Micrograph captions.pdf" ( Fig. 2 B) of the produced dataset were created following the guidelines and terminology suggested by Stoops [4] , with the aid of the coloured atlases created by Nicosia & Stoops [5] , Verrecchia & Trombino [6] , and Stoops et al. [7] .
Ethics Statements
No human subjects, animal experiments or data collections from social media platforms were involved in the creation of this dataset. Archaeological fieldwork permits were issued by the General Directorate of Antiquities of the Kurdistan Regional Government, the Directorate of Antiquities of Dohuk, and the State Board of Antiquities and Heritage in Baghdad.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data Availability
A thin section micromorphology photomicrographs dataset of the infilling of the Sennacherib Assyrian canal system (Kurdistan Region of Iraq) (Original data) (Zenodo). | 2023-07-11T01:16:39.318Z | 2023-06-15T00:00:00.000 | {
"year": 2023,
"sha1": "b83d948e3ed76a27e3904eadb96cdc797170a341",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2023.109319",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4187ff53c7ada351be1b70a383c13a6477a6be10",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236772752 | pes2o/s2orc | v3-fos-license | Highly excited pure gauge SU(3) flux tubes
Flux tube spectra are expected to have full towers of levels due to the quantization of the string vibrations. We study a spectrum of flux tubes with static quark and antiquark sources with pure gauge $SU(3)$ lattice QCD in 3+1 dimensions up to a significant number of excitations. To go high in the spectrum, we specialize in the most symmetric case $\Sigma_g^+$, use a large set of operators, solve the generalized eigenvalue and compare different lattice QCD gauge actions and anisotropies.
Introduction -Motivation
So far only energy levels up to n = 2 have been published for the pure gauge QCD flux tube [1], as in Fig. 1. Can we go higher in the spectrum?
To understand why there is a spectrum, we notice that if we neglect the flux tube intrinsic width, it is equivalent to a quantum string. Flux tubes and strings have been studied for a long time.
• In 1911 Onnes discovered superconductivity, the Meissner effect was discovered in 1933.
• In 1935, Rjabinin and Shubnikov experimentally discovered the Type-II superconductors. In 1950, Landau and Ginzburg, then continued by Abrikosov arrived at superconductor vortices, or flux tubes.
• When confinement was proposed for quarks inside hadrons in 1964 by Gell-Mann and Zweig, the analogy with flux tubes also led to a literature explosion in the quantum excitations of strings.
• The proposal of QCD in 1973 By Gross, Wilczek and Politzer shifted the interest back to particles.
• Nevertheless, Lattice QCD, formulated by Wilson in 1974, was inspired by strings.
• The interest in strings returned in 1997 with Maldacena's and others' AdS/CFT correspondence.
• The AdS/CFT and Holography has also been used as a model to compute spectra in hadronic physics.
An approximation to the flux tube spectrum is given by Effective String Theories (EST), say the Nambu-Goto model, whose action is the area of the transverse bosonic string surface in time and space; it is classically equivalent to the Polyakov action, which introduces an auxiliary einbein field to remove the square root. Its spectrum for an open string with ends fixed at distance R with Dirichlet boundary conditions is given by the Arvis potential [2], V_n(R) = σ √(R² + (2π/σ)(n − 1/12)), where n labels the principal transverse modes, and the zero mode energy behind the −1/12 term is obtained in the continuum field theory computation using the Riemann Zeta regularization [3,4], which in lattice QCD is provided by the lattice regularization. The Coulomb approximation in Eq. (3), V₀(R) ≈ σR − π/(12R), the Lüscher term [5], can also be computed with a discretization of the string as in Fig. 2. However the Nambu-Goto model is certainly not the EST of QCD flux tubes.
• There is a wider class of EST, the Nambu-Goto is just one of the possible EST [6][7][8].
• Contrary to Nambu-Goto, the zero mode of the QCD flux tubes has no tachyon with negative square masses at small distances R.
• There is lattice QCD evidence for an intrinsic width of the QCD flux tube [9].
• The QCD flux tube has a rich structure in chromoelectric and chromomagnetic field densities [10,11].
Moreover, a quarkonium with flux tube excitations corresponds to an exotic, hybrid excitation of a meson. Thus we study the excited spectrum to learn more about the QCD flux tubes [12]. We restrict to Σ + flux tubes, the most symmetric ones, to go as high as possible in the spectrum.
Our lattice QCD framework for Σ + flux tubes
To compute the very excited spectrum, we use a large basis of spatial operators composed of generalized Wilson lines; an example of their spatial part is illustrated in Fig. 3.
It turns out that using operators embedded only in axis planes, shown in Fig. 4, we decrease the degeneracy of states in our spectrum. This suppresses the Λ states. Although we do not want to study them, due to the cubic symmetry of the lattice they may be generated by our operators as well.
The first step to compute the energy levels is to diagonalize the Generalized Eigenvalue Problem for the correlation matrix C_ij(t) = ⟨O_i(t) O_j†(0)⟩ of our operators, for each time extent t of the Wilson loop, and get a set of time dependent eigenvalues λ_n(t). With the time dependence, we study the effective mass plot and search for clear plateaux consistent with a constant energy in intervals t ∈ [t_ini, t_fin] between the initial and final time of the plateau. We use the anisotropic Wilson action [13] computed with plaquettes, where P = (1/3) Re Tr(1 − U_p), with the spatial plaquette terms weighted by the inverse of the (unrenormalized) anisotropy ξ and the spatial-temporal plaquette terms weighted by ξ. Moreover, to improve our signal we also resort to the improved anisotropic action developed in Ref. [14], in which the spatial and spatial-temporal terms include 2 × 1 rectangles in addition to plaquettes. The results with more excited states shown in the literature [1] have been obtained with this action. The anisotropy is used in order to have a smaller temporal lattice spacing a_t, to obtain more precise plateaux for excited energies, since we have more time slices for the same time intervals.
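The GEVP step can be illustrated with a synthetic two-state example: build C(t) from known energies and overlaps, solve det(C(t) − λ C(t₀)) = 0, and read effective energies off the eigenvalue decay. This is a toy sketch of the procedure with invented energies and overlaps, not the production analysis; for exactly two states the GEVP eigenvalues are λ_n(t) = exp[−E_n(t − t₀)] exactly.

```python
import math

def corr(t, E=(0.4, 0.9), Z=((1.0, 0.6), (0.5, -0.8))):
    """Synthetic 2x2 correlator C_ij(t) = Σ_n Z_i^n Z_j^n exp(-E_n t)."""
    return [[sum(Z[i][n] * Z[j][n] * math.exp(-E[n] * t) for n in range(2))
             for j in range(2)] for i in range(2)]

def gevp_eigs(A, B):
    """Eigenvalues of det(A - λB) = 0 for 2x2 matrices, via the quadratic formula."""
    a = B[0][0] * B[1][1] - B[0][1] * B[1][0]                 # det B
    b = -(A[0][0] * B[1][1] + A[1][1] * B[0][0]
          - A[0][1] * B[1][0] - A[1][0] * B[0][1])
    c = A[0][0] * A[1][1] - A[0][1] * A[1][0]                 # det A
    d = math.sqrt(b * b - 4.0 * a * c)
    return sorted(((-b + d) / (2 * a), (-b - d) / (2 * a)), reverse=True)

t0 = 1
for t in (3, 4):
    lam_t  = gevp_eigs(corr(t),     corr(t0))
    lam_t1 = gevp_eigs(corr(t + 1), corr(t0))
    E_eff = [-math.log(lam_t1[n] / lam_t[n]) for n in range(2)]
    print(E_eff)   # recovers [0.4, 0.9] for this exact two-state system
```

In a real analysis the correlator carries statistical noise, so the effective energies only exhibit plateaux over a window of t, which is exactly why the plateau search described above is needed.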
For the 4 ensemble and for the Wilson ensembles with anisotropy, we use MultiHit with 100 iterations in time, followed by Stout smearing in space with ρ = 0.15 and 20 iterations.
We use GPUs, and we find it is more economical to perform all our computations on the fly, rather than saving configurations. Our ensembles are summarized in Table 1.
Using only on-axis operators, we are then able to suppress the Λ = 4 degeneracy. At least for larger distances, we find up to n = 8 levels for the 4 ensemble, shown in Fig. 6.
At smaller distances, we are so far unable to avoid some degeneracy; possibly it is due to higher harmonics, since the 1, 3, 5 · · · harmonics also produce Σ + states.
Analysis of our Σ + spectrum
We cannot fit accurately the very high spectrum with the Nambu-Goto spectrum, whose energy levels are more compressed than ours. We thus generalize the Nambu-Goto model using two different string tensions, where σ₂ replaces σ inside the square root. With a global non-linear fit, shown in Fig. 7, we extract as well the renormalized anisotropy and the string tensions σ and σ₂. Parametrizing the deviation to the Nambu-Goto spectrum, the second string tension σ₂ is apparently slightly smaller than σ. We find a deviation, in Fig. 8, of up to 10% for the 4 ensemble. A deviation is also present for the other ensembles, albeit smaller in the 2 ensemble.
Conclusion and discussion
We compute the potentials for several new excitations of the pure SU(3) flux tubes produced by two static 3 and 3̄ sources, specializing in the radial excitations of the ground state Σ +. Using a large basis of operators, employing the computational techniques with GPUs of Ref. [10], and utilizing different actions with smearing and anisotropy, we go up to n = 8 excitations.
In general, the excited states of the Σ + flux tubes are comparable to the Nambu-Goto EST with transverse modes, depending only on the string tension and the radial quantum number n.
A detailed analysis shows a deviation of up to 10% from the excited spectrum of the Nambu-Goto model. We leave the confirmation of this deviation for future studies.
A subsequent feasible study is the computation of the widening of the different wavefunctions in the spectrum, comparing them to the zero mode widening.
An important outlook would be the study of hybrid quarkonium resonances. | 2021-08-03T01:16:13.458Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "04aecc971df1af0a0e592ba117fb95ab96b2c03e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "04aecc971df1af0a0e592ba117fb95ab96b2c03e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
260833126 | pes2o/s2orc | v3-fos-license | Experimental Investigation on the Flow Boiling of Two Microchannel Heat Sinks Connected in Parallel and Series for Cooling of Multiple Heat Sources
Cooling methods for multiple heat sources with high heat flux have rarely been reported, but such situations threaten the stable operation of electronic devices. Therefore, in this paper, the use of two microchannel heat sinks is proposed, with and without grooves, labeled Type A and Type B, respectively. Experimental investigations on the flow boiling of two microchannel heat sinks connected in parallel and in series are carried out under different mass fluxes. In addition, a high-speed camera is used to observe flow patterns in the microchannels. The cold plate wall temperature (Tw), heat transfer coefficient (HTC), and pressure drop (PD) are obtained with the use of two microchannel heat sinks. The flow patterns of the bubbly flow and elongated bubbles in the microchannels are observed. The results of the analysis indicated that the Tw, HTC, and PD of the two microchannel heat sinks connected in parallel were degraded, especially when using the Type A-B parallel connection. Compared to the use of a single heat sink, the maximum decrease in HTC was 9.44 kW/(m2K) for Type A heat sinks connected in parallel, which represents a decrease of 45.95%. The influence of the series connection on the Tw, HTC, and PD of the two heat sinks is obvious. The Type A-A series connection exerted the greatest positive effect on the performance of the two heat sinks, especially in the case of the postposition heat sink. The maximum increase in HTC was 12.77 kW/(m2K) for the postposition Type A heat sink, representing an increase of 72.88%. These results could provide a reference for a two-phase flow-cooling complex for multiple heat sources with high heat flux.
Introduction
From the chip level to the system level, there are an enormous number of high-power electronic modules that are widely used to perform multi-threaded tasks requiring high strength, such as in supercomputing [1,2], high-power lasers, automotive power battery packs [3], phased-array radar, etc. [4]. To ensure stability and reliability, the problem of cooling multiple heat sources with high heat flux needs to be addressed. As stated, eliminating high heat flux from multiple heat sources is important, but research on relevant strategies has rarely been reported [5]. At present, traditional air and liquid-flow cooling strategies are widely used to cool electronics [6]. Air cooling is typically used when the heat flux is lower than 50 W/cm². Liquid-flow cooling can be used to address heat flux reaching 100 W/cm², but with a greater degree of uniformity of temperature between the inlet and outlet [7,8]. Furthermore, the heat flux of highly integrated electronic devices can reach 1000 W/cm² [9]. The traditional cooling methods are not able to meet the heat dissipation needs of these high-heat-flux devices. Moreover, cooling problems for multiple heat sources with high heat flux are more challenging to overcome. Hence, there is a pressing need to develop and test advanced cooling techniques. One efficient method for solving high-heat-flux cooling problems is microchannel flow boiling [10,11].
Two-phase flow cooling has a higher heat transfer capacity and lower coolant flow velocity compared to single-phase flow [12,13]. This is because it mainly utilizes the latent heat of the coolant to absorb and carry away large amounts of heat. Due to the rapid evaporation of liquid, a large number of bubbles are produced, which are accompanied by variable and complex flow patterns [14,15]. In addition, the heat transfer properties of flow boiling in microchannels are strongly dependent on the flow pattern [16,17]; for example, the liquid-vapor flow distribution in the microchannels can affect the heat transfer capacity and pressure drop of flow boiling [18]. In addition, when flow boiling occurs in a long tube, there are several regions with increased superheat [10]. It has been reported that flow boiling processes are classically divided into five categories, including liquid convection, subcooled boiling, saturated boiling, transition boiling, and film boiling [19]. In addition, saturated boiling has the highest heat transfer capacity. However, as the superheating increases, there is a critical heat flux. When the heat flux exceeds the value of the critical heat flux, the heat transfer capacity of the flow boiling is suppressed. As a result, many methods have been tested with the aim of enhancing heat transfer and critical heat flux, such as modifying the geometrical structure of the microchannel.
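The advantage of latent-heat transport noted above can be made concrete with a back-of-the-envelope energy balance comparing the coolant flow needed sensibly versus by evaporation. All fluid properties and allowed limits below (specific heat, latent heat, temperature rise, exit quality) are assumed round numbers for illustration, not properties of the coolant used in this study.

```python
def mdot_single_phase(Q, cp, dT):
    """Mass flow (kg/s) to absorb Q (W) sensibly with temperature rise dT (K)."""
    return Q / (cp * dT)

def mdot_two_phase(Q, h_fg, x_out):
    """Mass flow (kg/s) to absorb Q (W) by evaporating to exit quality x_out."""
    return Q / (h_fg * x_out)

Q = 300.0       # W, matching the fixed heating load of this study
cp = 1100.0     # J/(kg·K) — assumed sensible heat capacity
h_fg = 130e3    # J/kg     — assumed latent heat of vaporization

m1 = mdot_single_phase(Q, cp, dT=10.0)    # sensible, 10 K rise allowed
m2 = mdot_two_phase(Q, h_fg, x_out=0.3)   # boiling to 30% exit quality
print(m1, m2, m1 / m2)  # two-phase needs several times less coolant flow
```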
Flow boiling in microchannel heat sinks has been widely studied, and the structure of the microchannels has been optimized to further enhance flow boiling [20,21]. Flow boiling in a copper foam fin microchannel heat sink was investigated experimentally by Fu et al. [20]. The results showed an 80% improvement in HTC and a 25% improvement in critical heat flux compared to the solid fin microchannel heat sink. Similarly, two microchannel heat sinks made of porous copper and solid copper were investigated by Yin et al. in flow boiling experiments [21], and the experimental results showed that the HTC of the porous open-microchannel was greatly improved compared with that for the solid copper open-microchannel heat sink. In addition, as reported by Li et al. [22], the critical heat flux was increased by 33.8~57.2% in a bidirectional counter-flow microchannel heat sink compared to the parallel-flow microchannel heat sink. A novel porous heat sink with reentrant microchannels was developed by Deng et al. [23], and the test results showed that the HTC value of the porous reentrant microchannels was 2-5 times higher than that of the solid copper microchannels. Furthermore, a bidirectional counter-flow microchannel heat sink was designed, and experimental study was performed [24,25]. The results showed that the average HTC of these novel microchannels was 33.5~62.0% higher than the traditional parallel-flow microchannels [24,25]. Subsequently, as reported by Jiang et al. [26], the counter-flow microchannels were modified with an expanding angle, and flow boiling was performed in this heat sink at a higher heat flux of 2677 kW·m⁻². Flow boiling has been extensively studied in microchannel heat sinks, and it is expected that it will be useful for cooling multiple heating sources with high heat fluxes.
Although flow boiling in microchannel heat sinks has been widely studied as a highly promising high-flux cooling method, only single-heat-source cooling has routinely been considered. Multiple-heat-source cooling problems are frequently encountered in advanced technology and urgently need to be solved. Xu et al. [27] fabricated a cold plate with four microchannel regions made of copper to cool four heat sources. The experimental results showed the temperature distribution of electronic devices with multiple heat sources, but the work focused on single-phase flow. However, single-phase flow cooling was not a better method for high heat flux dissipation. Conversely, Tan et al. [28] modeled heat sources as point sources using the Dirac Delta function, and then extended this to locate the positions of the multiple heat sources in order to optimize the heat pipe performance. Similarly, an improved quasi-dynamic multiple-heat-source model was developed by Dan et al. [7] with the aim of optimizing the temperature distribution of a vapor chamber with multiple heat sources. It can be seen that there is only a small number of studies considering multiple heat sources with a vapor chamber in mathematical modeling. Recently, Zhang et al. [5] considered flow boiling in parallel/tandem microchannel heat sinks for cooling multiple heat sources. The experimental results showed that the phase transition heat of the upstream heat sink had a strong effect on the downstream heat sink. The connection of heat sinks is important for flow boiling with the aim of cooling multiple heat sources with high heat flux, but there is a lack of research results that would help to better understand this cooling method. As discussed, studies on the cooling of multiple heat sources have rarely been reported, and more studies are needed to address the cooling problems related to multiple heat sources with high heat flux.
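The upstream/downstream coupling reported for tandem arrangements follows directly from a steady-state energy balance: in series, the outlet state of the first sink is the inlet state of the second, so the downstream sink receives coolant that is already partly vaporized. The sketch below propagates vapor quality through two sinks; the mass flow and latent heat are assumed placeholder values, not measured properties from the study.

```python
def exit_quality(x_in, Q, mdot, h_fg):
    """Vapor quality after a sink absorbing Q (W): x_out = x_in + Q/(ṁ·h_fg)."""
    return x_in + Q / (mdot * h_fg)

mdot = 0.01      # kg/s — assumed mass flow shared by the series loop
h_fg = 130e3     # J/kg — assumed latent heat of vaporization
Q1 = Q2 = 300.0  # W per sink, matching the fixed heating load

x1 = exit_quality(0.0, Q1, mdot, h_fg)   # upstream sink outlet quality
x2 = exit_quality(x1, Q2, mdot, h_fg)    # downstream sink inherits x1 at inlet
print(x1, x2)  # the downstream sink starts already partly in vapor
```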
The importance of removing high heat flux from multiple heat sources has been widely reported, but research into relevant strategies is rare. Therefore, in this study, a method is proposed based on two efficient rectangular radial expanding microchannel heat sinks with and without grooves, designed previously [29], which were labeled as Type A and Type B, respectively. Figure 1a presents the rectangular radial expanding microchannel heat sink assembled with a heat source. In addition, two cold plates with and without grooves are shown in Figure 1b,c. In the present work, two microchannel heat sinks are connected in either series or parallel, and share a circulating system in order to perform experimental studies on flow boiling at a fixed heating load of 300 W. The T_w, HTC, and PD characteristics are obtained for the two heat sinks under different connection modes with different mass fluxes. The main corresponding flow patterns were observed using a high-speed camera. In addition, these characteristics are analyzed and compared in order to understand flow boiling in the radial expanding microchannels. Furthermore, the performances of the two heat sinks in different connection modes are compared in order to determine the most efficient connection mode for multiple-heat-source cooling. The results are expected to provide a reference for solving the high-heat-flux and complex multiple-heat-source cooling problems, thus improving the reliability of highly integrated electronic devices.
Micromachines 2023, 14, x FOR PEER REVIEW
Experimental System and the Connection of Two Heat Sinks
The schematic diagram of the two-phase cooling system is presented in Figure 2a. The main experimental equipment includes a peristaltic pump, a flowmeter, a test section of the heat sink with the heat source, two thermostatic water baths, a reservoir, and a filter. In the test section, two heat sinks are connected in series or parallel to experimentally investigate flow boiling and to further analyze the performance of the two heat sinks. For the two heat sinks, the cold plate with annular grooves at the downstream microchannel was labeled Type A, while the one without grooves was labeled Type B, as shown in Figure 1b,c, respectively. In addition, details regarding the size of the two microchannel cold plates can be found in refs [30,31]. The two heat sinks connected in parallel and series are shown in Figure 2b,c, respectively. The locations at which the temperature and PD parameters of the two heat sinks connected in parallel are measured are indicated in Figure 2b: the inlet temperature of the working medium, T_inlet, the outlet temperatures of the working medium, T_1,out and T_2,out, the wall temperatures of the two cold plates, T_1 and T_2, and the PDs of the two heat sinks, PD_1 and PD_2. Similarly, the locations at which the parameters for two heat sinks connected in series are measured are marked in Figure 2c. Real-time temperature and PD data are collected at 1 s intervals using an Agilent data acquisition instrument. In addition, the location of the cold plate wall temperature measurement point is shown in Figure 2d; there is a groove with a depth of 1 mm in the back of the cold plate for welding the thermocouple measurement points. Furthermore, a high-speed camera is positioned directly above the heat sink in order to visually record the flow states, boiling bubbles, and flow patterns in the microchannels.
Experimental Procedures and Conditions
The two-phase cooling system is presented in Figure 2a. The two-phase flow medium from the heat sink flows directly into an open cooling box through a short pipeline to cool the liquid and condense the steam. In order to evaluate the T_w, HTC, and PD of the two heat sinks in different connection modes at ambient atmospheric pressure, the working medium was deionized pure water. The heating load was set to a constant value of 300 W. Each heating source was controlled separately using a voltage regulator and insulated with asbestos. A voltmeter and ammeter were used to measure the power in real time.
The water temperature at the inlet of the heat sink was kept constant at 88 °C (±0.2 °C) by a preheater, for which a thermostatic water bath was used. Two thermocouples were used to monitor the temperature at the inlet of the heat sink. In addition, 45 experimental cases were designed under different mass fluxes with different connection modes.
As shown in Table 1, for two heat sinks connected in parallel, the main pipe volume flow rate was varied from 0.12 L/min to 0.40 L/min in increments of 0.04 L/min, while the heating loads of the two heat sinks were kept constant at 300 W. In particular, the volume flow rates of the two heat sinks connected in parallel were adjusted to be consistent before the phase change. Meanwhile, for the two heat sinks connected in series, the volume flow rate was varied from 0.12 L/min to 0.32 L/min, and the heating load of each heat sink was 300 W. The mass flux, G, was calculated using Equation (1) on the basis of the volume flow rates listed in Table 1. The T_w, HTC, and PD characteristics of the two heat sinks were then experimentally investigated under different mass fluxes. The volume flow rate was measured before liquid boiling and was controlled by changing the rotational speed of the peristaltic pump; the heating loads were therefore turned off before setting the next volume flow rate. Owing to the delay in the increase in the cold plate wall temperature, it generally took about 4 min for the heat sink to reach the heat transfer balance. The heat sink was considered to have reached the heat transfer balance when the trend of T_w remained unchanged, or T_w was no longer increasing, and this balance had to last for at least 5 min. The temperature and PD data were then obtained by averaging over 4 min.
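The steady-state criterion above (T_w flat for at least 5 min, then averaging over 4 min of 1 s samples) can be sketched as follows; the 0.2 K drift tolerance and the synthetic temperature trace are assumptions for illustration only, not values from the experiments.

```python
# Sketch of the steady-state criterion: T_w is sampled at 1 Hz; the heat sink
# is treated as balanced when T_w drifts by no more than `tol_k` over a
# 5-minute window, after which the reported value is the mean over 4 minutes.
# The 0.2 K tolerance and the synthetic trace are assumptions for illustration.

def is_steady(tw_samples, window_s=300, tol_k=0.2):
    """True if the last `window_s` 1 Hz samples of T_w span at most tol_k."""
    if len(tw_samples) < window_s:
        return False
    window = tw_samples[-window_s:]
    return max(window) - min(window) <= tol_k

def steady_state_mean(tw_samples, average_s=240):
    """Average T_w over the last `average_s` seconds (4 min of 1 Hz data)."""
    window = tw_samples[-average_s:]
    return sum(window) / len(window)

# Synthetic trace: T_w rises toward ~104 and then holds nearly constant.
trace = [104.0 - 10.0 * 0.98 ** t for t in range(600)]
print(is_steady(trace))
print(round(steady_state_mean(trace), 2))
```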
Table 1.Forty-five experimental cases were designed at a fixed heating load of 300 W.
(Table 1 columns: Types; No.; Heat Sink Type; Volume Flow Rates and Mass Fluxes; cases grouped by connection mode, e.g., "In parallel". The individual case rows were not recovered.)
Calculated Parameters and Uncertainties
Firstly, the averaged mass flux, G, in the expanding microchannel was calculated using Equation (1).
where M is the mass flow rate, A_in is the total cross-sectional area of the channel entrance, at 3.21 × 10⁻⁵ m², and A_exit is the total cross-sectional area of the channel exit, at 2.88 × 10⁻⁴ m².
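Equation (1) can be illustrated with the sketch below, which converts a Table 1 volume flow rate into an averaged mass flux. It assumes the averaging uses the arithmetic mean of the inlet and exit cross-sections and a water density near the 88 °C inlet temperature, so the numbers are illustrative rather than a reproduction of Table 1.

```python
# Illustrative evaluation of Equation (1): averaged mass flux G = M / A_avg.
# Assumptions: A_avg is the arithmetic mean of the inlet and exit
# cross-sections, and the density is taken near the 88 °C inlet temperature;
# the paper's exact averaging and Table 1 values are not reproduced here.

A_IN = 3.21e-5    # total cross-section of the channel entrance, m^2
A_EXIT = 2.88e-4  # total cross-section of the channel exit, m^2
RHO = 966.8       # water density near 88 °C, kg/m^3 (assumed property value)

def mass_flux(volume_flow_l_per_min):
    """Averaged mass flux G in kg/(m^2*s) for a volume flow rate in L/min."""
    m_dot = volume_flow_l_per_min / 1000.0 / 60.0 * RHO  # mass flow, kg/s
    a_avg = 0.5 * (A_IN + A_EXIT)                        # mean cross-section, m^2
    return m_dot / a_avg

for q in (0.12, 0.40):
    print(f"{q:.2f} L/min -> G = {mass_flux(q):.2f} kg/(m^2*s)")
```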
Then, the heat flux, q_eff, was calculated using Equation (2), and the HTC of the heat sink was calculated on the basis of the average cold plate wall temperature using Equation (3).
where Q is the heating load, at 300 W, and A is the total efficient heat transfer area, which includes the base area and the fin area. The areas of the Type A and Type B heat sinks are 4.868 × 10⁻³ m² and 5.033 × 10⁻³ m², respectively. T_w is the cold plate wall temperature measured by the thermocouple sensor, and T_sat is the saturated temperature in the heat sink, represented here by the outlet temperature of the heat sinks.
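A minimal sketch of Equations (2) and (3), using the stated heating load and heat transfer areas; the wall and saturation temperatures in the usage example are placeholders, not measured values.

```python
# Illustrative evaluation of Equations (2) and (3): q_eff = Q / A and
# HTC = q_eff / (T_w - T_sat), using the stated 300 W load and heat transfer
# areas. The wall/saturation temperatures below are placeholders, not data.

Q_HEAT = 300.0       # heating load, W
A_TYPE_A = 4.868e-3  # efficient heat transfer area of Type A, m^2
A_TYPE_B = 5.033e-3  # efficient heat transfer area of Type B, m^2

def heat_flux(q_load, area):
    """Effective heat flux q_eff in W/m^2 (Equation (2))."""
    return q_load / area

def htc(q_load, area, t_wall, t_sat):
    """Heat transfer coefficient in W/(m^2*K) (Equation (3))."""
    return heat_flux(q_load, area) / (t_wall - t_sat)

print(f"Type A q_eff = {heat_flux(Q_HEAT, A_TYPE_A) / 1000:.2f} kW/m^2")
print(f"Type B q_eff = {heat_flux(Q_HEAT, A_TYPE_B) / 1000:.2f} kW/m^2")
# Placeholder temperatures: T_w = 104, T_sat = 100 (assumed, in deg C).
print(f"HTC (Type A) = {htc(Q_HEAT, A_TYPE_A, 104.0, 100.0) / 1000:.2f} kW/(m^2*K)")
```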
The heat loss of the heat sink was determined using the temperature increase for single-phase flow, as in Equation (4), where C_p is the specific heat capacity of water, at 4.2 × 10³ J/(kg·K), and M is the mass flow rate.
T_out is the liquid temperature at the outlet of the heat sink, and T_in is the inlet temperature of the liquid. All the measuring sensors were calibrated before testing. In addition, the uncertainty in the microchannel dimensions is ±0.01 mm. The uncertainty of the HTC was obtained using Equation (5).
The maximum measured heat loss was 5.68%, and the uncertainty of the calculated HTC was 7.68%. The uncertainties of the other measured parameters were obtained using the same calculation method and are listed in Table 2, together with the measuring ranges and errors.
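The heat-loss balance of Equation (4) and the HTC uncertainty of Equation (5) can be sketched as below; the single-phase flow and temperature values are placeholders, and Equation (5) is assumed to take the standard root-sum-of-squares form.

```python
import math

# Sketch of Equations (4) and (5): the single-phase heat balance used to
# estimate heat loss, and a root-sum-of-squares propagation assumed for the
# HTC uncertainty. The flow and temperature values are placeholders.

CP_WATER = 4.2e3  # specific heat capacity of water, J/(kg*K)

def heat_loss_fraction(q_load, m_dot, t_in, t_out):
    """Fraction of the heating load not absorbed by the single-phase flow."""
    absorbed = m_dot * CP_WATER * (t_out - t_in)  # M * C_p * (T_out - T_in)
    return (q_load - absorbed) / q_load

def htc_rel_uncertainty(rel_q, rel_dt):
    """Relative HTC uncertainty from heat-flux and (T_w - T_sat) terms."""
    return math.sqrt(rel_q ** 2 + rel_dt ** 2)

# Placeholder case: 1.93e-3 kg/s of water heated from 53 to 88 deg C.
print(f"heat loss fraction = {heat_loss_fraction(300.0, 1.93e-3, 53.0, 88.0):.4f}")
print(f"relative HTC uncertainty = {htc_rel_uncertainty(0.0568, 0.05):.4f}")
```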
Comparison with Existing Correlations
The experimental HTCs of the two Type A and Type B heat sinks, used individually, were compared to existing correlations under different mass fluxes. As listed in Table 3, the HTC correlations of Kandlikar [32], Fang and Zhou [33], and Fang and Wu [34] were used. These HTC correlations were originally developed for straight microchannels, so the flow rate and the hydraulic diameter in the expanding microchannels were calculated on the basis of the average. In addition, the HTC in the two expanding microchannel cold plates was calculated in three stages where the fins change. The experimental HTC was compared with the predicted HTC calculated using the three correlations. The calculation of the mean absolute deviation, MAD = (1/N) Σ |HTC_pred − HTC_exp| / HTC_exp × 100%, where N is the number of data points, is shown in Table 3. It can be seen that the MAD of the HTC between the experimental and the predicted results for a single Type A heat sink is within 30%, with the smallest MAD of 21.67% obtained using the correlation of Fang and Zhou. For a single Type B heat sink, the MAD is within about 20%, and the best was 6.00%, using Kandlikar's correlation. Therefore, based on existing HTC correlations, the experimental HTCs of the two heat sinks show a certain degree of accuracy under different mass fluxes at a heating load of 300 W.
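The MAD metric used above can be sketched as follows; the two HTC series are placeholder values, not data from Table 3.

```python
# Sketch of the mean absolute deviation (MAD) used to compare predicted and
# experimental HTC values. The two series below are placeholders, not data
# from Table 3.

def mad_percent(predicted, experimental):
    """MAD = (1/N) * sum(|h_pred - h_exp| / h_exp) * 100, in percent."""
    pairs = list(zip(predicted, experimental))
    return 100.0 * sum(abs(p - e) / e for p, e in pairs) / len(pairs)

h_exp = [12.0, 14.5, 16.0, 15.2]   # experimental HTC, kW/(m^2*K), placeholder
h_pred = [11.0, 15.0, 17.5, 14.0]  # predicted HTC, kW/(m^2*K), placeholder
print(f"MAD = {mad_percent(h_pred, h_exp):.2f} %")
```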
Results
Based on the above experimental steps, T_w, PD, HTC, and the corresponding flow patterns were obtained at different mass fluxes. Subsequently, these characteristic parameters were analyzed. The influences of the parallel and series connections of the two heat sinks on the values of T_w, PD, and HTC were quantified. In addition, the influence of the connection modes on Type A and Type B heat sinks was compared.
Results for Two Heat Sinks Connected in Parallel
Two Type A heat sinks connected in parallel are presented in Figure 3b, with the measurement parameters marked, where the subscripts 1 and 2 differentiate the measurement data for the two heat sinks. In Figure 3a, the solid curves indicate the temperature data for two Type A heat sinks connected in parallel, and the dotted line is the T_w of a single Type A heat sink. The abscissa of Figure 3a is the mass flux per heat sink. The volume flow rate of each heat sink was adjusted to be the same before liquid boiling; thus, the main pipe volume flow rate was evenly distributed. It can be seen that the T_w values of the two heat sinks connected in parallel are similar. When the mass flux is increased from 23.08 kg/(m²·s) to 57.67 kg/(m²·s), the change in T_w is small for both heat sinks. However, the T_w values for two heat sinks connected in parallel are higher than those for single heat sinks at the same mass flux. For example, the maximum temperature difference is 3.26 °C at a mass flux of 34.60 kg/(m²·s). In addition, the outlet temperature for two heat sinks connected in parallel first increases and later decreases with increasing mass flux. As analyzed, the T_w values for heat sinks utilizing the Type A-A parallel connection are degraded compared to those of single heat sinks.
The HTC values for two Type A heat sinks connected in parallel are shown in Figure 4a. It can be seen that the HTC for heat sinks connected in parallel first increases and later decreases, whereas the HTC for single heat sinks decreases with increasing mass flux. The HTC values for heat sinks connected in parallel are lower than those for single heat sinks under the same mass flux. As shown, the maximum difference in HTC values between single heat sinks and those connected in parallel is 9.17 kW/(m²·K), a decrease of 43.32%. As shown in Figure 4b, the difference in PD between two Type A heat sinks connected in parallel is small, and the maximum difference occurs at a mass flux of 51.90 kg/(m²·s). It can be seen that the PD values of two heat sinks connected in parallel increase slightly until the mass flux reaches 51.90 kg/(m²·s), and decrease sharply at the end. This may be because boiling is inhibited by the higher mass flux. The PD values for heat sinks connected in parallel are higher than those for single heat sinks at mass flux values of 34.60 and 46.14 kg/(m²·s). It can be seen that the HTC and PD values for two Type A heat sinks connected in parallel are degraded.
The change in the flow patterns of two heat sinks connected in parallel is shown in Figure 6. The change in the flow patterns reflects differences in the heat transfer mechanism under different mass fluxes. It can be seen that, when the mass flux increases from 23.08 kg/(m²·s) to 57.67 kg/(m²·s), the main flow patterns of two heat sinks connected in parallel correspond to elongated bubble flow, elongated bubble with liquid flow, and bubbly flow. This change in flow pattern indicates that the heat transfer mechanism has also changed, with thin liquid film evaporation occurring at a lower mass flux and bubble nucleation at a higher mass flux. Furthermore, the change in the heat transfer mechanism leads to a non-monotonic relationship between T_w and HTC and the mass flux. At lower mass flux, the heat transfer mechanism is the evaporation of a thin liquid film, which has a higher heat transfer capacity than the bubble nucleation mechanism [35]. Thus, T_w increases with increasing mass flux when the flow rates are low. Furthermore, evaporation decreases at high mass flux, which may lead to a decrease in the outlet temperature of the medium.
The HTC values of two heat sinks of Types A and B connected in parallel are shown in Figure 7a. It can be seen that the HTC values of the Type B heat sinks connected in parallel first increase and later decrease with increasing mass flux, while the HTC values for Type A heat sinks connected in parallel decrease slightly with increasing mass flux. The HTC values for two heat sinks connected in parallel are lower than those of single heat sinks under the same mass flux. In addition, the difference in the HTC values between single Type A heat sinks and those connected in parallel is more obvious. It can be seen that the maximum difference in HTC values between single Type A heat sinks and those connected in parallel is 9.44 kW/(m²·K) at a mass flux of 46.14 kg/(m²·s), a decrease of 45.95%. For Type B heat sinks, the maximum difference in HTC values between single heat sinks and those connected in parallel is 2.93 kW/(m²·K) at a mass flux of 57.67 kg/(m²·s). As shown in Figure 7b, the PD values for the two heat sinks connected in parallel fluctuate, with one increasing and the other decreasing under the same mass flux. The PD trend for two heat sinks connected in parallel shows a slight upward shift compared to single heat sinks. The PD for single heat sinks is lower than that of heat sinks connected in parallel at 34.60 and 51.90 kg/(m²·s). It can be seen that the HTC values for two heat sinks connected in parallel are degraded, along with PD, at low mass flux.
Boiling bubbles developed in the two heat sinks when utilizing a Type A-B parallel connection, as shown in Figure 8. The boiling bubbles grow and exit periodically in the downstream microchannels of the two heat sinks. As shown in the area marked by the red box, it takes approximately 270 ms for the bubbles to nucleate and grow to completely fill the microchannels at a heating load of 300 W and a mass flux of 46.14 kg/(m²·s). For the Type A heat sink connected in parallel, the bubble first grows in the downstream annular grooves and then extends into adjacent channels. Subsequently, the bubbles coalesce in the middle channel to form a large bubble that fully fills the microchannel. For the Type B heat sink, the bubbles nucleate in the downstream microchannels, and then coalesce first in the middle microchannels. Subsequently, the coalescent bubbles extend first downstream, and later upstream. The development of the boiling bubbles in the microchannels reflects the heat transfer state of the two heat sinks. The main flow patterns are bubbly flow and elongated bubbles in the microchannel. The main heat transfer mechanism is bubble nucleation, and liquid boiling evaporation absorbs heat for bubble growth.
Figure 9b presents a schematic diagram of two Type B heat sinks connected in parallel, on which the measured parameters are marked. The temperatures for the two heat sinks connected in parallel are shown in Figure 9a. It can be seen that the T_w values of the two Type B heat sinks are similar, and they show a slightly increasing trend with increasing mass flux. The values are higher than those for single heat sinks, with the maximum difference between the single Type B heat sinks and those connected in parallel being 3.78 °C. The outlet temperatures of the two heat sinks first increase and later decrease. This may be because boiling is inhibited by a higher mass flux. It can be seen that the T_w for two Type B heat sinks connected in parallel is degraded.
The HTC values of two Type B heat sinks connected in parallel are shown in Figure 10a.It can be seen that the HTC of heat sinks connected in parallel first increases and then decreases.The HTC values of the parallel heat sink are lower than those of single heat sinks under the same mass flux.It can be seen that the maximum HTC difference between the parallel and single heat sinks is 2.77 kW/(m 2 K) at a mass flux of 40.37 kg/(m 2 s).The decrease in the HTC is 24.72%.As shown in Figure 10b, the changing trend of PD B,2 in the heat sinks connected in parallel shows an upward trend, reaching a peak at a mass flux of 46.14 kg/(m 2 s), after which PD B,2 decreases slightly.Although the change trend of PD B,1 is fluctuant, the PD values in two Type B heat sinks connected in parallel are small, and the maximum value is about 0.27 kPa for PD B,1 .This may be because boiling is inhibited by the higher mass flux.The PD of heat sinks connected in parallel is higher than that of single heat sinks at mass flux values of 34.60 and 46.14 kg/(m 2 s), and the difference in PD between single Type B heat sinks and those connected in parallel is small.However, the PD of single Type B heat sinks is significantly higher than that of heat sinks connected in parallel.It can be seen that the HTC values for two Type B heat sinks connected in parallel are degraded, but the PD values of two Type B heat sinks connected in parallel decrease with high mass flux.Figure 9b presents a schematic diagram of two Type B heat sinks connected in parallel on which the measured parameters are marked.The temperatures for two heat sinks connected in parallel are shown in Figure 9a.It can be seen that Tw values of the two Type B heat sinks are similar, and they have a slightly increasing trend with the increase in mass flux.The values are higher than those for single heat sinks, with the maximum difference between the single Type B heat sinks and those connected in parallel being 3.78 °C.The outlet 
Figure 9b presents a schematic diagram of two Type B heat sinks connected in parallel, on which the measured parameters are marked. The temperatures for the two heat sinks connected in parallel are shown in Figure 9a. It can be seen that the Tw values of the two Type B heat sinks are similar, and they have a slightly increasing trend with increasing mass flux. The values are higher than those for single heat sinks, with the maximum difference between the single Type B heat sinks and those connected in parallel being 3.78 °C. The outlet temperatures of the two heat sinks first increase and later decrease. This may be because boiling is inhibited by a higher mass flux. It can be seen that the Tw for two Type B heat sinks connected in parallel is degraded. The PD of heat sinks connected in parallel is higher than that of single heat sinks at mass flux values of 34.60 and 46.14 kg/(m² s), and the difference in PD between single Type B heat sinks and those connected in parallel is small. However, the PD of single Type B heat sinks is significantly higher than that of heat sinks connected in parallel. It can be seen that the HTC values for two Type B heat sinks connected in parallel are degraded, but the PD values of two Type B heat sinks connected in parallel decrease at high mass flux.
Table 4 lists the mean values of Tw, HTC, and PD for two heat sinks connected in parallel at different flow rates. It can be seen from the mean Tw, HTC, and PD values for the two heat sinks that the performance of heat sinks is better when utilizing the Type A-A parallel connection compared to when utilizing the Type A-B and Type B-B parallel connections. In addition, the performance of the heat sink in the Type A-B parallel connection is the worst, where the Tw of the Type B heat sink is the highest and the HTC is the lowest. Therefore, when two heat sinks are connected in parallel, the two heat sinks should be of the same type in order to achieve a stable performance. In addition, as described above, the Tw, PD, and HTC values of heat sinks connected in parallel are different from those of individual heat sinks, because there are mutual influences affecting the two heat sinks connected in parallel. This may be related to the fluctuation of the mass flux when phase change occurs in the two heat sinks connected in parallel.
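The mean values reported in Table 4 are per-connection-mode averages of each metric over the tested flow rates. A minimal sketch of that aggregation is given below; the per-flow-rate samples are hypothetical placeholders, not the measured data from the paper.

```python
# Averaging per-flow-rate measurements into a Table-4-style summary.
# The numbers below are hypothetical placeholders, not the paper's data.

def mean(values):
    """Arithmetic mean of a list of measurements."""
    return sum(values) / len(values)

# Hypothetical per-flow-rate samples for one parallel connection mode:
tw_samples = [52.1, 53.0, 53.8]    # wall temperature Tw, degrees C
htc_samples = [14.2, 13.5, 12.9]   # heat transfer coefficient, kW/(m^2 K)
pd_samples = [0.41, 0.45, 0.47]    # pressure drop PD, kPa

summary = {
    "Tw_mean": mean(tw_samples),
    "HTC_mean": mean(htc_samples),
    "PD_mean": mean(pd_samples),
}
print(summary)
```

Comparing such per-mode summaries is what supports the ranking of the Type A-A, A-B, and B-B parallel connections.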
Results of Two Heat Sinks Connected in Series
There are four series connection modes for two Type A or B heat sinks connected in series, as listed in Table 1. The Tw values determined for the preposition and postposition heat sinks after testing and analysis are shown in Figure 11a,b, respectively. The solid curves represent the data corresponding to heat sinks connected in series, and the dotted lines belong to single heat sinks. As seen in Figure 11a, for the preposition heat sinks connected in series, the Tw in Type A heat sinks connected in series always shows an upward trend with increasing mass flux, while the Tw in Type B heat sinks connected in series fluctuates and even exhibits different change trends with different series connections. In addition, the Tw of single heat sinks first increases, until the mass flux reaches 69.20 kg/(m² s), and then sharply decreases. In contrast, all the Tw values of preposition heat sinks utilizing different series connections are higher than those of single heat sinks. This may be because the flow boiling state in the two cold plates is affected by the series connection, increasing, for example, the pressure in the preposition heat sinks. Furthermore, all of the Tw values of Type A heat sinks are lower than those of the Type B heat sinks connected either in series or individually. The reason for this is that the grooves in the channels are beneficial for the outflow of large bubbles from the channel. The analysis showed that the difference in the value of Tw between the single Type A heat sinks and those connected in series is lower than that of Type B heat sinks. For preposition heat sinks connected in series, a lower Tw value was found for the Type A-A series connection compared to the other series connection modes. However, the lowest value of Tw observed for preposition heat sinks connected in series is higher than that of single heat sinks. The Tw of two preposition heat sinks connected in series is degraded.
As shown in Figure 11b, for postposition Type A heat sinks connected in series, the Tw exhibits an upward trend with increasing mass flux in the Type A-A connection mode, while the Tw first decreases slightly, until a mass flux of 46.14 kg/(m² s) is reached, and then increases sharply for the Type B-A connection mode. For postposition Type B heat sinks connected in series, the Tw gradually increases until the mass flux reaches 69.20 kg/(m² s), and then it decreases slowly until the end in the Type B-B connection mode, while the Tw first decreases slightly until the mass flux reaches 69.20 kg/(m² s), before increasing slightly until the end with the Type A-B connection mode. In contrast, the majority of the Tw values of Type A heat sinks connected in series are higher than those of single heat sinks. Only at mass flux values of 57.67 kg/(m² s) and 69.20 kg/(m² s) are the Tw values of heat sinks in the Type A-A connection mode lower than those of single heat sinks. The majority of Tw values of postposition Type B heat sinks are higher than those of Type A heat sinks. The majority of Tw values of postposition Type B heat sinks in series are similar to those of single heat sinks, or even lower. For postposition heat sinks connected in series, a lower Tw value is obtained for the Type A-A series connection compared to the other series connection modes. In addition, there are some Tw values for postposition heat sinks connected in series that are lower than those of single heat sinks. It can be seen that it was possible to optimize the Tw of postposition heat sinks connected in series.
Furthermore, for the HTC of two heat sinks utilizing the four series connection modes, the HTC values of preposition and postposition heat sinks are shown in Figure 12a,b, respectively. As shown in Figure 12a, for preposition Type A heat sinks connected in series, the HTC of the heat sink in the Type A-B connection mode shows a downward trend with increasing mass flux, while the HTC of the heat sink in the Type A-A connection mode decreases slowly first, and then increases slightly towards the end. For preposition Type B heat sinks connected in series, the HTC values of the heat sink in the Type B-A and Type B-B connection modes show a downward trend with increasing mass flux. In contrast, the majority of HTC values in preposition Type B heat sinks connected in series are lower than those of single heat sinks. However, the majority of HTC values of Type A heat sinks are higher than those of single heat sinks. In addition, all the HTC values of Type A heat sinks are higher than those of Type B heat sinks with different mass fluxes. The grooves in the channel cause the differences in the performance of the two heat sinks. It can be seen that the difference in HTC between the single heat sinks and those connected in series is small. For preposition heat sinks connected in series, higher HTC values were found for the Type A-B series connection compared to the other series connection modes. In addition, the highest HTC value observed for preposition heat sinks connected in series is higher than that for single heat sinks. The effect of being connected in series on the HTC of two preposition heat sinks is small, and the HTC value of the Type A heat sink is optimal in the Type A-B series connection mode.
As shown in Figure 12b, for postposition Type A heat sinks connected in series, the HTC of heat sinks in the Type A-A and Type B-A connection modes increases at the beginning and remains constant at moderate mass flux, but decreases sharply at the end. For postposition Type B heat sinks connected in series, the HTC in the Type A-B connection mode increases slightly until the mass flux reaches 69.20 kg/(m² s), after which the HTC maintains a constant level. In addition, the HTC value of heat sinks in the Type B-B connection mode remains almost stable with increasing mass flux. In contrast, the majority of the HTC values of postposition heat sinks connected in series are higher than those of single heat sinks. In addition, the majority of HTC values of Type A heat sinks are higher than those of Type B heat sinks under different mass fluxes. It can be seen that a higher HTC value was obtained for the Type A-A series connection compared to the other series connection modes. In addition, the highest HTC value observed for postposition heat sinks connected in series is significantly higher than those of single heat sinks. The maximum increase in HTC is 12.77 kW/(m² K), at a mass flux of 80.74 kg/(m² s), which represents an improvement of 72.88% compared to a single Type A heat sink. The reason for the increase in HTC is that the heat transfer mechanism of the postposition heat sinks is mainly thin-liquid-film evaporation, which arises because the preposition heat sinks supply a portion of the steam. The effect of being connected in series on the HTC of the two postposition heat sinks is obvious, and the HTC in two postposition heat sinks is optimal.
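The 72.88% figure relates the HTC gain to the single-heat-sink baseline. The baseline value is not restated at this point in the text, but it can be back-computed from the two reported numbers; the derived values below are inferences for a consistency check, not quantities quoted from the paper.

```python
# Consistency check for the reported HTC improvement at 80.74 kg/(m^2 s):
# an increase of 12.77 kW/(m^2 K) that amounts to a 72.88% improvement
# implies the single Type A heat sink baseline below (back-computed,
# not quoted from the paper).

htc_increase = 12.77           # kW/(m^2 K), reported gain in series
improvement_fraction = 0.7288  # reported 72.88% improvement

baseline_htc = htc_increase / improvement_fraction  # implied single heat sink
series_htc = baseline_htc + htc_increase            # implied postposition, in series

print(f"implied single-heat-sink HTC: {baseline_htc:.2f} kW/(m^2 K)")
print(f"implied series HTC:           {series_htc:.2f} kW/(m^2 K)")
```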
The change trends of PD in two heat sinks connected in series are shown in Figure 13. Firstly, the PD values for individual Type A and Type B heat sinks are similar, and have an upward trend. The PD of the two single heat sinks increases when the mass flux changes from 34.60 kg/(m² s) to 57.67 kg/(m² s). When the mass flux exceeds 57.67 kg/(m² s), the PD of the two single heat sinks barely changes. These trends are similar to the change trend of the PD for preposition Type A and B heat sinks connected in series. Conversely, for postposition heat sinks connected in series, the PD increased significantly compared to single heat sinks. The analysis showed that the maximum difference in PD between the single heat sinks and the Type B heat sinks connected in series is 0.79 kPa, which represents an increase of about 200%. This is mainly because the flow boiling state in the postposition heat sink is more dramatic, and postposition heat sinks have a higher HTC compared to preposition heat sinks. Therefore, the series connection has a significant impact on PD values in postposition heat sinks. In addition, the PD values of two heat sinks in the Type A-A and Type B-B series connections are lower compared to the other series connection modes. The PD for postposition heat sinks connected in series is increased significantly compared to single heat sinks, but the HTC for postposition heat sinks connected in series is also higher.
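The "about 200%" figure likewise ties the 0.79 kPa difference to the single-heat-sink pressure drop. Back-computing the implied baseline (again an inference from the two reported numbers, not a value quoted from the paper):

```python
# If a 0.79 kPa increase in PD represents roughly a 200% increase, the
# single-heat-sink baseline PD is implied to be about 0.79 / 2.0 kPa.
# These derived values are inferences, not numbers quoted from the paper.

pd_increase = 0.79       # kPa, max difference vs. single heat sinks
relative_increase = 2.0  # "about 200%"

baseline_pd = pd_increase / relative_increase  # implied single-heat-sink PD
series_pd = baseline_pd + pd_increase          # implied PD in series

print(f"implied baseline PD: {baseline_pd:.3f} kPa")
print(f"implied series PD:   {series_pd:.3f} kPa")
```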
Comparison of Two Heat Sinks Connected in Series and Parallel
The mean differences in the Tw, HTC, and PD between single heat sinks and two heat sinks connected in series or parallel at mass fluxes of 34.60, 46.14, and 57.67 kg/(m² s) are presented in Table 5. Firstly, the mean difference in the Tw for two heat sinks connected in parallel is about +2.18 to +3.91 °C. This shows that the Tw of two heat sinks when using different parallel connections is obviously degraded. In addition, the Tw in the Type A-B parallel connection is the worst. The mean difference in the HTC for two heat sinks connected in parallel is about −1.50 to −8.40 kW/(m² K). This shows that the HTC of two heat sinks utilizing different parallel connections is degraded; in particular, this is most obvious for the Type A heat sink. Meanwhile, the mean difference in the PD for two heat sinks connected in parallel is small. This shows that the effect of the parallel connection on the PD for heat sinks is small. Subsequently, the mean differences in Tw between preposition heat sinks in different series connections increase, which shows that the series connection degrades the Tw of preposition heat sinks connected in series. However, the effect on the Tw is small compared to when connected in parallel. For the postposition heat sinks connected in series, the Tw is optimal in the Type A-A series connection. In addition, the mean HTC differences between the preposition and postposition heat sinks exhibit a small increase in the Type A-B series connection. The HTC in the Type A-A series connection shows the most obvious improvement. This shows that the HTC of the postposition heat sink can be optimized when connected in series, and that the effect of the series connection on the HTC of the preposition heat sink is small. The mean difference in the PD of the preposition heat sinks increases slightly when connected in series, but the increase is obvious for the postposition heat sinks. In addition, the difference in PD between the preposition and postposition heat sinks is up to 5-6 times larger. The series connection degrades the PD of two heat sinks utilizing different connection modes, especially in the case of postposition heat sinks. On the basis of an analysis of the mean differences, the Type A-A series connection has the greatest positive effect on the performance of two heat sinks connected in series, especially in the case of postposition heat sinks at moderate mass flux. However, the Tw, HTC, and PD characteristics of the two heat sinks are degraded when connected in parallel.
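The sign convention behind these mean differences can be read as "connected minus single", averaged over the three mass fluxes: a positive Tw difference and a negative HTC difference both indicate degraded thermal performance. A minimal sketch of that computation follows; the per-flux values are hypothetical placeholders, not the measured data.

```python
# Mean "connected minus single" differences over the three mass fluxes
# (34.60, 46.14, 57.67 kg/(m^2 s)). All values below are hypothetical
# placeholders illustrating the sign convention, not the measured data.

single_tw = [50.0, 51.0, 52.0]    # degrees C, single heat sink
parallel_tw = [52.5, 54.0, 55.4]  # degrees C, same heat sink in parallel

diffs = [p - s for p, s in zip(parallel_tw, single_tw)]
mean_diff = sum(diffs) / len(diffs)

# A positive mean Tw difference means the parallel connection runs hotter,
# i.e., its thermal performance is degraded relative to a single heat sink.
print(f"mean Tw difference: {mean_diff:+.2f} C")
```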
Figure 1 .
Figure 1.Diagram of the heat sink assembly with heating source and two cold plates: (a) the heat sink assembly with heating source; (b) microchannel cold plate with grooves; (c) microchannel cold plate without grooves.
Figure 2 .
Figure 2. Diagrams of the flow boiling system and two heat sinks connected in parallel or series: (a) two-phase cooling system; (b) two cold plates connected in parallel; (c) two cold plates connected in series; (d) wall temperature measurement point.
Figure 3 .
Figure 3. Temperatures of two heat sinks and measurement parameters in Case 1. (a) temperatures of two heat sinks; (b) the layout of the measurement parameters.
Figure 4 .
Figure 4. HTC and PD for two heat sinks utilizing the Type A-A parallel connection. (a) HTC of single and two parallel heat sinks; (b) PD of single and two parallel heat sinks.
Figure 5b depicts two heat sinks of Types A and B connected in parallel, along with the measurement parameters. The temperatures measured for the two heat sinks connected in parallel are shown in Figure 5a. It can be seen that the Tw values for the two heat sinks of Types A and B increase slightly with increasing mass flux. The values are higher than those of single heat sinks, with the maximum difference between single Type A heat sinks and those connected in parallel being 4.18 °C, while for the Type B heat sink the maximum difference is 3.57 °C. The Tw values for the Type A heat sink are lower than those of the Type B heat sink when connected in parallel. The outlet temperatures of the two heat sinks are similar until the mass flux reaches 51.90 kg/(m² s). Meanwhile, the outlet temperature of the Type B heat sink decreases obviously. This may be because boiling is inhibited by higher mass flux, and these changes can be further explained by the flow patterns in Figure 6. It can be seen that the Tw values for two heat sinks of Types A and B connected in parallel are degraded.
Figure 5 .
Figure 5. Temperatures of two heat sinks with the measured parameters marked in Case 2. (a) temperatures of two heat sinks; (b) the layout of the measurement parameters.
Figure 7 .
Figure 7. HTC and PD values for two heat sinks utilizing the Type A-B parallel connection. (a) HTC of single and two parallel heat sinks; (b) PD of single and two parallel heat sinks.
Figure 8 .
Figure 8. Main flow patterns in microchannels of the two heat sinks in Case 2. (a1) flow patterns in channels with grooves at t = 0 ms; (a2) flow patterns in channels with grooves at t = 45 ms; (a3) flow patterns in channels with grooves at t = 90 ms; (a4) flow patterns in channels with grooves at t = 135 ms; (a5) flow patterns in channels with grooves at t = 180 ms; (a6) flow patterns in channels with grooves at t = 225 ms; (a7) flow patterns in channels with grooves at t = 270 ms; (b1) flow patterns in channels without grooves at t = 0 ms; (b2) flow patterns in channels without grooves at t = 45 ms; (b3) flow patterns in channels without grooves at t = 90 ms; (b4) flow patterns in channels without grooves at t = 135 ms; (b5) flow patterns in channels without grooves at t = 180 ms; (b6) flow patterns in channels without grooves at t = 225 ms; (b7) flow patterns in channels without grooves at t = 270 ms.
Figure 9 .
Figure 9. Temperatures of two heat sinks with the measured parameters marked in Case 3. (a) temperatures of two heat sinks; (b) the layout of the measurement parameters.
Figure 10 .
Figure 10. HTC and PD of two heat sinks in a Type B-B parallel connection. (a) HTC of single and two parallel heat sinks; (b) PD of single and two parallel heat sinks.
Figure 11 .
Figure 11. Tw of two heat sinks utilizing different series connections. (a) Tw of preposition heat sinks; (b) Tw of postposition heat sinks.
Figure 12 .
Figure 12. HTCs of two heat sinks utilizing different series connections. (a) HTC of preposition heat sinks; (b) HTC of postposition heat sinks.
Figure 13 .
Figure 13. PDs of two heat sinks utilizing different series connections.
Table 2 .
The uncertainties of the measured and calculated parameters.
Table 3 .
Mean absolute deviations between the experimental and predicted HTC.
Table 4 .
Mean Tw, HTC, and PD of two heat sinks connected in parallel.
When Right Feels Left: Referral of Touch and Ownership between the Hands
Feeling touch on a body part is paradigmatically considered to require stimulation of tactile afferents from the body part in question, at least in healthy non-synaesthetic individuals. In contrast to this view, we report a perceptual illusion where people experience “phantom touches” on a right rubber hand when they see it brushed simultaneously with brushes applied to their left hand. Such illusory duplication and transfer of touch from the left to the right hand was only elicited when a homologous (i.e., left and right) pair of hands was brushed in synchrony for an extended period of time. This stimulation caused the majority of our participants to perceive the right rubber hand as their own and to sense two distinct touches – one located on the right rubber hand and the other on their left (stimulated) hand. This effect was supported by quantitative subjective reports in the form of questionnaires, behavioral data from a task in which participants pointed to the felt location of their right hand, and physiological evidence obtained by skin conductance responses when threatening the model hand. Our findings suggest that visual information augments subthreshold somatosensory responses in the ipsilateral hemisphere, thus producing a tactile experience from the non-stimulated body part. This finding is important because it reveals a new bilateral multisensory mechanism for tactile perception and limb ownership.
Introduction
Under normal conditions, humans are highly capable of localizing touch to a particular area of skin being stimulated. Visual information can further guide the localization of touch [1,2], modify the quality of somatic sensations [3,4,5] and improve tactile acuity in healthy individuals [6,7]. But neither visual nor auditory stimuli have been considered to be able to cause tactile sensations in the absence of physical stimuli activating peripheral tactile receptors in the skin of healthy individuals. Thus multisensory signals are viewed as having only a modulatory role in tactile perception, as they have in unimodal perception more generally [8,9,10,11].
In some neurological cases, however, the boundary between multisensory processes and unimodal perception has been dissolved. In this respect Halligan and colleagues described a patient with hemiparesis after stroke who felt the touch when he merely watched a touch being applied to his paralyzed limb [12]. Similarly, in patients with hands rendered anesthetic by stroke or neurosurgery, touches applied to the intact hand produced tactile sensations in the anesthetic hand [13]. Ramachandran and colleagues reported a similar phenomenon when upper limb amputees saw a mirror image of their intact hand superimposed on their stump, which was hidden from their view behind a mirror [14]. When they saw the 'missing limb' being touched in the mirror, they reported feeling touches on their phantom limb [15]. These neurological cases suggest that brain plasticity and central reorganization might up-regulate the processing of tactile signals from the ipsilateral intact body half, and that these signals can be combined with visual signals from the impaired limb, resulting in "phantom touch sensations". Presumably, this happens via plastic changes in the multisensory areas in the posterior parietal cortex [13] that integrate tactile information from the hands [16,17,18] with visual information [19,20,21,22,23,24].
The sense of touch is intimately linked to the perception of one's own body. A limb that can feel touch is typically experienced as being one's own, as was famously demonstrated in the case of the rubber hand illusion [25]. In this illusion, simultaneous brushing of a rubber hand in full view of the participant, and of the participant's hand, which is out of view behind a screen, produces the illusion that the participant feels the touch of the paintbrush 'in' the rubber hand and experiences the dummy hand as his or her own hand [26,27]. The referral of touch and ownership to the rubber hand only works if certain criteria are satisfied, namely, that: the rubber hand and the real one are touched synchronously [25,26,28], the rubber hand is aligned parallel to the hidden real hand [26,28,29,30], the two hands are touched on corresponding sites [31], the rubber hand is of the same laterality as the hidden hand (e.g., right rubber hand and right real hand [28]), and the distance between the hands is less than 35 cm [32]. This indicates that the visuo-tactile integration underlying the phenomenon operates in arm-centered reference frames in near-personal space [27,29], probably mediated via multisensory neuronal populations in the premotor cortex and posterior parietal cortex [20,26,33,34].
In our laboratory, we recently discovered an unexpected version of the rubber hand illusion that demonstrates an important new role played by homologous limbs for the sense of ownership and tactile perception. We found that healthy participants can experience a ''phantom touch'' on a right rubber hand that they see being brushed in the absence of any touch delivered to their hidden right hand. This occurs when the contralateral left hand is stimulated synchronously at the corresponding homologue's site. This ''bimanual transfer of touch'' is also associated with the feeling of ownership of the rubber hand. These findings are of fundamental importance because they reveal how multisensory interactions between the hands cause qualitative changes in unimodal tactile perception, and that this has a direct consequence for how we come to experience limbs as part of our own body.
Participants
Thirty healthy naïve participants (mean ± s.d. age 25 ± 5 years, 15 females) participated in our first experiment. For the second experiment, a new group of fourteen volunteers was recruited (mean ± s.d. age 24 ± 6 years, 8 females). Another group of fourteen volunteers participated in our third experiment (mean ± s.d. age 26 ± 9 years, 7 females). Thirteen new participants took part in the fourth experiment (mean ± s.d. age 29 ± 7 years, 5 females). All participants gave their written informed consent prior to participating in the relevant experiment. This study was conducted according to the principles expressed in the Declaration of Helsinki. The study was approved by the Institutional Review Board of the Regional Ethics Committee of Stockholm and Karolinska hospitals. All participants provided written informed consent for the collection of samples and subsequent analysis.
Experimental design
The experiments were designed to include three experimental manipulations and to obtain three complementary measures of the illusion (see below for details). We changed the timing of the stimulation on the two hands, hypothesizing that only synchronous stimulation would produce the illusion (Experiments #1, #2, and #3). The orientation of the rubber hand was also varied (Experiment #2) to test the prediction that the right rubber hand has to be aligned with the participant's own hand, i.e. that it has to be placed in an anatomically congruent position. Finally, we studied the effect of the laterality of the hand (right vs. left) to test the hypothesis that the illusion only works for a homologous pair of a left hand and a right (rubber) hand. The combination of subjective (Experiment #1), physiological (Experiment #2) and behavioral (Experiments #3 and #4) measures of the illusion provides robust and corroborative evidence for the illusion.
Experimental setup
The participants were seated with their arms resting prone on a table as depicted in Figure 1. A life-size right cosmetic hand prosthesis was placed on the table twenty-one centimeters to the right of the midline of the participants' body. The real right hand was hidden behind a screen at a distance of twenty centimeters from the rubber hand. The left hand was placed in full view twenty-one centimeters to the left of the midline of the body. A towel was laid over the proximal ends of the arms to cover the gap between the rubber arm and the person's body. The setup thus created the visual impression that the participants had placed both of their hands on the table parallel to one another (Figure 1). All participants were instructed to look at the rubber hand. Two identical brushes were used to stroke the left real and the right rubber hand either synchronously (corresponding to the illusion condition used in all experiments) or asynchronously (providing the control condition for Experiments #1, #2, and #3). The touches were delivered to the corresponding parts of the index and middle fingers of the right rubber hand and left real hand. An irregular, but synchronous, rhythm of brushing was chosen to enhance the illusion since this mode of stimulation is known to maximize the traditional rubber hand illusion (unpublished observations). The brushing in the asynchronous condition followed an irregular and alternating pattern. The participants were explicitly instructed not to move their right hand behind the occluding screen.
Questionnaire data (Experiment #1)
Our first experiment consisted of two sessions, one of synchronous and one of asynchronous brushing of the two visible hands (i.e. the left real hand and the right rubber hand). Each session lasted five minutes. Half of the participants started with the synchronous and the other half started with the asynchronous condition. At the end of each session, the participants were asked to fill out a short questionnaire, which consisted of nine statements about the experiences they might have had during the stimulation. Four statements (Q1-Q4) were designed to capture different aspects of the illusory perception related to the sensation of touches on the rubber hand and the feeling of ownership of that hand. One statement (Q5) was constructed to explore possible sensations in the real right hand induced by the visuo-tactile conflict, as suggested by the results of a previous study [35] and pilot experiments. Statements Q6-Q9 served as control questions for task compliance and susceptibility effects (see Table 1). The participants were asked to rate their level of agreement with the statements on a seven-point Likert scale ranging from ''+3'' (agree very strongly) to ''−3'' (disagree very strongly), where ''0'' corresponded to neither agreeing nor disagreeing.
Physiological recordings (Experiment #2)
In the second experiment, we measured the skin conductance response following simulated physical injury to the rubber hand. This experiment was included to provide objective physiological evidence for the illusion. Previous work has demonstrated a relationship between the feeling of ownership of a rubber hand and the anxiety experienced when this hand is subjected to physical threats [36,37]. The anxiety triggered by physical threats leads to changes in skin sweating and, hence, in skin conductance. We included three conditions: the synchronous and asynchronous stimulation conditions from Experiment #1, and a third condition in which the rubber hand was rotated 180 degrees and synchronous stimulation was applied. The latter experimental manipulation is known to reduce the traditional rubber hand illusion [26]. We included this condition to control for possible association-learning effects induced by a period of synchronized visual and tactile stimuli.
All three conditions were repeated three times in an order that was balanced across the participants. At the end of each session, a needle was stabbed into the rubber hand and the skin conductance response (SCR) was measured with two Ag-AgCl reusable electrodes attached to the middle and index fingers of the right hand, hidden behind the screen. We used Signa electrode gel (Parker Laboratories, Inc., New Jersey, USA). The data were registered with a Biopac System MP150 (100 samples per second) and processed with the Biopac software Acqknowledge for Windows ACK100W. The participants wore the electrodes for a few minutes before the start of the recording. The parameters of the recording were as follows: the gain switch was set to 5 mmho/V and the CAL2 Scale Value was set to 5. The timing of the stabbing events was marked in the raw data files during the recordings by the experimenter pressing a key. A one-way repeated-measures ANOVA was used to test for statistical differences in the SCRs for the three conditions. The SCR was identified as the peak in the conductance that occurs up to 5 seconds after the onset of the threat stimulus. The amplitude of the SCR was measured as the difference between the minimal and maximal values of the response identified in this time-window. We calculated the average of all responses, including the trials where no response was apparent, thus analyzing the magnitude of the SCR [38]. Participants who did not show a reliable threat-evoked SCR ('null responders'), i.e. had zero responses in more than two-thirds of the trials, were excluded from the analysis.
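The amplitude and exclusion rules described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code: the function names, the 100 Hz sampling-rate argument, and the synthetic traces in the usage example are ours; only the rules (peak-to-trough change within 5 s of the threat, averaging over all trials including nulls, exclusion of participants with zero responses in more than two-thirds of trials) come from the text.

```python
def scr_amplitude(trace, onset, fs=100, window_s=5.0):
    """Peak-to-trough conductance change in the 5 s window after threat onset."""
    seg = trace[onset:onset + int(fs * window_s)]
    return max(seg) - min(seg)

def scr_magnitude(amplitudes):
    """Mean over ALL trials, zero (null) responses included: the 'magnitude' measure [38]."""
    return sum(amplitudes) / len(amplitudes)

def is_null_responder(amplitudes, eps=1e-9):
    """True if zero responses occur in more than two-thirds of the trials."""
    nulls = sum(1 for a in amplitudes if a <= eps)
    return nulls > (2.0 / 3.0) * len(amplitudes)
```

With a flat synthetic trace containing one 0.8 µS deflection 0.5 s after a threat onset, `scr_amplitude` returns 0.8, and a participant with responses on only one of four trials is flagged by `is_null_responder`.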
Proprioceptive drift measure (Experiments #3 and #4)
In the traditional rubber hand illusion, the feeling of touch on the rubber hand is associated with a drift in the perceived location of the hand towards the location of the rubber hand [25,28,39], with both hands having the same handedness. In our third experiment we wanted to determine whether the present illusion of the transfer of touch from one hand to the other was associated with changes in proprioception. This would also provide objective behavioral evidence that the rubber hand is perceived as one's own hand. In this experiment, the participants were exposed to periods of three minutes of synchronous and asynchronous brushing of the left real hand and the right rubber hand (as in Experiment #1). In a fourth experiment we used this proprioceptive drift measure (see Results) to test the hypothesis that in our set-up the bilateral illusion requires a homologous pair of limbs, i.e. that the effect requires a pair of right and left hands. Thus, as a control condition, we replaced the right rubber hand with a left one and brushed the real left hand and the left rubber hand simultaneously.
In both experiments (#3 and #4), the two conditions were repeated three times in a balanced order across participants. Between each brushing session there was a break of one minute. Directly before and directly after each period of brushing, the participants were asked to close their eyes and indicate the position of their right index finger by pointing with their left hand. Before making this response, the experimenter positioned a ruler 31 centimeters above the table 49 centimeters in front of the participant's body. The experimenter placed the participant's left index finger at the starting point of the ruler, which was just in front of the body midline, and asked him or her to move that finger briskly along the ruler and stop until it was immediately above where he or she felt the right hand to be located. We computed the differences in pointing error (towards the rubber hand) between the measurements made before and after each period of stimulation. The average of the difference values was compared between the two conditions using paired t-tests.
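The drift computation described above (pointing position after minus before each brushing period, with the per-condition averages compared across participants by a paired t-test) can be sketched as follows. Function names and the numbers in the usage example are illustrative, not taken from the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def proprioceptive_drift(pre_cm, post_cm):
    """Per-session pointing shift (positive = drift towards the rubber hand)."""
    return [post - pre for pre, post in zip(pre_cm, post_cm)]

def paired_t(cond_a, cond_b):
    """Paired t statistic and degrees of freedom for matched per-participant values."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
    return t, len(diffs) - 1
```

For example, with per-participant mean drifts of [3.0, 2.0, 4.0] cm (synchronous) against [1.0, 0.5, 1.5] cm (asynchronous), `paired_t` returns t ≈ 6.93 on 2 degrees of freedom.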
Statistical analyses
In Experiment #1 we compared the illusion questions to the control questions, in the synchronous and asynchronous conditions respectively, using a 2×2 ANOVA with the factors Condition (Synchronous, Asynchronous) and Question type (Illusion, Control). Our planned comparison was the interaction between Condition and Question type, i.e. a greater difference between the illusion and control questions during the synchronous stimulation than during the asynchronous stimulation.
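As noted in the Results, this factorial analysis was run on ranks: the raw Likert ratings are replaced by their (tie-averaged) ranks before the standard 2×2 ANOVA is applied. A minimal, generic rank transform might look like the sketch below; this is our own illustration of the preprocessing step, not the authors' code.

```python
def rank_transform(values):
    """Replace values by their 1-based ranks; tied values share the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks
```

For instance, `rank_transform([1, 1, 2])` yields `[1.5, 1.5, 3.0]`, so the two tied ratings share the mean of ranks 1 and 2.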
In Experiment #1 we also analyzed the correlations between scores on the illusion questions related to feeling touch on the rubber hand and the feeling of limb ownership. In the traditional rubber hand illusion it is well known that these perceptual experiences are tightly coupled (Makin et al. 2008). On the basis of our observations from pilot experiments, we predicted that a similarly tight correlation would be observed between the experiences of phantom touches and ownership of the model hand in the present set-up.
In Experiment #2 we predicted greater skin conductance responses in the illusion condition than in each of the two control conditions. Thus, we first used a one-way ANOVA to test for an effect of condition on the SCR. We then conducted two planned comparisons, comparing the illusion condition to each of the two control conditions (corrected for multiple comparisons).
In Experiment #3 we predicted greater proprioceptive drift towards the rubber hand in the illusion condition than in the control condition. In Experiment #4 we predicted that the proprioceptive drift towards the rubber hand would be observed only when a right rubber hand is brushed in synchrony with the left real hand, and that the effect would be abolished when the right rubber hand is replaced by a left rubber hand. In both experiments we used t-tests to compare the two conditions.
The reader should note that in all our experiments we used the more conservative two-tailed statistical tests, even in the case of planned comparisons with one-tailed predictions. We used the Kolmogorov-Smirnov test to check the parametric assumptions and, in cases of violations, used non-parametric statistical tests, as indicated in the Results section. Apart from the correlation analysis in Experiment #1, in which we set alpha to 2.5% due to the multiple comparisons among Q1, Q3, and Q4, we set alpha to 5% in all remaining tests.
Questionnaire data (Experiment #1)
Sixteen out of the thirty participants (53%) felt as though the rubber hand was their real hand (ratings on statement Q1 of +1 or higher) when it was brushed for a prolonged time in synchrony with their left hand (Table 1). Similarly, sixteen participants (53%) reported the sensation of two distinct touches: one on the right rubber hand and the other on the real left hand.
The rating scores were significantly greater on the illusion questions than on the control questions, and this effect was significantly greater after a period of synchronous stimulation, as we had predicted. Statistically, we demonstrated this effect using a two-way 2×2 ANOVA on ranks. Specifically, we obtained significant main effects of the factors ''Condition'' (synchronous, asynchronous) (N = 30, p < .001, F(1, 29) = 25.367) and ''Question type'' (illusion, control) (N = 30, p = .039, F(1, 29) = 4.674), and, crucially, a significant interaction between the two factors (N = 30, p = .035, F(1, 29) = 4.892).
As we predicted, there was a significant correlation between experiencing duplication of touch and feeling ownership of the rubber hand (N = 30, p = .021, r = .418, two-tailed Pearson correlation), i.e. a correlation was observed between the ratings of Q1 (''I felt as if the rubber hand was my hand'') and Q4 (''I could sense two touches, both on my (real) left hand and on the right rubber hand''). A highly significant correlation was also observed between the ratings of Q1 and Q3 (''It seemed as if I was feeling the touch of the paintbrush on the rubber hand'') (N = 30, p < .001, r = .644, two-tailed Pearson correlation) (Figure 2). It was important to analyze the correlations between the different illusion questions because strong correlations would imply that our objective tests for ownership (see below) would also provide evidence for the experience of ''phantom touches'' on the rubber hand.
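The item-by-item correlations reported here are plain two-tailed Pearson correlations over the participants' ratings. For reference, the coefficient itself can be computed as below; this is a generic sketch, and the example vectors in the test are invented, not questionnaire data from the study.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

A perfectly linear relation yields r = 1, a perfectly inverted one r = −1, and uncorrelated ratings r ≈ 0.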
In two additional pilot experiments we measured how long it took before the onset of the illusory perception of touch: we found that it takes more than one minute of synchronous stimulation. In these pilot tests, we also observed that simply brushing the rubber hand for five minutes without simultaneous brushing of the contralateral real hand does not elicit the illusion. In other words, just seeing the rubber hand brushed does not produce the referral of tactile sensations.
Physiological recordings (Experiment #2)
In line with our hypothesis, people displayed greater skin conductance responses when we stabbed the rubber hand with the needle after the illusion condition than after the control conditions. There was a significant effect of condition (synchronous brushing, asynchronous brushing, and synchronous brushing of the rotated rubber hand) on the stabbing-evoked SCR (N = 14, p = 0.028, F(2, 26) = 4.138, one-way repeated-measures ANOVA) (Figure 3). We used the Student-Newman-Keuls method for pairwise multiple comparisons between the conditions, which yielded significant results for the comparisons between the illusion condition and each of the two control conditions (N = 14, p = 0.035 and p = 0.030, respectively), and a non-significant result for the comparison between the two control conditions (N = 14, p = 0.729).
Proprioceptive drift measure (Experiments #3 and #4)
Experiment #3 demonstrated that the illusion was associated with a drift in the perceived location of the right hand towards the rubber hand (Figure 4a). The mispointing towards the rubber hand was significantly greater after the synchronous condition (3.00 ± 2.25 cm, corresponding to 15.5% of the distance between the hands) than after the asynchronous one (0.80 ± 1.87 cm; 4%) (N = 14, p = .012, two-tailed t-test).
In our fourth experiment we found that the mispointing in the direction of the rubber hand requires a rubber hand of the same laterality as the real hand hidden from view; that is, the illusion does not occur when the right rubber hand is replaced with its left counterpart. The proprioceptive drift was significantly greater after a period of synchronous brushing of the right rubber hand (2.04 ± 2.28 cm; 10.2%) than after an equivalent period of stimulation using the left rubber hand (0.23 ± 1.79 cm; 1.15%) (N = 13, p = 0.01, two-tailed t-test) (Figure 4b).
Discussion
We have reported a perceptual illusion in which touches applied to a participant's left hand are sensed on a right rubber hand when both hands are brushed synchronously. For this phenomenon to occur, the rubber hand had to be a right hand, it had to be oriented in parallel to the person's hidden right hand in an anatomically plausible position, and the touches delivered to the two hands in view had to be synchronous. These observations suggest that visual, tactile and proprioceptive information from the two hands is integrated automatically, even in the absence of bimanual action or bimanual tactile exploration, and that this bilateral multisensory integration can cause qualitative changes in tactile perception and limb ownership.
The questionnaire ratings revealed that only 16 out of the 30 participants (53%) reported feeling the illusion at all (i.e., gave scores of +1 or higher to statement Q1). This is lower than for the original rubber hand illusion, which is perceived by approximately 70% of participants [26,32,39]. Furthermore, the illusion presented here requires a longer period of stimulation to be elicited (typically minutes), whereas the original rubber hand illusion is experienced in most cases after only ten to fifteen seconds of synchronized brushing [26,32]. These differences suggest that the bilateral transfer illusion implicates additional processes related to the integration of visual and tactile input from opposite sides of the body.
It is important to emphasize that our data rule out the possibility that the present perceptual effect is merely a weak rubber hand illusion as described by Lloyd [32], who demonstrated that the greater the distance between the rubber hand and the participant's real hand, the weaker the illusion. In that study, the maximum distance over which referred tactile sensations could be attributed to an artificial hand of the same laterality was estimated to be approximately thirty centimeters [32]. In our set-up, the distance between the stimulated left hand and the right rubber hand was forty-two centimeters, which according to Lloyd's data would suggest that the illusion should not work very well. Crucially, in our fourth experiment, we made a direct comparison of the difference in the illusions when a right or a left rubber hand was brushed in synchrony with the left hand, the real right hand being hidden from view throughout. Importantly, only the right rubber hand produced a significant drift in proprioception, which demonstrates that the bilateral transfer illusion involves different processes.
By the same argument, it is also unlikely that the present bilateral transfer illusion relies on the same process that created the duplication of touch sensation onto two rubber hands in the recently described ''three-arm illusion'' [40]. In this experiment, the person's right hand is placed under a table and two right rubber hands are placed side by side (10 cm apart), 10 cm above the real hand. Simultaneous brushstrokes applied to the three hands produced the sensation of touch on both rubber hands. However, for this illusion to work, the rubber hands have to be of the same laterality as the stimulated real hand (as found in the pilot experiments). Importantly, in the fourth of our experiments reported here, the left rubber hand condition effectively served as a control for a putative duplication of touch from the brushed left hand to any rubber hand placed 42 cm to the right of the stimulated hand. We observed a significantly greater proprioceptive drift in the right-hand condition, which elicits the bilateral transfer illusion.
To the best of our knowledge, the present illusion is the first where tactile sensations are transferred from one limb to another across the body midline in healthy participants. In the 'cutaneous rabbit illusion' rapid stimulation at the wrist followed by stimulation near the elbow creates the illusory perception of touch at intervening locations along the arm [41,42]. In another illusion, the so-called ''tactile funnelling illusion'', people experience one touch at a location between two close sites of physical stimulation on the skin [43,44,45,46]. In the somatosensory version of Shams' ''double flash illusion'' [47], participants experience two brief touches when the index finger is tapped once in combination with two brief flashes or auditory clicks [48]. All of these illusions are associated with a shift in the perceived location of touch on a limb, or with the duplication of the number of touches experienced at a particular location. The illusion reported here, however, is different because the touch was transferred between two homologous limbs. Thus, a right rubber hand 'felt' the touch that was applied to the left hand.

What brain mechanisms might be responsible for the present bilateral illusion? The transfer of tactile information from the left to the right hand could be mediated by neurons with bilateral tactile receptive fields in the parietal cortex. Electrophysiological studies in primates have revealed a substantial number of neurons with bilateral tactile receptive fields in Brodmann's areas 2 and 5 [16,17,18]. Such cells probably exist in the human brain too, as fMRI experiments have reported ipsilateral activation in areas 2 and 5 during unilateral somatosensory stimulation of the hand [49,50]. Similarly, in non-human primates, cells with bilateral tactile receptive fields have been found in the parietal operculum in areas neighboring the SII cortex [51,52].
Positron emission tomography [53,54], fMRI [55] and magnetoencephalography [56] studies, too, show bilateral responses in the parietal operculum in humans during unilateral tactile stimulation. The ipsilateral tactile responses in primates are likely mediated by callosal projections from the contralateral somatosensory areas, although thalamocortical input from the ventrobasal complex is a viable alternative [57]. Humans who have had their corpus callosum sectioned as part of surgical procedures show reduced or eliminated ipsilateral responses in SII and parietal areas 2 and 5 [58]. Iwamura demonstrated that lesions of the contralateral SI in monkeys eliminated most of the ipsilateral responses in areas 2 and 5, which is consistent with the interpretation that these cells receive tactile information from the contralateral somatosensory cortex via callosal connections [17]. Thus, a plausible scenario for the 'phantom touch' reported here would be that the prolonged tactile stimulation of the participant's left hand generated weak activation in ipsilateral somatosensory areas. The time necessary for the 'phantom touch' to be perceived suggests that, initially, these ipsilateral responses were below the threshold for conscious perception. However, when the sub-threshold ipsilateral activation was combined with the temporally and spatially congruent visual information from the contralateral rubber hand, the ipsilateral tactile responses were up-regulated and produced the 'phantom' touch sensations. The visual information from the brushed right rubber hand could influence the ipsilateral tactile processing at several cortical nodes in the left hemisphere.
Although areas 2, 5, and SII do not receive strong visual input [but see [20,59]], they are reciprocally connected to multisensory areas such as the ventral premotor cortex [60,61,62], area 7 in the inferior parietal cortex [51,52,63,64], and the ventral intraparietal area [VIP; [65,66,67]], all of which are known to be areas that receive substantial visual input [68,69,70,71]. Thus, within these fronto-parietal circuits, the ipsilateral tactile information could be fused with visual and proprioceptive information from the right hand. This could be achieved by the integration of visual and tactile signals in arm-centered reference frames centered on the right rubber hand [27], implemented by neuronal populations in the ventral premotor cortex and the intraparietal cortex [26,39].
It is still an open question whether the bimanual transfer of touch in healthy individuals involves mechanisms similar to those that induce 'phantom touch' sensations in patients with hemi-sensory loss or amputation [12,13,15]. In these cases, subsequent to brain damage or the loss of a limb, central plasticity could lead to a strengthening of the commissural connections and the ipsilateral somatosensory representations. In healthy individuals, as in our study, a couple of minutes of congruent visual and tactile stimulation seems sufficient to up-regulate ipsilateral somatosensory processing. Future imaging experiments are needed to characterize this hypothesized up-regulation process and to localize the neural correlates of the phantom touches with precision.
In conclusion, our study has introduced a novel version of the rubber-hand illusion in which converging multisensory input from both sides of the body suffices to change the feeling of limb ownership and to elicit illusory tactile sensations on an unstimulated limb. This reveals an inter-hemispheric mechanism for tactile perception and multisensory integration which is involved in the perception of our own bodies. Our finding could have a bearing on applied neuroscience, as tactile stimulation to an intact hand in amputees might support the ownership and usage of prosthetic limbs [31,72]. Similarly, research on stroke rehabilitation should examine the possibility that physiotherapy of a hemiplegic limb might be facilitated by concurrent tactile stimulation of the contralateral limb.
"year": 2009,
"sha1": "cf7336556ea94d8b828b32cb0fc6e0a9e04ff792",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0006933&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc88a3a0ace43562b76b760057201a377a5aea5d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Modeling Transmembrane Domain Dimers/Trimers of Plexin Receptors: Implications for Mechanisms of Signal Transmission across the Membrane
Single-pass transmembrane (TM) receptors transmit signals across lipid bilayers by helix association or by configurational changes within preformed dimers. The structure determination for such TM regions is challenging and has mostly been accomplished by NMR spectroscopy. Recently, the computational prediction of TM dimer structures is becoming recognized for providing models, including alternate conformational states, which are important for receptor regulation. Here we pursued a strategy to predict helix oligomers that is based on packing considerations (using the PREDDIMER webserver) and is followed by a refinement of structures, utilizing microsecond all-atom molecular dynamics simulations. We applied this method to plexin TM receptors, a family of 9 human proteins involved in the regulation of cell guidance and motility. The predicted models show that, overall, the preferences identified by PREDDIMER are preserved in the unrestrained simulations and that TM structures are likely to be diverse across the plexin family. Plexin-B1 and -B3 TM helices are regular and tend to associate, whereas plexin-A1, -A2, -A3, -A4, -C1 and -D1 contain sequence elements, such as poly-glycine or aromatic residues, that distort helix conformation and association. Plexin-B2 does not form stable dimers due to the presence of TM prolines. No experimental structural information on the TM region is available for these proteins, except for plexin-C1 dimeric and plexin-B1 trimeric structures inferred from X-ray crystal structures of the intracellular regions. Plexin-B1 TM trimers utilize Ser and Thr sidechains for interhelical contacts. We also modeled the juxta-membrane (JM) region of plexin-C1 and plexin-B1 and show that it synergizes with the TM structures. The structure and dynamics of the JM region and TM-JM junction provide determinants for the distance and distribution of the intracellular domains, and for their binding partners relative to the membrane.
The structures suggest experimental tests and will be useful for the interpretation of future studies.
Introduction
How information is transmitted across cellular membranes remains a key problem in biology [1]. In the case of receptors that transverse the plasma membrane, ligand binding events on the outside are typically transmitted to the cytoplasm by configurational changes of the transmembrane (TM) regions, such as dimerization and/or conformational changes (for example in the orientation or position of TM helices relative to one another) (Fig. 1a) [2]. The study of membrane proteins remains challenging, especially for receptors that cross the membrane only once. Remarkably, no crystals of the helical TM regions of such receptors have been obtained/solved to date. TM domain structures for single-pass receptors, such as EGFR, ErbB2, EphA1, EphA2, and VEGFR2 (currently about 10 structures), have been derived by NMR spectroscopy or other biophysical techniques (e.g. [2,3]). Recently, molecular modeling and simulations play an increasing role for interpreting the experimental data [4,5,6,7,8]. Moreover, as the accuracy of reproducing the experimental structures increases, reliable predictions can be made. In this project, we advance on our previous study, which combined the prediction of helix contacts in TM dimers with extensive all-atom molecular dynamics (MD) [9]. Here we present predictions for the 9 members of the human plexin-family of TM receptors (plexin-A1-4, -B1-3, -C1 and -D1).
Plexins [10] are unique TM receptors in that they interact directly with small GTPases in diverse manners. This includes direct interactions with Rho GTPases and transient/catalytic interactions with Ras GTPases, as plexin functions as a GTPase Activating Protein (GAP) [10]. We have characterized the Rho GTPase Binding Domain (RBD) of plexin and developed a model, which posits a direct participation of Rho GTPases in the regulation of some of plexin's functions [11,12,13]. Previously, it was shown that plexin signaling is outside-in (activation upon ligand binding outside) as well as inside-out (increased activation and ligand binding due to binding of certain Rho GTPases inside) [14,15]. While this mode of synergistic communication is seen in several other systems (e.g. for Integrins [16]), the molecular mechanism remains to be uncovered for plexins. Clearly, the TM region plays a key role, but given the low sequence similarity of this region across the 9 human proteins (Fig. 1b), the signaling mechanisms are likely to be diverse amongst members of the plexin family. For example, the typical GxxxG motifs, usually used for close helix-helix packing [17], are not well conserved between-or even within-plexin subfamilies. Thus, there is considerable interest to predict plexin TM helix dimer structures and to understand their configurational behavior.
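Scanning a TM sequence for the GxxxG packing motifs mentioned above amounts to a simple overlapping-pattern search. Below is a minimal sketch using the strict GxxxG pattern; the function name is ours and the example strings in the usage note are made up, not actual plexin TM sequences.

```python
import re

def find_gxxxg(seq, pattern=r'(?=G...G)'):
    """Start positions of (possibly overlapping) GxxxG motifs in a protein sequence.
    A common relaxation also allows other small residues (Ala/Ser):
    pattern=r'(?=[GAS]...[GAS])'."""
    return [m.start() for m in re.finditer(pattern, seq.upper())]
```

For example, `find_gxxxg("AAGLLLGAA")` returns `[2]`, and the overlapping tandem motif in `"GLLLGLLLG"` is reported at both positions 0 and 4; the lookahead pattern is what allows overlapping hits.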
We utilized a two-step approach: First, helix dimers were predicted ab initio based on helix packing considerations using the PREDDIMER server [18,19], a method that has been systematically benchmarked against known TM dimer structures. The best 3-7 predictions were then compared structurally across the family of 9 human plexins and 13 examples were chosen to cover the diversity of structures and subfamilies. Second, as an additional refinement, if not a testing step with respect to a state-of-the-art all-atom forcefield, these structures were embedded in an explicit lipid bilayer and solvent (Fig. 1c) and equilibrated over a period of around 1.0 μs [9] on the MD-optimized supercomputer Anton [20]. Nearly all of the predicted structures were stable and converged during these simulations. Thus, the ab initio predictions with PREDDIMER are relatively accurate. The diverse behavior of the TM helices across the plexin family is discussed. For plexin-B1, the best studied plexin to date, we also modeled the TM trimer and considered the role of part of the intracellular region, which immediately follows the TM segment: the so-called juxta-membrane (JM) segment.
Fig 1. a) Modes for transmitting information across the cellular membrane in single-pass TM receptors: Translation (monomer-dimer association); Piston (sliding of helices to change register); Pivot (change in inter-helix crossing angles); and Rotation (change of helix interacting surfaces). b) Amino-acid sequence alignment of TM and TM-proximal regions of all 9 human plexins: TM regions shaded grey; extra N- and C-terminal extensions underlined in red/blue for peptides for which all-atom simulations were carried out; the juxta-membrane (JM) region is shown in blue for plexin-B1 and plexin-C1. The number after the plexin name corresponds to the first residue shown in the alignment. c) Comparison of the plexin-B1 TM-only peptide structure obtained from PREDDIMER (left) and the same peptide with helix N- and C-terminal extensions embedded in a lipid bilayer (right; structures of model b1.2 are shown). The peptide is shown in ribbon representation; the lipids are given in all-atom line representation on the right, and the implicit bilayer of the PREDDIMER prediction is shown as orange lines on the left.
Together with recent plexin-C1 dimer and plexin-B1 trimer structures of the intracellular region [21,22], we are able to make predictions concerning plexins' configurational behavior and likely functional modes.
Comparison between PREDDIMER and CHARMM-forcefield all-atom μs-simulation refined TM structures
TM helix dimer structures were predicted ab initio for all 9 human plexins using the webserver PREDDIMER [19]. The full results are given in Table A in S1 File for the 26 best structures with packing scores (Fscor > 2.5). The PREDDIMER output was examined in terms of crossing angles, the location of the interface contact, and the rotation of the helices relative to one another. Pairwise RMSD alignments were calculated and scaled for the extent of residue similarity between all of the 26 structures (see Table B in S1 File). Together, the helix geometric parameters suggested a grouping, with several additions to include at least two members of each subfamily. Thus, a diverse set of 13 structures was selected for refinement and testing. The parameters for these structures are given in Table 1, with comments on helix dimer configurations. Both right- and left-handed crossed structures were selected, and crossing angles range from 60° to -55°, with several also near ±10°; the latter indicating largely parallel helix configurations. Although all structures are homodimers, it should be noted that the predicted structures are not always symmetric. This is shown, for example, by the different helix rotation angles for model b1.7. The 13 models were then prepared for all-atom molecular dynamics (MD) simulations as described in the Methods section.
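The helix geometric parameters used for this grouping can be computed directly from coordinates. As an illustration (not the authors' actual analysis code; the function names and sign convention here are our own), the sketch below estimates a signed inter-helix crossing angle from Cα coordinates, taking each helix axis as the first principal component of its backbone positions:

```python
import numpy as np

def helix_axis(ca_xyz):
    """First principal axis of a helix from its C-alpha coordinates (N x 3)."""
    centered = ca_xyz - ca_xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)      # top right-singular vector = axis
    axis = vt[0]
    # Orient the axis from the N- toward the C-terminus
    if np.dot(ca_xyz[-1] - ca_xyz[0], axis) < 0:
        axis = -axis
    return axis

def crossing_angle(ca_a, ca_b):
    """Signed inter-helix crossing angle (degrees) between two helices."""
    a, b = helix_axis(ca_a), helix_axis(ca_b)
    ang = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    # Sign from the triple product with the vector connecting the centroids
    conn = ca_b.mean(axis=0) - ca_a.mean(axis=0)
    return np.sign(np.dot(np.cross(a, b), conn)) * ang
```

Helix rotation angles (the orientation of a helix face toward the partner) require an additional per-helix reference vector and are omitted from this sketch.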
Since regions immediately outside the hydrophobic TM segment can influence the helix dimer configuration (e.g. [5]), we extended the TM helix peptides by addition of up to 10 residues from the native human plexin sequences, both at the N- and C-termini (Fig. 1b,c). The structures were prepared as explained in the Methods section and simulated for 1.0 μs on Anton. One structure, b2.1, dissociated, while another structure, b2.3, showed a separation of the helices, but contacts involving both the N- and C-terminal regions (the added residues) still held the dimer loosely together. In order to verify the convergence of the simulations, plots of the evolution of the geometric parameters (RMSD to starting structure, helix crossing and rotation angles) were carefully examined for drift. While some of the structures fluctuate, drift was only apparent for crossing angles in plexin-B1 model 1, b1.1, and this simulation was continued to 2.0 μs (Fig. 2).
Fig 2. RMSD (top), crossing (middle) and helix rotation angles (bottom panel), calculated as in [9]. Rotation angles for helix A in black, helix B in red.
The data (see also Methods and Table 2) suggest that 1 μs MD simulations are typically sufficient for the refinement. However, slower reversible changes are seen in simulation b1.1, which was continued to 2 μs. Standard deviations of the RMSD and geometric parameters over the last 250 ns of the simulations were used to confirm equilibration.
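The equilibration criterion described above (small standard deviations over the final 250 ns, no residual drift) can be sketched as a simple window test. The 0.5 Å threshold and the two-window drift comparison below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def equilibrated(time_ns, values, window_ns=250.0, max_std=0.5):
    """Window test for equilibration of a trajectory observable
    (e.g. RMSD in Angstrom): the standard deviation over the final
    window must stay below a threshold, and the final-window mean
    must not drift away from the preceding window's mean."""
    t, v = np.asarray(time_ns), np.asarray(values)
    last = v[t >= t[-1] - window_ns]
    prev = v[(t >= t[-1] - 2 * window_ns) & (t < t[-1] - window_ns)]
    drift = abs(last.mean() - prev.mean())
    return bool(last.std() < max_std and drift < max_std)
```

A flat, noisy RMSD trace passes this test, while a steadily rising trace fails on the drift term even if its instantaneous scatter is small.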
Geometric parameters of the final structures are given in Table 2, including the RMSD from the initial structures. Again, in correspondence with the geometric parameters, RMSD values are between 3.0-4.4 Å, except for b1.7, b2.1 (which dissociated), and for d1.1 and d1.2. RMSD values of less than 4.5 Å suggest that the structures are similar to those predicted ab initio; however, there are slight adjustments in helix rotational angles (typically less than ±45°). Importantly, the relationship between the different plexin subfamilies identified in the ab initio predicted structures is largely preserved in the final, all-atom equilibrated structures of the helix dimers with N- and C-terminal extensions. Fig. 3a gives the scaled pairwise RMSD values between the initial 13 plexin structures chosen for further refinement. Fig. 3b gives the RMSD values between the final 13 plexin structures, reflecting that only b2.1 (which dissociates) changes with respect to the others. Looking at some of the simulations in greater detail, Figs. 2 and 4 show results for the MD refinement of the plexin-B1 models 1-3. In Fig. 2 the geometric parameters are plotted as a function of simulation time, illustrating that there are only a few significant configurational fluctuations in crossing (Fig. 2b, middle panel) and relative rotation angles (Fig. 2a, lower panel). By 1.0 μs (and in the case of b1.1 by 2.0 μs) the simulations are rather well converged, in that the changes appear complete, giving us confidence that this time is generally sufficient to equilibrate the structures. In Fig. 4 the final structures for b1.1, b1.2 and b1.3 are shown, which include both clockwise/right-handed (b1.1 and b1.2) and anti-clockwise/left-handed (b1.3) helix dimer structures. Here, the helices interact via two alternate sets of GxxxG-like motifs. The details of interactions stabilizing the structures, the extent of the observed fluctuations over the last 250 ns of the simulations, and the likely functional consequences are discussed below.
Model for the plexin-B1 TM trimer
The intracellular region of plexin-B1 has been crystallized in a trimeric state when bound to the small GTPase Rac1 [22]. It is important to test which configuration of the TM region would be compatible with a trimeric structure. Two TM trimer models, a left-hand/clockwise and a right-hand/anti-clockwise arrangement, were built and equilibrated for 1.0 μs on Anton. The initial and final structures are shown in Fig. 5 (and Fig. A in S1 File). Changes in the rotation angle for both clockwise and anti-clockwise helix trimer structures during the simulations are shown in Fig. B in S1 File. As can be seen, both the clockwise and anti-clockwise structures are stable during these extensive simulations. There is a larger initial rotation at the contacting interface for helix C in the clockwise structure (left panel of Fig. B in S1 File) and this helix continues to fluctuate. Similar fluctuations are seen in helix A of the anti-clockwise structure. Both structures have stable contacting interfaces, which are shown in Fig. 5. In particular, Thr19 and Ser20 from the plexin-B1 TM region make stable contacts in the trimers. The minimum distances between sidechain hydroxyls of Thr19/Ser20 located on neighboring helices are plotted in Fig. C in S1 File, showing, respectively, 1 and 2-3 relatively persistent Ser/Thr sidechain contacts (< 5.0 Å) in the clockwise and anti-clockwise structures.
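Contact persistence of the Thr19/Ser20 hydroxyls, as plotted in Fig. C in S1 File, amounts to counting frames in which the minimum inter-helix hydroxyl-oxygen distance falls below 5.0 Å. A minimal numpy sketch (the array layout is our assumption, not taken from the paper):

```python
import numpy as np

def contact_persistence(oh_a, oh_b, cutoff=5.0):
    """Fraction of frames with any hydroxyl-oxygen pair on the two
    helices closer than `cutoff` (Angstrom), plus the per-frame
    minimum distance. oh_a, oh_b: (n_frames, n_atoms, 3) arrays."""
    diff = oh_a[:, :, None, :] - oh_b[:, None, :, :]   # (frames, na, nb, 3)
    dmin = np.sqrt((diff ** 2).sum(axis=-1)).min(axis=(1, 2))
    return (dmin < cutoff).mean(), dmin
```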
Model for the plexin-B1 TM-JM helix trimer
The juxta-membrane (JM) region, which connects the TM and intracellular domains, was predicted to form a trimeric coiled coil. The JM region was not visible in the X-ray structure of the trimer, but was inferred from it [22]. Most of this region was, however, seen in the X-ray structure of the plexin-B1 monomer [13]. Using the latter as a starting structure, we built an anti-clockwise coiled-coil JM trimer as described in the Methods section and equilibrated it for 1.0 μs on Anton. The MD-equilibrated clockwise or anti-clockwise TM trimer structures (described above) were then linked to this JM structure in several different ways: 1) as an extended connection in the case of the TM clockwise/JM anti-clockwise arrangement, which was then restrained to become helical; 2) a bulged-out, but otherwise irregular, connection for the TM anti-clockwise/JM anti-clockwise structure; and finally, 3) the same with connections via helical (bent) structures, resulting in a total of 3 models. After equilibrating these configurations for 20 ns, it was clear that the structure started from model 2) showed very significant deviations from a helical structure, and no further simulations were attempted. The structures started from models 1) and 3) were continued for 1.0 μs on Anton and equilibrated. The initial and final structures are shown partially in Fig. 6 and fully in Fig. 7a.
Does attachment of the JM region influence the configuration (and dynamics) of the TM regions? In order to address this question we calculated the RMSD values between the TM regions in the TM-only trimers, comparing initial and final structures, and the RMSD between the TM in the TM-only and in the TM-JM structures (Table 3). The results, also considering the fluctuations over the last 250 ns (Table C in S1 File), show that the model 2 (anti-clockwise) TM segments, and the whole TM-JM, deviate from their starting structures less than the clockwise structures do (even in the case of the TM-only simulations). Slightly less deviation is seen in the JM region of the TM-clockwise structure, compared to the anti-clockwise model. Joining the TM to the JM region reduces deviations in both TM-JM models, but especially in the anti-clockwise TM compared to this TM's initial structure.
For the comparison of the dynamics, Root Mean Squared Fluctuations (RMSF) and order parameters (S2) were calculated. RMSF is a measure of the deviation of atomic positions from the trajectory-average structure. S2 reflects the amplitude, here of NH bond fluctuations, on the ps-ns timescale. S2 can also be derived from NMR relaxation measurements; thus this parameter is useful for future comparisons. The results are shown in Fig. 7b and 7c. Comparing the mainchain fluctuations of the TM clockwise and anti-clockwise structures, the anti-clockwise structure is more stable for all three helices, especially in the TM region. Fluctuations are seen in one of the helices at the TM-JM junction. The results for the corresponding TM trimers are shown in Fig. D in S1 File. Except for the clockwise TM-JM structure (above), the overall extent of fluctuations is similar in the TM part of the TM-only and TM-JM trimer simulations.
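Both quantities can be estimated from an aligned trajectory with a few lines of numpy. The sketch below uses the standard simulation estimator S2 = 3/2 Σ⟨μaμb⟩² − 1/2 on unit N-H bond vectors; it is illustrative and not the authors' analysis code:

```python
import numpy as np

def order_parameter_s2(nh_vectors):
    """Generalized order parameter S2 from a time series of N-H bond
    vectors (n_frames x 3): S2 = 3/2 * sum_ab <mu_a mu_b>^2 - 1/2,
    with mu the unit bond vector (a rigid bond gives S2 = 1)."""
    mu = nh_vectors / np.linalg.norm(nh_vectors, axis=1, keepdims=True)
    m = (mu[:, :, None] * mu[:, None, :]).mean(axis=0)  # <mu_a mu_b>
    return 1.5 * (m ** 2).sum() - 0.5

def rmsf(coords):
    """Per-atom RMSF from a (n_frames, n_atoms, 3) array of coordinates
    that have already been aligned to a common reference."""
    dev = coords - coords.mean(axis=0)
    return np.sqrt((dev ** 2).sum(axis=-1).mean(axis=0))
```

A completely rigid N-H vector gives S2 = 1, while fully isotropic motion drives S2 toward 0, matching the interpretation of the NMR-derived parameter.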
After considerable rotation of the helices during the initial model building of the TM-JM clockwise structure, both models show that the TM-region central Thr19 and Ser20 are localized in the interior of the 3-helix bundle (Fig. 6a, 6b). Several contacts between chains are relatively stable, as shown by the minimum distances between the Ser20 and Thr19 residue pairs (Fig. E in S1 File). Similar to the plexin-B1 TM-only trimer models, the anti-clockwise TM-JM structure is more stable during the simulations than the structures involving the clockwise TM.
The distances between the JM tail region (the last three residues) and the inner bilayer leaflet are shown in Fig. F in S1 File. The two plexin-B1 TM-JM trimers maintained a near-constant distance throughout the simulations. The C-terminus of the TM-JM anti-clockwise/anti-clockwise structure is farther away from the lipid, and thus is less influenced by interactions with lipids, and is more stable, not least because the helices are mostly regular, including at the TM-JM junction. The final structures are shown in Fig. 7a. Running the SOCKET program [23], which can identify and analyze coiled-coil motifs within protein structures, both of the plexin-B1 TM-JM helix trimer final structures are predicted to have coiled-coil packing. Such packing was not identified in the initial structures and developed during the all-atom MD simulations.
Plexin-C1 TM-JM helix dimer model
In the case of the plexin-C1 dimer [21], the crystal structure shows a coiled-coil-like JM dimer (see Fig. G in S1 File for discussion and further analysis). The resolved part of the JM region needs to be extended to the membrane. It was modeled here starting with the MD-refined N- and C-terminally extended TM models, c1.1 and c1.2, which were then linked to the JM coiled-coil-like X-ray structure. The TM-JM junction was modeled in order to connect the TM and JM structures with continuous helices. However, we observed that constraints during the modeling were imposed by the crossing angle of the TM region. These influence the packing of the junction, which is initially imperfect in one of the two helices in both structures (Fig. 8a). Again, RMSD comparisons revealed the effect of adding the JM region on TM structures and dynamics (Table 4). The results show that the model 1 (initially parallel) TM segments, and the whole TM-JM, deviate from the starting structures less than the model 2 (right-handed, RH) structures. Similarly, less deviation is seen in the JM region of the parallel structure, compared to the RH model, at the end of the trajectory. Surprisingly, joining the TM to the JM region increased deviations in both TM-JM models compared to their TMs' initial structures. Similarly, comparing the final TM structures (with and without JM) shows that the deviations are slightly smaller than between the TM-JM initial structures. Interestingly, the difference between TM-JM models 1 and 2 was reduced over the time course of the simulations, possibly due to the fact that the TM and JM regions of both models became more regular and better packed. We noted some of the issues of the final MD models c1.1 and c1.2 above; it appears that attaching the JM region fixed problems in the refined TM-only models.
In order to examine the extent of fluctuations across the TM-JM region, RMSF and S2 order parameters were calculated and are plotted in Fig. 8b and 8c, respectively. It is clear that although the helices in model 1 (built with c1.1) have a smaller crossing angle in the TM region (i.e. are more parallel), the fluctuations are greater than in model 2 (c1.2), which had them crossed at a greater angle. More specifically, in model 1 helix A has much larger structural fluctuations than helix B in both the TM and JM regions. At the same time, helix B was not continuous at first, but had a bulge (see S1 Movie). Remarkably, this bulge was fixed in the C-terminal JM region of helix B towards the end of the simulations. Although the bulge was likely responsible for some of the fluctuations, especially for the low S2 value (res. 38 of helix B), clearly there were other longer-range packing defects, also in the TM region and along the entire length of one side of helix A (seen as an oscillatory pattern). As a comparison, the RMSF and S2 results for TM-only dimers were also calculated (Fig. H in S1 File). On average the RMSF values are 1 Å less than for the TM region in the TM-JM structure. Thus, the parallel TM dimer arrangement is not as stable, even when switching to the RH structure, crossed near the C-terminus of the TM region.
The model with the larger TM crossing angle (Fig. 8a right, built on c1.2) led to a separation of the N-terminal part of the JM/greater crossing of the C-terminal part of the JM coiled-coil region, and to a bulging out of the helical connecting structure, resulting in a break of one of the helices. The extended simulation of this structure did not change this local distortion. Except for this break of one helix in the TM-JM junction (res. 30-32 of helix A), which showed increased dynamics in the S2 analysis, the overall extent of the fluctuations is smaller compared to model 1 (also see S2 Movie). A comparison with the TM-only simulations for this model showed much greater fluctuations in helix A, with an average RMSF of 4.0 Å for TM-only vs. 2.5 Å for TM in TM-JM. Helix B behaved similarly in both (RMSF of 2.5 Å). Thus, by contrast to model 1 above, model 2, with the larger RH TM crossing angle, experienced reduced fluctuations by being linked to the JM coiled-coil structure.
The distance between the C-terminal tail of the JM region and the inner bilayer leaflet was calculated for both models 1 and 2 (Fig. I in S1 File). The C-terminus of model 2 is on average 40 Å closer to the membrane compared to model 1, reflecting the difference in crossing angle. However, at around 370 ns, the JM region of the model 1 structure transiently came close to the lipid membrane (also shown in S1 Movie). Such interactions between the JM region and the lipid membrane can distort the JM helices and influence the structure. By comparison, no such large structural distortions were detected for TM-JM model 2 during the all-atom MD simulation (shown in S2 Movie). The structure appeared to be overall more rigidly anchored in the lipid bilayer and the JM regions pointed consistently away from the membrane.
Consistency of PREDDIMER and all-atom molecular dynamics
Experimental structure determination for TM protein segments lags far behind that for soluble protein domains [24,25]. Although recently there have been many structures solved for 7-TM receptors (GPCRs) and for other multi-transmembrane-spanning channel and pore proteins, no crystal structures are so far available for single-spanning TM proteins, such as plexin. The reasons for this are not clear, but it is possible that TM dimer and trimer structures are too flexible for crystallization/cannot be easily packed into a crystal lattice. Meanwhile, structures are available from other experimental techniques, chiefly from NMR, and structure predictions as well as MD simulations of TM proteins are becoming increasingly reliable (e.g. [3,4,5,6,26,27]). We previously reported results on using an ab initio prediction strategy for TM helix dimers that involves an implicit representation for the lipid bilayer and for water. While allowing extensive configurational sampling [9], this method did not work well in our hands and appears to require additional input, such as symmetry [28] or at least helix-stabilizing restraints [29]. Several recent publications show that coarse-grained simulations, and in one case computationally expensive 200 μs all-atom simulations, are able to obtain near-experimental TM helix dimer structures from randomly placed TM helix monomers [8,30,31]. Here we tested a different approach, starting from structures that are predicted on the basis of helix packing modes, as implemented in the webserver PREDDIMER [19]. Similar to our previous study [9], which was validated with reference to two known TM helix dimer structures, we examined the predictions over an extensive time period (1 μs) by all-atom MD simulations. For this project we selected the family of plexin TM receptors.
No experimental structures of the plexin TM regions are known, but inferences about dimer and trimer TM structures could be made from recent crystal structures of the intracellular regions.
Computational resources did not allow us to carry out all-atom simulations for all the well-packed dimer structures that are predicted for the 9 human plexins. Thus, we considered groups of structures. The amino acid sequences of the TM regions of plexins are moderately well conserved within subfamilies, as shown in Fig. 1b. These are the plexins A1 to A4, which are similar in sequence but have a range of small residues near the N-terminus (A1 predicted as the most flexible, A4 as the least). Plexin-B1 and -B3 are close in sequence, but -B2 is different, having only one GxxxG-like motif near the N-terminus. Plexin-C1 and -D1 are also significantly different, with plexin-C1 having no clear GxxxG sequence. Thus, rather than grouping by sequence similarity, we grouped the predicted structures by geometric considerations and pairwise RMSD between all 26 PREDDIMER structures calculated for the plexin family. Indeed, with the exception of plexin-B2, similar structures are predicted for members of the same subfamily, as displayed in Fig. 3a. A wide range of configurations of the two TM helices were predicted, with all 9 plexins showing some compatibility with parallel or crossed TM dimer arrangements (Fscor > 2.5, ranging from an Fscor of 3.4 for plexin-C1 to 2.6 for -A3 and -B3). There is no initial preference for left- over right-handed crossings or more parallel arrangements from the PREDDIMER calculations. After grouping, 13 models were simulated for 1 μs on Anton. While this amount of time is usually too short to sample transitions between alternative states, we find (with the exception of one case) that it is sufficient to allow an equilibration and thus a refinement of the ab initio models by use of the CHARMM27 forcefield [9]. If the structures are unstable, a separation or significant distortion of the helices is anticipated. Indeed, one model for the plexin-B2 TM dimer experiences such larger configurational changes (discussed below).
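Grouping by pairwise RMSD, as done here for the 26 PREDDIMER models, can be illustrated with a simple single-linkage scheme. The 4.5 Å cutoff below is an assumption for illustration only, and the paper additionally scaled RMSD values by residue similarity, which this sketch omits:

```python
import numpy as np

def group_models(rmsd_matrix, cutoff=4.5):
    """Single-linkage grouping of models from a symmetric pairwise
    RMSD matrix: i and j share a group label if they are connected by
    a chain of pairs with RMSD below the cutoff (Angstrom)."""
    n = len(rmsd_matrix)
    labels = np.arange(n)
    for i in range(n):
        for j in range(i + 1, n):
            if rmsd_matrix[i][j] < cutoff:
                # Merge j's group into i's group by relabeling all members
                labels[labels == labels[j]] = labels[i]
    return labels
```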
Examination of geometric parameters for the last 250 ns of the simulations shows that the simulations are equilibrated (i.e. RMSD < 0.5 Å for the central helix regions, crossing angles fluctuate within ±10°, and rotation angles within ±25°), as shown in Table D in S1 File. This is similar to deep minima seen in simulations begun with NMR-derived structures [9] or with TM dimers that associated in all-atom simulations [29]. In a few cases larger fluctuations were observed, but these represent more shallow minima, rather than conformational drifts; nevertheless, they indicate that those predicted structures could be less well defined and the TM dimers less stable (e.g. models b1.7, c1.1). As with all classical MD simulations, it is not possible to tell whether the structures are in a global energy minimum (e.g. in a 1000 μs simulation of BPTI, some of the fluctuations only became apparent on the 100+ μs timescale [32]), and sampling transitions between different models is beyond the scope of the present work. To overcome the problem of potentially rugged energy landscapes, we used a different strategy, which we believe is efficient: we first predicted a number of possible dimer conformations available for a given sequence with the PREDDIMER algorithm, and then tested their persistence in a realistic lipid environment by microsecond MD simulations.
On average, the features of the predicted structures are maintained in the all-atom MD simulations, suggesting that the PREDDIMER predictions are overall reliable. It is likely that a lowered packing score (Fscor) arises due to slight distortions of the helical structures in the MD simulations. Remarkably, the initial grouping is preserved when the structures are equilibrated in the extensive all-atom simulations (illustrated in terms of pairwise RMSDs, shown in Fig. 3a, 3b).
Biological Implication: A structural and functional diversity within the plexin family
Plexin-B2. The simulations suggest that not all of the plexin TM regions, by themselves, form strongly stable homodimers. Especially plexin-B2 is an outlier, showing dissociation/low packing scores in the all-atom simulations. Its TM region has little sequence similarity to those of plexin-B1 and -B3, and the refined structure is moderately similar to that of plexin-D1 (RMSD of 4.6 Å). Plexin-B2 is less likely to form regular TM helix structures, or such structures may dissociate, for a number of reasons. Apart from a relatively high number of β-branched sidechains (11 out of 22), which tend to be helix destabilizing [33], it has two prolines (res. 13 and 18) and no prominent GxxxG motif, except for SxxxP near the N-terminus. (The structure is also more extended, since there are only 22 residues in -B2, compared to 24 in the -B1/-B3 TM region.) Another feature is several bulky sidechains, the YCYW sequence, at the TM region C-terminus, which, as seen for plexin-C1 below, likely keeps helices apart. Plexin-B1 and especially plexin-B3 (only 6 β-branched sidechains) do not have such features. It would be tempting to infer that the unusual TM region of plexin-B2 influences the biological function and functional mechanisms of the receptor, which have been found to be considerably different compared to plexin-B1 and -B3 [34,35]. Other regions of the protein, for example the interacting region of the RBD domain with small GTPases, are also known to be substantially different [36]. Nevertheless, the RBD domain forms dimers in solution, and it is presumed that the extracellular ligand binding domain will also form dimers, at least when bound to semaphorin ligand. Thus, the energetics of the TM region may synergize with, or be over-ridden by, the oligomerization of the extra- and intracellular receptor regions.
Plexin-B1 (-B3). Alternative helix packing motifs have been described for the EGFR and Ephrin receptor TM regions [30,36]; however, these are non-overlapping GxxxG motifs near the N- and C-termini of the TM helices [37]. These motifs suggested a model of activation/inactivation due to a change in crossing angle. Recently, Zhang et al. [9] and others have described partially overlapping/offset GxxxG motifs that are compatible with a helix rotation to a different state; for ErbB1/B2 and EphA2 by approx. 120° on one of the helices. For plexin-B1, the offset in the GxxxG motifs is a shift of two rather than one residue, compared to the previous example. This results in an approximately 180° change of helix orientation, as shown in Fig. 4. The functional implications are illustrated below with the plexin-C1 TM-JM dimer. In plexin-B3 a similarly shifted motif is seen, but with GxxxG rather than AxxxGxxxG (with the N-terminal Ala being replaced by Glu in -B3). This change could destabilize such alternate configurations, or would at least disfavor a helix-parallel conformation as a possible intermediate. In the case of plexin-B1, none of the 5 configurations that were tested by simulations have very strong helix-helix packing, as reflected in Fscor values after equilibration; the structure of the intracellular trimer, as well as coiled-coil predictions for the JM region [38], suggests that a TM helical trimer is the more stable configuration, which is also indicated by the reduced fluctuations of the helices in the trimer (esp. anti-clockwise) compared to the dimer models.
Role of intra-membrane Ser/Thr in plexin-B1 trimers and in dimers
The TM central Thr-Ser motif is unique to plexin-B1. (Apart from isolated Ser or Thr in the TM central region of -A2 and -D1, other plexins do not have this motif. The Ser-Thr pair in the plexin-C1 TM sequence is at the N-terminus of the helix, but it does not form interactions in any of the plexin-C1 TM or TM-JM model structures.) In plexin-B1, no inter-helix Ser/Thr contacts are made in the 5 TM dimer structures that were examined by simulations (b1.1-b1.3, b1.6 and b1.7). Generally, placing polar sidechains in the hydrophobic environment of the lipid bilayer is unfavorable, but the Ser/Thr hydroxyl group can form a hydrogen bond to the carbonyl of the adjacent mainchain helix turn in TM helices. However, interhelical Ser-Thr sidechain contacts have also been observed in some structures (e.g. [39]). In plexin-B1 such contacts are persistent in both the TM-only and TM-JM models of the trimer over the course of the simulations, especially in the anti-clockwise structures. Mutating these residues may destabilize the trimer structure, compared to the dimeric forms, but trimers may still be stabilized by the JM regions, which are predicted by algorithms such as MultiCoil [40] for the B-family plexins [10,22].
Role of irregular structures: poly-Gly in plexin-As, helix shifts in plexin-D1 and -C1
The plexin-A family. Their TM regions are characterized by poly-glycine motifs in the membrane interior (see Fig. 1b): GGGGG in the case of -A1, GGG for -A3, and GG for -A2 and -A4. In the simulations of plexin-A1 we noticed a partial unfolding at this position, allowing a more extended structure (plexin-A family members have only 22 residues spanning the membrane, similar to plexin-B2 above). Alternatively, we saw for plexin-A1 that the helices were crossed at a significant angle with a different orientation of one or both of the N-terminal helix sections. The functional significance of this is not clear, except one may speculate that by themselves the structures would be more flexible and may need additional support for the transmission of cell signals across the membrane. Indeed, plexin-A family members are thought to require Neuropilin-1 as a co-receptor. Very recently, heterodimeric structures between plexin-A1 and Neuropilin-1 have been modeled using coarse-grained simulations [31], and there is also experimental evidence for such heterodimers [41]. Poly-glycines near the N-terminal TM segments have been noticed in other proteins and are thought to be involved in cholesterol binding [42]. In particular, such a possibility has been shown by the Sanders group for the amyloid precursor peptide [43]. It is not known whether cholesterol plays a role in the signaling behavior of plexin-A family members, especially in the regulation of plexin-A1 signaling.
Plexin-D1. As the sole member of the plexin-D family in humans, its structures appear to utilize an SxxxCS motif, with crossing either right- or left-handed near the second Ser (residue 16). The structures are relatively rigid, but are not symmetric/very regular, since the helices were shifted relative to one another in the membrane, consistent with helix dimer tilting as well as crossing. Apart from plexin-B1, -D1 also has the longest TM region, with 24 residues spanning the membrane. Again, the structure of plexin-D1 may be stabilized by TM-region interactions with co-receptors, with Neuropilin-1 as a prominent candidate. However, functionally, both neuropilin-dependent and -independent signaling mechanisms have been characterized in different settings [44].
Plexin-C1. The predicted structures for plexin-C1, the sole member of the C family, are considerably different from those of other plexins (e.g. see RMSD comparisons in Fig. 3a, 3b). The structures are not very regular: there is some helix bulging or unwinding at the N-terminus (c1.1 and c1.2, respectively) due to the sequence TWYF, comprising a β-branched residue and bulky aromatics whose large sidechains prevent the helices from coming close. However, in other cases it has been shown that single aromatics can stabilize TM helix dimers by stacking or cation-π interactions [45]. A TM central proline, furthermore, can introduce a kink in the plexin-C1 helices (c1.2). As reflected in the Fscor values, these structures, mostly in a near-parallel arrangement, are only moderately well packed, due to the absence of a GxxxG-like motif. Nevertheless, the fluctuations in the geometric parameters are modest for model 2 (larger for model 1, which, however, can be influenced by attachment of the intracellular membrane-proximal region).
Effect of JM region on TM structures and their dynamics: example of plexin-C1 and -B1 TM-JM models
A critical question is how the TM segments, being more or less rigid helices, connect with the extra- and intracellular protein domains outside the lipid bilayer. The structures of these regions are typically responsible for ligand/adaptor protein binding in cell signaling. Changes, for example ligand-induced dimerization, can be transmitted across the lipid bilayer most effectively if the junctions between the TM helix and the extra- and intracellular domains are relatively rigid. In this case there could be concerted large-scale changes in the orientation of the extra- and intracellular domains, with rigid connections to the TM segment facilitating the transmission of cellular signals as a mechanical event across the plasma membrane. For many TM receptors, such as the receptor tyrosine kinase superfamily, the connection between the TM and catalytic/kinase domain is not immediate; rather, a JM segment presents a bridging region between the two domains, often also involved in a regulatory function (e.g. [30]).
In the case of plexins, the JM regions show a relatively well-conserved leucine-zipper/heptad repeat characteristic for coiled-coils, as shown in Fig. 1b. Recently the crystal structure of plexin-C1 has been determined which, indeed, shows part of the JM region as a loose coiled-coil (see additional comments in Fig. G in S1 File). We, therefore, used the X-ray resolved part of the JM region of plexin-C1 and then sought to model the linker region to the two top models of the plexin-C1 TM region (one is nearly parallel and the other right-hand crossing, displayed in Fig. 8a). Similarly, a coiled-coil trimer JM structure was modeled and attached to several models of the plexin-B1 TM trimer. The dynamics of the TM dimers and trimers were largely unchanged by attachment of the JM region over the course of the all-atom MD simulations. In fact, the TM structures allowed the JM structures to become more regular. Using the SOCKET program [31] to analyze the packing of the JM region (with a cutoff of 7.0 Å), we found coiled-coil structures in both of the JM regions of the final plexin-B1 structures after 1.0 μs of MD. While for the plexin-C1 TM-JM dimers the SOCKET program could not detect a coiled-coil structure in the JM region (even using a larger cutoff of 8.5 Å), the JM region nevertheless has a remarkable effect of regularizing the helices and helix contacts in the plexin-C1 TM region of the RH crossed structures. These different scenarios illustrate that equilibrating multidomain structures, such as these TM-JM regions, with long-term all-atom MD simulations using the CHARMM forcefield yields different effects of the structures on each other (different levels of cooperative or competitive interactions). These features reflect different mechanical properties and thus suggest different cell signaling mechanisms for the systems.
Implications for the mechanism of signal transduction of plexins
Analysis of the flexibility of the TM-JM structures suggests that these are relatively rigid and are thus likely to help orient the intracellular domains of plexin relative to each other and the membrane. This is also apparent when distances between parts of the structures are examined.
Firstly, one may consider the lateral distances and the rotation between the N- and C-termini of the TM helices as they enter/exit the membrane. In the case of the TM trimer or largely parallel dimers, a modest amount of bulging would occur if unlike structures are connected. Nevertheless, the relative rotation of the TM (and especially JM) helices is similar in the plexin-B1 TM-JM trimer structures, suggesting that, by contrast to the plexin-B1 dimers, the signaling mechanism would involve formation/dissociation of the TM-JM trimers, possibly via dimer intermediates. In the case of the two plexin-C1 TM-JM structures, the TM regions differ more dramatically; in the parallel/slightly RH structure, the helices cross near Val21, with both sidechains at the interface. The bulged RH crossed structure, by contrast, has the bulged helix rotated by ~180°, forming a back-to-back packing arrangement. In the JM coiled-coil, not only is the crossing angle different, but the bulged helix is also translated about one turn upwards and rotated by ~90°, relative to the parallel structure. This difference suggests a piston-like mechanism for TM signaling, depicted in Fig. 1a, together with a rotation and separation of the intracellular plexin-C1 domains in the bulged/RH crossed structure. Overall, considering the number of contacts, the near-parallel structure could be the more stable one by itself. However, larger lateral differences occur between parallel and crossed structures, as illustrated by plexin-C1. Here the distances between the TM-JM junctions of model1 and model2 are 11 Å and 23 Å, respectively; 23 Å is sufficient for 4.3 helical turns in an antiparallel helix coiled-coil arrangement for the JM region, similar to the example of EGFR [30]. Generally, connecting like structures (e.g. RH crossed helices with clockwise coiled-coils) results in more close-packed and less bulged structures than connecting unlike TM-JM structures, as illustrated in Fig. 9.
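The "4.3 helical turns in 23 Å" figure above can be checked with canonical α-helix geometry (a rise of ~1.5 Å per residue and 3.6 residues per turn; these standard values are assumed here, not stated in the text):

```python
# Canonical alpha-helix geometry (assumed standard values)
RISE_PER_RESIDUE = 1.5   # Å of axial rise per residue
RESIDUES_PER_TURN = 3.6

def helical_turns(length_angstrom):
    """Number of alpha-helical turns spanned by a given axial length."""
    rise_per_turn = RISE_PER_RESIDUE * RESIDUES_PER_TURN  # 5.4 Å per turn
    return length_angstrom / rise_per_turn

# 23 Å between the TM-JM junctions of model1 and model2
print(round(helical_turns(23.0), 1))  # -> 4.3
```

This reproduces the 4.3 turns quoted in the text for the 23 Å separation.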
Secondly, another consequence of the different TM-JM arrangements is the vertical distance between the membrane and the C-terminus of the JM region; in the results above, the refined models show differences in this distance of up to 40 Å (see Fig. I in S1 File, top panel). Membrane proximity is also known to be an important regulatory feature in another single-TM-helix receptor family, the EGFR receptors (e.g. ref. [46]). In the case of plexins, the primary binding partners are membrane-anchored Rho and Ras GTPases, which associate with the RBD and GAP domains. The latter domains are also likely to influence plexin's conformation near the membrane. This is the case for the dimer and trimer structures; especially the latter are expected to have more prescribed distances due to a locking of the three plexin units and GTPases into a ring-type configuration [22]. Thus the orientations of the TM and JM segments, as well as the distances relative to the membrane, are likely to be important. For example, with an extended TM-JM connecting segment (as shown in Fig. 7a, left initial structure) there would be significant space between the plexin GAP domains with bound GTPases and the membrane, whereas with continuous helix linkages (especially with unlike structures, requiring bulged helices) this distance could be too short, at least for the binding of GTPases.
Summary and perspective
Our computational study comprehensively examined, for the first time, the TM region of the plexin receptor family. Predictions were made using TM helix packing and were then tested/refined for the peptides in explicit solvent and lipid bilayer using all-atom simulations. We predict that the plexin family has diverse and alternate TM helix configurations and that intracellular JM coiled-coils likely synergize with the TM structures to create relatively rigid assemblies. These, in turn, may be utilized for the regulation of plexin function. Guided by these models, experimental studies are now needed to further validate these predictions.
Structure prediction
For the predictions of the initial TM helix dimer structures for the 9 human plexins (sequences shown in Fig. 1b), we used the webserver PREDDIMER [19], based upon the original algorithm that had been systematically benchmarked [18]. It should be noted that the server works only with the membrane-embedded segment; N- and C-terminal extensions were added later (Fig. 1b, 1c). The membrane region limits of the sequence were defined by amino acid hydrophobicity as well as by sequence alignment. As extensions, 5-8 and 9-10 amino acids were added to the N- and C-termini, respectively, of the 22-24 residue long TM segment.
Currently, no automated protocols are available for building TM trimer structures, and we followed the procedure described elsewhere [47]. The models for a plexin-B1 TM trimer structure used either a left- or a right-handed crossed TM dimer, and the third helix (a copy of helix 1) was manually docked to give clockwise and anti-clockwise TM trimers, respectively. The structures were then equilibrated by molecular dynamics (MD) simulations, followed by 1.0 μs production runs on Anton.
In order to build a TM-JM helix dimer, we used the two PREDDIMER-predicted and Anton-refined TM structures for human plexin-C1, c1.1 and c1.2, and fused these to part of the JM region (see Fig. 1b for sequence). Residues Q553 to T584 were taken from the recently determined zebrafish plexin-C1 dimeric structure (PDB ID: 4M8M) [21]. The missing residue sidechains, the residues at the TM-JM junction, as well as the sequence differences from the human protein, were rebuilt using MODELLER [48]. For the TM-JM trimer of plexin-B1, no structure for the trimeric JM region is available, but a large part of the JM region is seen bound to the plexin-B1 monomer (PDB ID: 3HM6, ref. [13]). We took this structure and, with reference to the plexin-C1 dimer, built sidechains that would be compatible with JM-GAP domain interactions. This suggested the orientation of the JM helices, which were then manually docked as a trimer and refined by simulation on Anton to 1 μs. The initial structure after connecting the TM and JM regions is shown as an example for the TM-JM plexin-B1 trimer as (clock, init) in Fig. 7a. In order to make the connection region helical, we ran short CHARMM simulations to relax the structure. The TM and JM regions were moved closer together, to a distance that corresponds to the number of linking residues in a helical conformation. Then, we rebuilt the connection using MODELLER with a restraint that forced the connecting region to be helical. This turned nearly all of the linker region into a helical structure. This structure, shown as (clock, init2) in Fig. 7a, as well as (anti-clock, init) for the anti-clockwise TM-JM plexin-B1 trimer, was then embedded in the lipid bilayer/solvent system and used for the simulations. We built TM-JM structures with the linking segment both in an extended configuration and in a helical conformation.
In total, three models were continued with simulation refinement, since the fourth one showed a large bulge in the TM-JM connecting region. After insertion into explicit lipids and solvation, the peptides were equilibrated for around 20 ns using CHARMM. Based on deviations in the structures, we decided to continue with two TM trimer models for plexin-B1, one with clockwise/right-handed crossing and the other with anti-clockwise/left-handed crossing, as well as an almost parallel helix dimer model for plexin-C1, which has a large crossing angle near the TM N- and JM C-termini, as shown in Fig. 8a.
MD simulations
The predictions were further refined/tested by all-atom MD simulations in explicit solvent and a palmitoyloleoyl-phosphatidylcholine (POPC) lipid environment, following our previous work [9]. Briefly, the TM helix dimers to be refined were selected based on the packing score of the predicted structures (Fscor > 2.5) and by a desire to have diversity in TM structures (see Table A in S1 File). The 13 structures selected were used as PREDDIMER output coordinates (see Tables 1 and 2) for one set of simulations; for a second set, these TM helices were extended by several residues at the N- and C-termini using MODELLER [48]. The polypeptide chain, the lipids and the water molecules were simulated using the CHARMM36 [49] and TIP3P parameter sets. Structures were inserted into explicit POPC lipid bilayers (72 lipids/leaflet) using CHARMM-GUI [50] at a target area per lipid of 64.3 Å2 [51]. TIP3P water molecules were then added using CHARMM-GUI to yield a thickness (uniform above and below the bilayer) of no less than 15 Å from the farthest protein atom. Sodium and chloride ions were added to neutralize the systems and further added to give a near-physiological ion concentration of 150 mM. Equilibration trajectories of the 2x13 systems were then generated in the NPAT ensemble: constant particle number N, a normal pressure of 1 atm, the constant total surface area obtained from CHARMM-GUI [50], and a temperature of 310 K. Periodic boundary conditions were applied, and electrostatic interactions utilized Particle Mesh Ewald with a real-space cutoff of 12 Å. The same cutoff was used for the Lennard-Jones interactions. The SHAKE algorithm was applied to constrain the lengths of all bonds involving hydrogens, and the integration time step was 2 fs. Using the CHARMM-GUI program, all systems were initially equilibrated for 300 ps before a further equilibration of at least 20 ns using CHARMM [52]. Production runs were then carried out to 1.0 μs on Anton.
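The salt bookkeeping described above (neutralize the system first, then add Na+/Cl- pairs to reach ~150 mM) can be sketched roughly as follows. This is an illustrative estimate based on the ~55.5 M molarity of water, not the exact procedure used by CHARMM-GUI; the function name and example numbers are hypothetical.

```python
def salt_ion_counts(n_waters, net_protein_charge, conc_molar=0.150):
    """Estimate Na+/Cl- counts to neutralize the system and reach a
    target salt concentration, using the ~55.5 M molarity of water.
    Illustrative only; CHARMM-GUI performs a similar estimate.
    """
    n_pairs = round(conc_molar * n_waters / 55.5)
    n_na = n_pairs + max(0, -net_protein_charge)  # extra cations for a net negative solute
    n_cl = n_pairs + max(0, net_protein_charge)   # extra anions for a net positive solute
    return n_na, n_cl

# Example: ~20000 waters around a peptide with net charge +2
print(salt_ion_counts(20000, +2))  # -> (54, 56)
```

The asymmetry between cation and anion counts is what keeps the full periodic box electrically neutral, a requirement for Particle Mesh Ewald electrostatics.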
Structure analysis
Root-mean-square-deviation (RMSD) values between helix configurations within and between plexin subfamilies were calculated in Pymol [53] and scaled according to the number of identical residues. Only the central region of the TM helices (typically 14 residues, see below), which is always embedded inside the lipid membrane, was included in the RMSD calculation [9]. Specifically, the RMSD obtained from the alignment was scaled by the ratio between the total number N of backbone (bb) atoms in both dimers and the number of atoms Na used for the alignment: scaled RMSD = RMSD(alignment) × N/Na. The initial RMSD values were taken in Pymol, ignoring the option to eliminate outlying atoms. Helix crossing angles were calculated as described in [9] using the CHARMM program. For the rotation angle calculations over the course of the simulation trajectories, we used the same method as [9] but calculated rotation angle data for individual residues rather than for the whole helices. This sets the rotation angle of the starting structure to zero and then evaluates the average, but time-specific, rotation of the entire helices relative to this configuration. For the rotation angles shown in the tables in the main text and in the supplemental materials, a single structure was used as input to calculate the rotation angle. For helix rotation, residue 4 was chosen, about one turn of the helix into the membrane (a position that is represented in the 4 plexin-A family members, in -B1, and in -B3 by either Gly, Ser or Gln), as well as residue 11, two further turns along (almost at the center of the 22-24 residue TM segments). (Unless the helices are distorted, these positions should be in alignment.) The rotation angle of the 4th and, separately, of the 11th residue from the N-terminus of the TM region relative to the vector of closest initial helix approach is calculated. For convenience, all TM (and TM-JM) residues have been renumbered to start at 1 (numbering from the first position of the N-terminally extended sequence).
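The backbone-atom scaling of the alignment RMSD described above can be sketched as follows (assuming, as the text's ratio suggests, that the aligned RMSD is multiplied by N/Na; the function name and example numbers are illustrative, not from the study):

```python
def scaled_rmsd(rmsd_aligned, n_total_bb, n_aligned):
    """Scale a Pymol-style alignment RMSD by the fraction of backbone
    atoms actually used in the fit.
    rmsd_aligned : RMSD over the n_aligned atoms kept by the alignment
    n_total_bb   : total number N of backbone atoms in both dimers
    n_aligned    : number of atoms Na used for the alignment
    """
    return rmsd_aligned * n_total_bb / n_aligned

# Example: a 1.8 Å fit that used 100 of 112 backbone atoms
print(round(scaled_rmsd(1.8, 112, 100), 2))  # -> 2.02
```

The scaling penalizes alignments in which only a subset of atoms matched, so that RMSD values between structure pairs with different numbers of identical residues remain comparable.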
RMSDs from the starting structures and geometric parameters were evaluated by visual inspection for drift and calculated over the last 250 ns of the simulations; averages and standard deviations are for the central region of the helices (typically res. 11-25) as given in Table C-E in S1 File. Models and simulations are referred to by plexin (e.g. A1) and then by model number, for example model1, to give a1.1.
Root-mean-square-fluctuation (RMSF) values were calculated using CHARMM, considering both the mainchain and sidechain fluctuations. S2 values for mainchain NH groups were calculated using the same method as in previous work [54] with CHARMM, based on the μs trajectories, but with a cut-off of 10 ns so that eventually the results may be compared to solution NMR measurements in micelles or bicelles.
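The mainchain NH S2 calculation can be illustrated with the standard Lipari-Szabo generalized order parameter; this is a generic numpy sketch over one trajectory window, not the CHARMM script used in the study:

```python
import numpy as np

def order_parameter_s2(nh_vectors):
    """Lipari-Szabo generalized order parameter S2 for one NH bond,
    computed from bond vectors sampled along a trajectory window
    (e.g. a 10 ns block, as the cut-off in the text suggests).
    nh_vectors : (n_frames, 3) array of N-H vectors
    """
    mu = nh_vectors / np.linalg.norm(nh_vectors, axis=1, keepdims=True)
    # S2 = 1/2 * (3 * sum_ij <mu_i mu_j>^2 - 1)
    corr = mu[:, :, None] * mu[:, None, :]  # outer products, one per frame
    avg = corr.mean(axis=0)                 # time average <mu_i mu_j>
    return 0.5 * (3.0 * np.sum(avg ** 2) - 1.0)

# A rigid bond (no reorientation) gives S2 = 1
rigid = np.tile([0.0, 0.0, 1.0], (100, 1))
print(round(order_parameter_s2(rigid), 3))  # -> 1.0
```

Values near 1 indicate a rigid NH vector, while increasing internal motion drives S2 toward 0, which is why per-residue S2 profiles track local helix flexibility.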
The distance between the JM tail and the membrane was also calculated using CHARMM, as the closest distance from the center of mass of the last 3 residues in the JM domain to the nearest heavy lipid head-group atom in the lower leaflet.
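This tail-to-membrane distance is a simple geometric quantity; a minimal numpy sketch (with uniform masses assumed for the center of mass, unlike a true mass-weighted COM) might look like:

```python
import numpy as np

def tail_membrane_distance(tail_coords, lipid_head_coords):
    """Closest distance (Å) from the center of the last JM residues to
    any lipid head-group heavy atom in the lower leaflet.
    tail_coords       : (n_atoms, 3) coordinates of the tail residues
    lipid_head_coords : (n_atoms, 3) head-group heavy-atom coordinates
    Uniform atomic masses are assumed here for simplicity.
    """
    com = tail_coords.mean(axis=0)
    dists = np.linalg.norm(lipid_head_coords - com, axis=1)
    return dists.min()

# Toy example: tail centered at the origin, nearest head-group atom 5 Å below
tail = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
heads = np.array([[0.0, 0.0, -5.0], [0.0, 0.0, -8.0]])
print(tail_membrane_distance(tail, heads))  # -> 5.0
```

Tracking this quantity along the trajectory is what produces membrane-proximity plots like Fig. F and Fig. I in S1 File.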
Supporting Information
S1 File.
Fig. A. Final structures for the plexin-B1 TM trimer in clockwise orientation (Left) and anti-clockwise orientation (Right) after 1 μs MD simulation.
Fig. B. MD fluctuation of the rotation angle of the two plexin-B1 TM trimer models. a) clockwise and b) anti-clockwise helix trimer.
Fig. C. Minimum distances between OG/OG1 atoms on Thr/Ser residues on neighboring helices in 1 μs MD simulations with the initial plexin-B1 TM trimer structure started from the clockwise orientation (Left) and anti-clockwise orientation (Right). Helices A and C (red) form contacts during most of the simulation time in the clockwise orientation, while helices A and B (black), helices A and C (red) and helices B and C (green) form contacts in the anti-clockwise orientation. A more detailed analysis (not shown) reveals that in the clockwise structure, between the A- and C-helices, there is one close (<3.5 Å) Ser-Thr contact and two longer-range Thr-Thr and Ser-Ser contacts (7-8 Å). No interactions are seen in the other helix pairs. By contrast, in the anti-clockwise TM, there are close Thr-Thr and Thr-Ser contacts between the B and C helices, as well as a Thr-Ser contact between helices A and B. Between helices A-C and A-B there are longer-range Ser-Thr and Thr-Thr contacts (~6-7 Å).
Fig. D. RMSF and <S2> of plexin-B1 TM trimers. a) RMSF and b) <S2> of plexin-B1 TM trimers as a function of sequence for clockwise orientation (Left) and anti-clockwise orientation (Right) trimers. Data for helix A in black circles, helix B in red squares, and helix C in blue diamonds.
Fig. E. Minimum distances between OG/OG1 atoms on Thr/Ser residues from neighboring helices for the plexin-B1 TM-JM trimer in clockwise direction (Left) and anti-clockwise direction (Right). A more detailed analysis (not shown) reveals that in the case of the clockwise TM refined model, there are two Ser-Ser contacts (3-5 Å; A-C and B-C) and one distant contact (~7 Å; A-B). Only one Thr-Ser contact is close (A to C). In the refined anti-clockwise model there are one Ser-Ser (A-B at ~3.5 Å) and one Thr-Ser (A-C at ~3.5 Å) contact, plus 5 longer-range Thr-Ser/Thr-Thr contacts at approximately 6 Å.
Fig. F. Distances between the C-terminal region of the JM trimer and the inner leaflet of the POPC lipid bilayer for the plexin-B1 TM-JM trimer in TM clockwise direction (Top) and TM anti-clockwise direction (Bottom).
Fig. G. Structure comparison of the plexin-C1 TM dimer. Left) X-ray structure of zebrafish plexin-C1 with the GCN4 coiled-coil region that was added in order to crystallize this dimer (red/pink) (PDB ID: 4M8M [21]). The JM helices (green and cyan) are not strongly in contact and the coiling direction is clockwise, whereas the great majority of coiled-coil structures have an anti-clockwise twist [1,2]. Indeed, the sequence that was attached N-terminally to dimerize the plexin is derived from the GCN4 leucine zipper and shows anti-clockwise coiling. The observation that the native sequence is less strongly packed and has a slight clockwise twist suggests that the plexin JM region may not form a classical coiled-coil. Right) model of left-handed TM dimer (grey) and JM (yellow) helices with an irregular/extended junction.
Fig. H. RMSF and <S2> of plexin-C1 TM dimers. a) RMSF and b) <S2> of plexin-C1 TM dimers as a function of sequence for the LH model dimer (Left) and RH model dimer (Right). Data for helix A in black circles, helix B in red squares.
Fig. I. Average distances between the C-terminal tails (C-alpha of the three C-terminal residues) of the JM regions and the inner leaflet of the POPC lipid bilayer for the plexin-C1 TM-JM dimer in model1/LH (Top) and model2/RH (Bottom) structures.
Table A. Full table of PREDDIMER predictions with Fscore > 2.5.
Table B. Scaled RMSD between the central regions of initial TM structures from PREDDIMER. The structures with identifiers in red belong to the group of 13 selected for further study. The remaining structures (black) are within an RMSD < 3.5 Å of those selected, as shown.
Table C. RMSD, crossing angle, and rotation angle of helices for plexin-B1 TM trimer model1, TMtrimer1+JMmodel1, TM trimer model2 and TMtrimer2+JMmodel2 after MD simulations.
Table D. RMSD, crossing angle, and rotation angles of helices for TM+extension dimers after long-term simulations.
Table E. RMSD, crossing angle and rotation angle for plexin-C1 TM dimers and -C1 TM+JM dimers after MD simulations.
(DOC)
S1 Movie. The plexin-C1 TM+JM dimer structure showed a structural distortion caused by its interaction with lipids during the long-term MD simulations, with the initial structure starting from model1 as shown in Fig. 8, Left.
(MPG)
S2 Movie. The plexin-C1 TM+JM dimer structure during the long-term MD simulations, with the initial structure starting from model2 as shown in Fig. 8, Right. (MPG)
In Their Own Words: The Health and Sexuality of Immigrant Women with Infibulation Living in Switzerland
Female genital mutilation (FGM) is a significant public health problem. It is estimated that around 14,700 women affected by FGM live in Switzerland, primarily among women with a history of migration. Our qualitative research investigated the sexual health of immigrant women living with FGM in Switzerland, describing their own perception of health, reproductive life and sexuality. We conducted semi-structured, in-depth interviews with a group of eight immigrant women of sub-Saharan origin living in Switzerland with Type III FGM (infibulation). Seven of the women were from Somalia and one was from the Ivory Coast. All of the Somali women were mothers and married (two separated), and the Ivorian woman was a single mother. The women in our study reported a low level of sexual satisfaction and reproductive health. They affirmed their desire to improve, or at least change, their condition. Although they rarely talk with their husbands about sexual subject matter, they would like to include them more and improve dialogue. Specific socio-sexual management is recommended when caring for immigrant women living with FGM in order to respond to their specific health care needs. Multidisciplinary approaches may be able to offer more comprehensive health care, including facilitated communication to improve dialogue between women and health care professionals, and eventually between women and their husbands in discussing sexual subject matter.
Introduction
Female genital mutilation (FGM) is a global public health concern and is perceived as a form of violence against women and girls. The World Health Organization (WHO) defines FGM as procedures that "intentionally alter or cause injury to the female genital organs for non-medical reasons" [1]. FGM can involve cutting of the clitoris, labia minora and majora, and infibulation (narrowing of the vaginal orifice). Researchers have identified several short-, middle- and long-term health consequences of FGM for women and girls, such as pelvic infection, excessive bleeding, difficulty urinating and pain [2,3]. Female genital mutilation is a traditional practice embedded in the education of the child [4] and in the intergenerational transmission of a gender model [5], which is based on a strict separation between the sexes' roles in society [4]. Furthermore, FGM is also considered an ethnic marker aimed at preserving the ethnic identity of the group [6].
It is estimated that up to 125 million women and girls worldwide have undergone some type of FGM [7]. According to UNICEF, the countries in which the prevalence of FGM among women and girls exceeds 80% are Somalia, Guinea, Djibouti, Egypt, Eritrea, Mali, Sierra Leone and Sudan [8]. In Switzerland, FGM affects approximately 14,700 women according to data from the Federal Service of Public Health [9]. Women with FGM living in Switzerland are primarily from Eritrea, Somalia and Ethiopia. Most of these women belong to vulnerable populations, due to their recent arrival in Switzerland and additional forms of insecurity (e.g., unemployment and financial instability, lack of health insurance, and lack of a residence permit).
In response to growing concern about FGM in many European countries, new guidelines were established in 2000 to provide a common treatment protocol with specific medical recommendations [10]. Surgical procedures to restore normal physiology have been developed and are practiced in France [11], Belgium, The Netherlands and Sweden. In Switzerland and elsewhere, however, reconstructive surgery is not the primary treatment. Due to a lack of robust evidence on the efficacy of reconstructive surgery [12,13], the management of women with FGM has focused on addressing obstetrical complications and the psychological impact of FGM. Sexuality is rarely taken into account [14]. In some countries, multidisciplinary approaches are being initiated, with healthcare teams including physicians, nurses, psychologists and sexologists [15,16]. In France, this approach is mostly connected to the reconstructive dimension (surgical and psychosexual) [17]. Along with other countries, Switzerland has developed a clinical protocol to aid women with FGM [18], which seeks to integrate psychosexual therapy rather than pursuing reconstructive surgery.
In the last three years, Switzerland has implemented research and other actions to address the impact of FGM among women within the country. These efforts include national public health reports to map existing services within Switzerland and studies approaching the issue from the perspective of health care professionals [19], to inform management needs [20] or make recommendations for health care [21]. In this article, we present data concerning the sexual and reproductive lives of women with FGM, from their own point of view, using a qualitative approach. Health and sexual dimensions are here understood as sociological objects and are not interpreted exclusively through medical categories. This approach allows us to propose a larger, interdisciplinary reflection. The aim of this study is to provide clinicians and policymakers with useful information to improve follow-up and health care services that assist women living in Switzerland with this type of FGM.
Participants
This qualitative research was conducted between July and December 2011 at a university hospital in Switzerland. We recruited women of sub-Saharan origin living in Switzerland, between 18 and 45 years old, with or without children, who had undergone a specific type of FGM (infibulation) and were visiting the Department of Gynaecology and Obstetrics. We approached a total of 20 women, and 10 agreed to participate in the study. Women were approached by midwives or gynaecologists, and word of mouth was used to recruit. All of the participants were at the hospital for a postnatal follow-up consultation. Two women withdrew consent just prior to participation due to time constraints, mentioning a "hold-up" and a "lack of time". Half of the group of recruited women refused to participate, for three reasons: the topic was considered to be too sensitive after delivery (two women); the baby was too little to travel with (three women); or they did not wish to undergo the interview with the baby present (five women). Some of them expressed the wish to postpone the interview to a later date.
Data Collection
We conducted semi-directed in-depth interviews using a structured questionnaire (including both open-ended and directed questions) [22]. The investigator filled out the questionnaire based on the participants' responses and asked open-ended questions to promote the disclosure of the subjective narrative [23]. Questions were the same for each respondent and were asked in the same order. Given the linguistic and cultural barriers between the researchers and the participants, as well as the sensitive nature of the subject matter, we worked in partnership with specially trained health care interpreters. These interpreters were all women (to avoid problems with gender bias during translation) and were also intercultural mediators previously trained on issues relating to FGM [24,25]. The hospital was chosen as the place to conduct the interviews in order to ensure privacy and confidentiality. All interviews were recorded and completely transcribed by the researcher (MV).
The research protocol (no. 314/2011) was approved by the Cantonal Ethics Committee for Research on Humans. Particular precautions were taken at the ethical level. All sensitive data (such as names, addresses, places, etc.) remained confidential and were anonymized. After the interview, the women were offered the possibility to talk with a sexual therapist or the gynaecologist if needed and if they had questions. The psychological unit was alerted in case its services were requested or needed by the women.
Measures
Using the updated WHO classification of the medical consequences of FGM [1], we asked the women to estimate, based on the somatic consequences described by the research team during the interview, their own health status and, ultimately, their experienced diseases. This article presents the somatic consequences of FGM as reported by the women themselves, based on a subjective self-evaluation rather than a specific medical diagnosis. This approach is based on the French survey "Excision and Handicap", which is particularly innovative because it uses the model of handicap and disability elaborated by the WHO [26].
Based on the categorization system proposed by the WHO [1], we separated the health consequences of FGM into three categories: (1) short-term medical consequences (e.g., pain, bleeding, urinary retention, infection and shock bound to the event) [27,28]; (2) long-term medical consequences (e.g., pelvic infections, infertility, menstrual difficulties and obstetric problems) [29,30]; and (3) sexual [31,32], mental and social consequences (e.g., change of sexual sensitivity, anxiety and depression) [33-35]. Furthermore, given the stress placed on the region of the body affected by infibulation, we considered childbirth a potentially painful experience [36]. Further, some note that the use of technical procedures, such as episiotomy or deinfibulation [37,38], could reactivate traumatic memories of FGM for some women [33]. Therefore, we asked the women to describe both their sexual experiences (first sexual intercourse and follow-up) and their childbirth experience(s). We also provided the participants with a list of possible immediate consequences after FGM, such as excessive bleeding, difficulty urinating, swelling of the genitals, pelvic infection and pain (lasting <48 h; between 48 h and 1 week; >1 week), and of potential mid-term or long-term consequences, such as dysmenorrhoea, locking of the vaginal opening, hindrance of normal blood flow, vaginal infection, keloids, urinary infection, pain between periods, cysts, fistulae, urinary incontinence and/or faecal incontinence.
Terminology
Given the variability and diversity within these traditional practices, defining terms and using precise language is essential. In some official documents, especially within the legal and medical domains in Europe and North America, the term "female genital mutilation/cutting (FGM/C)" [7] or "female sexual mutilation" is used for these practices [9]. However, the terms used may vary based on the ethnic membership or geographical areas in which they are performed or studied [6]. Our study primarily focuses on women from East Africa who have undergone so-called Type III FGM, commonly called "infibulation" or, in the local Somali language, "sunna gudnin" [39]. According to the WHO, Type III FGM (or "infibulation") consists of: "narrowing of the vaginal orifice with creation of a covering seal by cutting and positioning the labia minora and/or the labia majora, with or without excision of the clitoris" [1]. This type of FGM is primarily practiced throughout the Horn of Africa, particularly in Somalia, one of the most represented countries of origin among immigrants from sub-Saharan regions living in Switzerland [40]. In order to reflect the data and to respect the complexity of the phenomenon, in this study we use the term "infibulation", which the women themselves used during the interviews.
Data Analysis
All interviews were recorded and completely transcribed by the researcher (MV). Inherently qualitative analyses were performed, including detailed descriptions, direct quotations and observations from the interviews [41]. Emotions expressed by the women during interviews were also registered and taken into account [42]. In particular, the collected data allowed for content analysis, based on thematic categories, produced by manual coding [43].
Women's Background and Demographic Profile
Eight women participated in the study. All of the participants had emigrated from sub-Saharan countries. Seven came from Somalia and one woman was from the Ivory Coast. Five of the Somali women came from an urban area (Mogadishu), while the other two came from rural areas and small villages. All of the Somali women requested linguistic interpreters; the Ivory Coast woman, who spoke French fluently, instead asked the obstetrician to stay during the interview. At the time of the interviews, the women were between 26 and 39 years old. Five women had received no formal education, two had received primary school education and only one had attended secondary school. They were mostly professionally inactive, principally due to the type of residence permit (or the lack thereof). All of the women in our study were mothers, with between one and four children. Most were currently living with their partners, while two had separated from their husbands, and one was single. The women's profiles are summarized below in Table 1 (names have been changed):
The Circumstances of the Traditional Ritual
All women of our study underwent infibulation (FGM/c Type III, according to the WHO classification). Five of them underwent the procedure between the ages of 4 and 9, whereas two women were infibulated later, at ages 10 and 15, respectively. Almost all of the women recalled this moment as highly painful. One exception was Anita, a 30-year-old Somali woman, who underwent infibulation in a private clinic under general anaesthesia.
The context in which the infibulation was practiced varied: for some women it took place in their own home, while for others it was in the home of the practitioner. Generally, the women recounted that the ritual involved a small group of girls or that they were alone. In the following two examples, the women offer accounts of their differing experiences, revealing the impact of context on the recounting of the experience of FGM.
The first case is Samya, a 27-year-old woman from the Ivory Coast, for whom the FGM was performed in the area outside her rural village with 25 other girls, in a celebration involving the whole village. While Samya talked about "excision" during the interview, the obstetrician who followed Samya during her pregnancy (and who was present at the interview) specified on her medical report that Samya had undergone an "infibulation". In fact, all external parts of her genitalia had been excised, resulting in significant scarring and adherence of the labia minora, producing the effect of vaginal closing that is typical of a Type III FGM. Samya reported that she felt "traumatized" by her "excision" and that the "pain marked her mind forever".
The second case is Anita, the eldest daughter of an Islamic community leader. Her story is unique among the group of women. Her family was part of the upper social class of Mogadishu, and she attended school and achieved a high level of education. Later she moved to Dubai with her family, where she worked as a nurse in a hospital. Her infibulation was performed under general anaesthesia in a private clinic by medical staff, and she received anti-anxiety treatment throughout her recovery. Anita's perception of FGM was shaped by the circumstances of her infibulation, both the surgical operation as well as the clinical follow-up. She described having undergone a "light" version of infibulation, while the other women, who were cut by female traditional practitioners, communicated a greater level of pain, as Jeanne and Fiona recall: The worst was the pain... because when you need to go pee or when you have your period, that was screaming, we all waited standing... (Jeanne, Somali, 38-year-old, interpreter).
The worst was the days after, when it starts to... when scars start to heal. Then we start to feel the real pain (Fiona, Somali, 32-year-old, food service employee).
Nearly all of the women recalled a two-week "convalescence" period, during which the girls lay down on the ground or in bed, spending most of the time with their legs tied together. Activities including urination, moving and walking were described as particularly painful. For Fiona, the convalescence period and immobility lasted longer: I stayed sixty days in the house, without getting out, without seeing people. I needed one month to be good and standing up (Fiona, Somali, 32-year-old, food service employee).
After a period of immobility, the girls were allowed to stand and walk around, as Cindy, a 29-year-old Somali woman, reported: After one week and a half or two, we began to walk, but we always had our legs tied: we made small steps (Cindy, Somali, 29-year-old, unemployed).
Jeanne, a 38-year-old Somali woman, also has a unique profile. She received secondary education and had travelled a lot before coming to Switzerland. Jeanne works as a linguistic interpreter. She stated that, in Somalia, mothers do not talk about sex with their children. The topic is considered "bothering" (shameful): "if girls speak about sexuality they get a bad reputation in the community, they are considered as ill-mannered and all their family is stigmatized for that." Jeanne said that the topic of infibulation is rarely discussed, and that the practice remains unquestioned. All of the women reported that no clear reason for the FGM was given, and that no explanations were offered before or after the infibulation.
Reproductive Life: Timing, Experienced Diseases and Values
The women in this study had common trajectories regarding the initiation of sexual activity. Most had their first sexual intercourse between ages 21 and 25, relatively late compared to averages in European countries: for example, the average age of first sexual intercourse is 17 in France and 16 in Switzerland. For the women in our study, the first sexual encounter took place with their husband or future husband on their wedding night. For the interviewed women, becoming sexually active coincides with the reproductive period: they initiate sexual intercourse in order to become pregnant. Use of contraception was delayed until after delivering at least one child, most of the time waiting until the birth of the expected or desired number of children (at around age 30). At the time of the interview, only three women were using contraception.
The women reported numerous symptoms after infibulation. We showed the participants a list of immediate somatic consequences following infibulation and asked if they had experienced any of the following problems: excessive bleeding, difficulty urinating, inflammation, pelvic infection, pain (with specification of duration: less than 48 h; from 48 h to 1 week; more than a week). Each item was translated into the Somali language and the interpreter explained each symptom to ensure full comprehension of the health problem. All of the women, except Anita (who underwent the procedure in a hospital setting), reported a sharp pain. Difficulty urinating was a frequent consequence, cited by six women.
In the same manner as somatic consequences, we listed a number of potential problems relating to reproductive and sexual health for the interviewed women. The listed symptoms were dysmenorrhoea, locking of the vaginal opening, disruption of normal blood flow during menstruation, vaginal infections, keloids, urinary infections, pain between periods, cysts, fistulae, urinary incontinence, and faecal incontinence. Almost all of the women reported multiple and coexisting symptoms. The most common problems cited by the women were dysmenorrhoea (five women), vaginal locking preventing blood flow during menstruation (four women), keloids (four women) and vaginal infections (four women). In response to our questioning, the women stated the belief that their health had been negatively affected by the infibulation. Other problems appeared less frequently; in particular, urinary infections (three women) and pain with menstruation (two women). Only one woman reported having been affected by cysts, while none reported fistulae, urinary or faecal incontinence.
The major health problem for almost all of the women was childbirth. Women reported significant fear in anticipation of delivery, having heard dramatic stories of women dying during childbirth. All women in our study required medical assistance during delivery: five women required a defibulation (the reopening of the vaginal orifice) and another three women had a caesarean. During the interviews, the women described childbirth as a very emotional experience, embedded with fear and a strong sensation that the whole body was "ripping and tearing", as Fiona described: This is super rough for excised women... I mean the moment of the delivery. Because it is a part of your body, which was always natural, and that has been stitched, closed, and so then it makes a double pain when you give birth (Fiona, Somali, 32-year-old, food service employee).
Sexual Life: Intimate Relationships, Context and Experiences
When prompted, the women in our study answered all of the questions about their sexuality. Again, the women received a list of possible problems during sexual activities, from which they could select multiple options and provide further explanations to deepen their responses. Most of the time, they described the moments preceding intercourse as "a source of stress, anxiety and pain". The problems listed were pain with penetration, pain during sexual intercourse, pain after sexual intercourse, absence of desire, difficulty achieving orgasm, vaginal dryness, and vulvar burning. Most women reported pain during and after sexual intercourse (five women), as well as difficulty achieving orgasm and the presence of vaginal dryness (five women). Four women reported feeling pain during penetration, while only two described vulvar burning during and after intercourse.
Next, we encouraged the women to think about the potential link between health problems and their infibulation. The women did not, however, establish a direct link between these two experiences initially. With further exploration of this possible connection, some of the women allowed that "it could be very possible" that a causal link exists, as they experience significant pain and sensitivity in their genital area. Relating to this point, Nadia, a 23-year-old Somali woman from a nomadic ethnic group that resides in a remote interior region far from Mogadishu, recounted her painful experience. Her infibulation was very frightening. She was cut by a female traditional practitioner with rudimentary tools and the operation was repeated four times before the practitioner stated that it was "well done". She underwent the procedure for the first time at the age of 5; an excruciating experience she said she "will never forget". The procedure was repeated several times and led to permanent injuries: It was very difficult because it doesn't work the first time and they had to do it four times. On several occasions... it was four times. In each region, there's people who are more strict than others... in Mogadishu it's easier, but in the country it's very hard. They want to be sure that they have well done and so then they did several times to make sure. There are very bad memories for me... four times. I cried, they bound me; they tied my feet, my hands. I was restless (Nadia, 23-year-old, Somali, unemployed).
Samya described her experience differently, focusing more on the different types of relationships she has experienced. For her, any pain felt during intercourse was related to the aggressiveness of her partners and to the specific context. Only after migrating and meeting a more sensitive partner (also from a sub-Saharan region), with whom she ultimately had a child, did she start to question her past intimate relationships and the link between pleasure and FGM: In my region, men are very different. Over there, if you feel pain or not, that's not their problem. If I feel bad or sick, I keep that for me... I can't say to my boyfriend "It hurts me"... The purpose is to hurt actually. Men's purpose over there is exactly to hurt you. So you have not said "yes it hurts"... that's not his problem if it hurts you or not. When I was there, I felt big pain. The man with whom I had my son here was not violent; he was not brutal compared to my first boyfriend in the country. I saw the difference (Samya, 27-year-old, Ivory Coast, unemployed).
When we inquired about self-inspection, women said that they found it difficult to look at their own genitals. Women said that they felt "blocked" and some underlined "a lack of initiative" during sexual intercourse. Some women mentioned a feeling of "distance" toward their bodies, "as there's nothing to explore". In Samya's words: For years, I've never seen how it was... what they did to it. When I was pregnant, they told me to look once to see how I am. It was here (in Switzerland) that I've seen myself first. And I realized that I have nothing. There's nothing over there. I've cried and then I said to myself that crying will not give me back what they've taken. I have nothing. It is like if you have been shaved off. Everything has gone (Samya, 27-year-old, Ivory Coast, unemployed).
When asked generally about their opinions on sex, women asserted that they "do not want sex" and that it is the husband "who comes and takes". For some, this approach reflects "something normal", as Anita explained: "a good woman does not run after the men", adding as an explanation: "a woman asking for sex is seen as very bad". Some women emphasized the moral dimensions underlying the infibulation over the physiological symptoms, and the women rarely explained the absence of sexual desire as a consequence of their infibulation. Some women in our study said that expressing sexual desire to a man is considered "vulgar" and "not appropriate to a well-educated woman". However, others established a link between infibulation and limited sexual desire.
It is evident, infibulation plays a role on pleasure, but it also depends on persons and on which kind of infibulation she has undergone. In Somalia there are different types of infibulation. For example, girls who have undergone a type sunna like these last years, that's the majority... and they look by their own for men! They have a lot of desire. But the infibulation that I have undergone, there's no desire, we don't look for nothing. If the man comes, we feel that desire but anything else (Cindy, 29-year-old, Somali, unemployed).
Findings and Interpretation
Seven of the eight women in our study were from Somalia, and one woman was from the Ivory Coast. In addition to country of origin, age, social class and geographic region also help to interpret the medical narratives of these women. In our study, migration represents a transition at which point memories of infibulation and cultural background are reviewed. As previously described in the literature [44][45][46], our study confirms that social class and social context have a major impact on how women understand and recall their FGM experiences. This is particularly true for women infibulated in a rural setting using rudimentary tools (such as in Samya's case) or belonging to the lower social class living in a rural region (Nadia's case), when compared to women from an upper social class, living in the capital or an urban region, and infibulated under anaesthesia in a private clinic (Anita's case).
The second group of results presents the women's subjective perception of their reproductive and sexual health. When we showed the women a list of potential problems that could result from infibulation, they indicated several health problems that they had experienced and perceived during their lives. Through their participation in this study, the women adopted the medical language introduced in order to describe the symptoms and other problems they were experiencing. Upon recognizing these medical problems, they were better able to identify the impact of infibulation on their everyday lives (painful menstruation and cysts), in their sexual lives (dryness and lack of desire) and relating to childbirth (requiring medical interventions).
Strengths and Weaknesses of the Study
Our findings clearly demonstrate the profound impact of FGM on women living in Switzerland. In addition to working toward prevention in younger generations, these results highlight the importance of addressing the current reproductive and sexual health needs of women similar to those in our study. Although the women did not initially make the connection between infibulation and their health and sexuality, when prompted and given examples, the women related the two.
From a methodological standpoint, this study illustrates the feasibility of conducting research that addresses this very sensitive theme; the phenomenon deserves to be studied in a larger sample. We also underline the ethical choices in our data production: our approach centred on engaging the women themselves [47] in the process of recalling, contextualizing and framing their experiences and the health consequences. Unexpected findings concern the exclusiveness of the women's narratives: for some of them, the interview represented the first time that they had spoken about their infibulation in a specific way. A related major finding is that the women became aware during the interview of symptoms or sexual difficulties that had until then gone unrecognized. In particular, the link between the infibulation and some of the somatic consequences listed was often not recognized by the women of our study. Support (such as multidisciplinary counselling) is therefore required in the rediscovery of the body and sexuality. This seems to be a major public health issue.
The principal and most evident weakness of our study is the small sample on which the whole study is based. We are aware that, due to the sample's size, no generalization can be drawn. However, the participation of the women and the qualitative approach that we defend allow a deeper comprehension of the inner knowledge of FGM coming from women's perspectives.
Differences in Results and Conclusions
The third set of results explores the sexual lives and experiences of the women in our study. These findings offer novel insights into views of sexuality, and represent the most significant contribution of this study. Specifically, the women in our study, originally from Somalia and the Ivory Coast and having undergone infibulation, separated sexual desire from sexual pleasure. While they interpreted the absence of desire as a "normal condition" of female sexuality [48,49], they also reported experiencing several negative consequences of the infibulation, including lack of vaginal lubrication during sex, pain with penetration and vaginal itching. For the women in this study, sexual desire was a question of morality and education, and its presence (or lack thereof) reflected on a woman herself and impacted her family's reputation. Sexual desire and sexual pleasure emerge from the women's narratives as two distinct entities. While most of them link desire to a normative dimension, shaped by socialization, education and moral values suppressing its expression, pleasure is seen as a welcome, sensitive experience which is not rejected or repressed, confirming previous research [13]. In fact, most of the women clearly expressed their discomfort, dissatisfaction, distress, regret or even pain during sexual encounters, consistent with past research [50]. The women discussed and expressed notions of sexual desire within the codes that exist in all cultures and societies and considered these codes to be "normal" [51,52]. However, the women were also aware of and concerned by their many symptoms and wished to improve their intimate relationships, as in Samya and Jeanne's cases. Sexual pleasure is often inaccessible, and the women in this study described this as problematic. For example, the first sexual encounter was frequently described as an experience marked by pain and fear. Through the interviews, we also found that the quality of their marital relationship was very important for the women, including communicating and sharing their feelings with their partner to improve wellbeing.
Relevance of the Findings: Implications for Clinicians and Policymakers
Through participation in the interviews, the women in this study had the opportunity, sometimes for the first time, to address the topic of infibulation. The content of the interviews suggests that the symptoms and sexual difficulties that the women reported had not been previously recognized. The importance of speaking about their own experiences emerged during the interviews, in particular when we encouraged the women to speak about themes of intimacy and relationships. We present not only the problems reported by the women, but also their experiences and perceptions of links between infibulation and physical consequences, which the women did not initially recognize. Based on our findings, clinicians should inquire about physical symptoms among women who have undergone FGM, and explore, along with the women, their sexual function and satisfaction, and other specific needs that are identified. Furthermore, specialized health care services and intercultural communication practices, such as the use of trained interpreters, are greatly needed.
Unanswered Questions and Future Research
Although our study provides new insights into the sexual and reproductive health of women who have undergone FGM, future research should seek to explore these themes in more depth with a larger sample. Several points emerged in our study that were not fully explored, such as inter-partner violence, and require further investigation. Furthermore, the ethical dimensions of these types of interviews should also be explored to determine the long-term effects on women's mental health. Finally, future research should explore the physical and emotional dimensions of intimacy and gender roles [53] in this population of women.
Conclusions
In conclusion, we found that infibulation is part of a process of socialization in the countries of origin of the women in our study. This practice is founded on a strict division of gender roles and an unequal access to education, and (sexual) rights and wellbeing. Female sexuality is primarily understood in relation to reproduction; sexual desire is seen as a potential danger to the stability of marital devotion. Relationships, especially marital ones, follow strict gender roles.
Table 1.
Recap of interviewed women.
Ultrahigh resolution total organic carbon analysis using Fourier Transform Near Infrared Reflectance Spectroscopy (FT-NIRS)
Fourier transform near infrared reflectance spectroscopy (FT‐NIRS) is a cheap, rapid, and nondestructive method for analyzing organic sediment components. Here, we examine the robustness of a within lake FT‐NIRS calibration using a data set of almost 400 core samples from Lake Suigetsu, Japan, as a means to rapidly reconstruct % total organic carbon (TOC). We evaluate the best spectra pretreatment, examine different statistical approaches, and provide recommendations for the optimum number of calibration samples required for accurate predictions. Results show that the most robust method is based on first‐order derivatives of all spectra modeled with partial least squares regression. We construct a TOC model training set using 247 samples and a validation test set using 135 samples (for test set R2 = 0.951, RMSE = 0.280) to determine TOC and illustrate the use of the model in an ultrahigh resolution (e.g., 1 mm/annual) study of a long sediment core from a climatically sensitive archive.
Introduction
[2] High resolution studies of sediment archives are becoming increasingly important for understanding the rates and timings of abrupt climate changes, and for the detailed reconstruction of change in different regions, which provides important information about causal links between them; understanding these links is a key issue in climate studies and Quaternary geology.
[3] Total organic carbon (TOC) is a basic and fundamental property of lake sediments and a component of the carbon cycle [e.g., Boyle, 2001]. Sediment TOC is an important indicator of environmental and surface water quality [e.g., Ouyang et al., 2006] and has a major influence on biogeochemical processes, nutrient cycling, and redox potential in sediments. Inland waters are a significant component of the global carbon cycle, and sedimentary TOC provides information on changes in lake productivity in response to environmental and climate change and is used to quantify carbon storage in lakes, which are increasingly recognized as important carbon sinks [e.g., Larsen et al., 2011]. Changes in climate result in changes in the burial and release of carbon, and it is therefore important to study these processes in the past.
[4] TOC is also commonly used in normalization during calculation and evaluation of organic biomarker proxies in sediments in environmental and paleoenvironmental studies [e.g., Pearson et al., 2007] or in contamination studies [e.g., Malley et al., 2000]. As part of the Lake Suigetsu 2006 Project (www.suigetsu.org), the analysis of organic geochemical biomarkers is being carried out at decadal resolution during key climatic periods which, combined with the long (150 ka) and climatically significant nature of the Lake Suigetsu sediments, initiated our interest in developing a robust FT-NIRS method to obtain high resolution TOC data for this site. Near infrared reflectance spectroscopy (NIRS) is an inexpensive, rapid (sample analysis in triplicate takes approximately three minutes), nondestructive, quantitative, technique that requires small sample sizes (0.01-0.1 g dry weight), and minimal sample pretreatment (drying and grinding/homogenizing) meaning samples can be used for later analyses (e.g., biomarkers). NIRS works through vibrational excitation in the form of stretching and bending of bonds between atoms within organic materials. Differences in the chemical structure and composition of organic molecules corresponding to sediment composition display characteristic vibrations which are expressed as absorption at particular wavelengths in the NIR spectrum. By using multivariate statistics, it is possible to explore relationships between NIRS spectra and sediment properties for use in paleoenvironmental studies [e.g., Korsman et al., 2001]. In Fourier transform near infrared reflectance spectroscopy (FT-NIRS) all frequencies are measured simultaneously which enables rapid measurement of multiple scans per second, higher signal/noise ratios, and enhanced repeatability via an internal reference laser.
[5] (FT)NIRS has been used in a wide range of environmental applications, including the monitoring of river mining contamination [Kemper and Sommer, 2002; Malley et al., 1996; Persson et al., 2007; Xia et al., 2007], determining the composition of peat [McTiernan et al., 1998] and lake sediments [Malley et al., 1999, 2000], and the reconstruction of lake-water chemistry (e.g., pH [Korsman et al., 1992]; pH, TP, and TOC [Nilsson et al., 1996]; TOC [Rosén, 2005]) and climate [Inagaki et al., 2012; Rosén, 2005; Rosén et al., 2001]. It has also been used, along with near ultraviolet and visible reflectance spectroscopy, to examine TOC, carbonate, and opal content of marine sediments [Balsam and Deaton, 1996; Chang et al., 2005; Jarrard and Vanden Berg, 2006; Leach et al., 2008]. These applications of NIRS make use of a calibration data set of either surface or sediment core samples which are used to model the relationship between NIR spectra and the parameter of interest measured using traditional, often time-consuming, methods. Choice of calibration method is thus crucial to the success of the method, but most NIRS studies do not report on spectral data treatment comparisons or examine the potential and merits of different statistical approaches. In addition, no studies to date have examined NIR at different resolutions or assessed the number of samples required for optimal model construction.
[6] The aim of this study was to examine these questions and construct a robust calibration model for Lake Suigetsu sediments using FT-NIRS, to enable rapid, high resolution reconstruction of TOC via development of a model that can be used in on-going studies at this site. In addition, the methods and approaches developed here should be applicable to a range of sedimentary environments including lakes, peats, and marine sediments. In order to construct the most robust model possible, we examine (i) the effects of different data pretreatment, (ii) the most appropriate wavelength range to include in the calibration model, (iii) the most robust statistical method to employ for model construction, and (iv) the effect of calibration data set size on the accuracy of the TOC model outputs. We also employ rigorous statistical techniques for model cross-validation. This is the first extensive within-lake FT-NIRS calibration study to explore all these features to obtain the most robust model for reconstructing TOC at ultrahigh (1 mm) resolution.
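As a sketch of the model evaluation underlying this kind of calibration study (the abstract reports a 247-sample training set, a 135-sample test set, R2 = 0.951 and RMSE = 0.280), the validation statistics and a random calibration/validation split can be computed as follows. This is an illustrative outline with our own function names, not the study's own code:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error of predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination, 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def train_test_split_idx(n_samples, n_train, seed=0):
    """Randomly partition sample indices into training and test sets."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return idx[:n_train], idx[n_train:]

# e.g., splitting 382 conventional TOC samples into 247 training / 135 test
train_idx, test_idx = train_test_split_idx(382, 247)
```

A calibration model would then be fitted on `train_idx` samples only and scored with `rmse` and `r_squared` on the held-out `test_idx` samples.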
Study Site: Lake Suigetsu
[7] Lake Suigetsu (35°35′N, 135°53′E, 0 m above present sea level) is a meromictic, tectonic lake located on the Sea of Japan coast, Honshu Island, central Japan. The lake is 34 m deep and covers an area of c. 4.3 km². It is one of the "five Mikata lakes" (within the Mikata-goko Ramsar site), with a regional climate typically characterized by both summer and winter monsoons. The location of the lake in relation to the Asian monsoon boundary front renders it ideal for providing important paleoclimatic information about changes in this boundary in the past.
[8] In 2006, as part of the "Lake Suigetsu Varves 2006" Project (www.suigetsu.org), a 74 m long core (SG06) covering the last c. 150 ka was collected from the lake. The upper 46 m of sediment consists of varves (covering c. 60 ka), contains abundant fossil leaves, more than 800 of which have been 14C dated [Bronk Ramsey et al., 2013], and offers a remarkably well-dated, high resolution multiproxy archive of paleoenvironmental change [e.g., Kossler et al., 2011; Nakagawa et al., 2003, 2012]. Lake Suigetsu is an auxiliary Global Stratotype Section and Point (GSSP) for the onset of the Holocene Epoch [Walker et al., 2009], which also highlights the climatic and global significance of this site.
Conventional TOC Analyses
[9] For conventional TOC analyses, samples were extracted at 15 cm intervals throughout core SG06 (n = 382). Samples were dried, ground, and weighed into Ag capsules, acidified using 20% HCl, then redried before placing the Ag capsule into a larger Sn capsule. C and N concentrations were measured by combustion EA (FlashEA 1112) calibrated against a sulfanilamide standard. Analytical reproducibility of ±0.5% was calculated through repeat analyses of an in-house standard (marine sediment).
FT-NIRS Analyses
[10] For the calibration exercise dried and ground samples as used for conventional analyses were analyzed in triplicate and spectra averaged before statistical analysis. For the high-resolution core study samples were analyzed at 1 cm and also 1 mm resolution across specific boundaries of interest such as the lateglacial-Holocene onset as a means to examine the robustness of the calibration at different resolutions. Each sample was scanned 64 times over a wavelength range of 835-2500 nm (12,000-4000 cm⁻¹) at 1 nm resolution using a Fourier transform spectrometer (ThermoNicolet Nexus 870) equipped with a quartz beamsplitter, InGaAs (indium gallium arsenide) detector and a top loading diffuse reflectance Smart Near-IR UpDRIFT sampling accessory. The UpDRIFT accessory and method maximizes diffusely scattered radiation whilst minimizing scatter and specular reflection/interference. A Spectralon® white reference, collected every 30 min, was used to standardize sample spectra. Reflectance values range between 0.569 and 0.913 and triplicate samples had a mean error across all wavelengths of 0.38%, indicating high sample reproducibility. Diffuse reflectance (R) was transformed to absorbance (A) using the equation A = log(1/R) and the spectra for each sample smoothed using a running mean of width 4 to remove spectral noise and give a final spectral data set of 416 wavelengths at 4 nm intervals.
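The reflectance-to-absorbance transform and running-mean smoothing described above can be sketched as follows (a minimal Python/NumPy illustration, not the authors' code; a base-10 logarithm is assumed, since the base of log(1/R) is not stated):

```python
import numpy as np

def reflectance_to_absorbance(R):
    """Convert diffuse reflectance R to apparent absorbance A = log(1/R)."""
    return np.log10(1.0 / np.asarray(R, dtype=float))

def smooth_spectrum(spectrum, width=4):
    """Running mean of the given width, used here to suppress spectral noise;
    'valid' mode shortens the spectrum by (width - 1) points."""
    kernel = np.ones(width) / width
    return np.convolve(np.asarray(spectrum, dtype=float), kernel, mode="valid")
```

Applied to each averaged triplicate spectrum, these two steps yield the smoothed absorbance spectra used in the calibration.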
Spectral Preprocessing and Statistical Analyses
[11] NIR spectra of lake sediments are complex and contain hundreds or thousands of wavelengths which are highly collinear. Predicting sediment properties from spectral data is thus a problem of multivariate calibration. One common solution is to use a data reduction technique such as partial least squares regression (PLS) [Naes et al., 2002] to reduce the spectral data to a small number of components that maximize the covariance between spectra and the sediment property of interest. PLS is particularly suited to modeling NIR data where the spectra are highly multicollinear and the number of variables (wavelengths) often greatly exceeds the number of samples. Most PLS calibrations use all near infrared wavelengths as predictor variables [e.g., Rosén et al., 2010] although some have argued that noninformative wavelengths should be omitted as they may increase random error [Zou et al., 2010], or have found that the main absorption bands influencing the calibrations lie within a narrower wavelength range (e.g., Malley et al. [1996] use 1500-2498 nm and Leach et al. [2008] use 2100-2500 nm). We therefore compare PLS using all and selected wavelengths, omitting those that have an absolute value of the PLS regression coefficient of <10% of the maximum [Garrido Frenich et al., 1995]. The optimal number of PLS components for each model was determined using a randomization t-test [van der Voet, 1994].
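The wavelength-selection criterion described above (dropping wavelengths whose absolute PLS regression coefficient is below 10% of the maximum) is straightforward to express. This hedged sketch assumes the fitted coefficients are supplied as a one-dimensional array, one entry per wavelength:

```python
import numpy as np

def select_wavelengths(pls_coefs, frac=0.10):
    """Return the indices of wavelengths whose |regression coefficient| is
    at least `frac` times the largest absolute coefficient; the remaining
    wavelengths are treated as uninformative and dropped."""
    a = np.abs(np.asarray(pls_coefs, dtype=float))
    return np.flatnonzero(a >= frac * a.max())
```

The model would then be refitted on the reduced wavelength set before comparing it to the all-wavelength model.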
[12] An alternative to data reduction methods is data-mining tools that seek to extract relationships from high-dimensional data in the form of predictive models, or rules, based on a reduced set of the original predictors. These methods have the advantage that they omit the data reduction step and select directly the best wavelengths for prediction. Here, we compare the performance of PLS to two data mining techniques, random forests (RF) and regression rules (RR). Random forests are an extension of classification and regression trees (CART) that builds an ensemble model based on many regression trees [Breiman, 2001]. Each individual tree is trained on a bootstrap sample of the training set data consisting of a random subset of predictor variables. By averaging predictions over a large number of trees random forests overcome the problem of instability in individual trees and reduce model uncertainty [Minasny and McBratney, 2008]. Random forest models were developed using the default of 500 trees. In regression trees, the prediction is a value at each node of the tree. Regression rules are similar to regression trees except that each node in the tree represents a rule consisting of a linear regression model based on the predictors used in previous splits [Quinlan, 1992]. Regression rules can also use a boosting-like scheme in which the final prediction is averaged over a number of trees, or committees (25 in our case, chosen by cross-validation), to reduce model bias and variance.
[13] NIR spectra of particulates often contain uninformative information resulting from light scattering effects that vary between samples due, for example, to differences in particle size. It is therefore usual to apply one or more spectral pretreatments to remove these unwanted effects [e.g., Dåbakk et al., 1999b; Rosén and Hammarlund, 2007]. A range of numerical pretreatments have been applied to spectra from lake and marine sediments [e.g., Balsam and Deaton, 1996; Korsman et al., 2001; Rosén et al., 2000] but with no clear guidelines on which is most appropriate. We therefore test a number of different spectral pretreatments and compare them to no pretreatment: (1) standard normal variate (SNV), which centers and scales each spectrum by its mean and standard deviation, respectively (so-called autoscaling), (2) multiplicative scatter correction (MSC), (3) first-order derivative using the Savitzky-Golay algorithm with order two and an interval width of seven, (4) MSC followed by first-order derivative, and (5) second-order derivative with order two and an interval width of 11 variables.
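For illustration, the SNV and MSC pretreatments described above can each be implemented in a few lines of NumPy. The sketch below is an assumption-laden illustration rather than the authors' code; the Savitzky-Golay derivative pretreatments would typically be computed separately (e.g., with `scipy.signal.savgol_filter` using `deriv=1` or `deriv=2`), and MSC is shown here in its common form, regressing each spectrum on the mean spectrum:

```python
import numpy as np

def snv(X):
    """Standard normal variate: centre and scale each spectrum (row)
    by its own mean and standard deviation ('autoscaling')."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def msc(X, reference=None):
    """Multiplicative scatter correction: fit each spectrum to a reference
    (the mean spectrum by default) and remove the fitted offset and slope."""
    X = np.asarray(X, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, row in enumerate(X):
        slope, intercept = np.polyfit(ref, row, 1)  # row ≈ slope*ref + intercept
        corrected[i] = (row - intercept) / slope
    return corrected
```

Each pretreatment is applied to the full spectral matrix before the calibration model is fitted, so the comparison in Table 1 amounts to swapping this preprocessing step.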
[14] The performance of the above models and pretreatments was assessed using the squared correlation between observed and predicted TOC values, and the root mean squared error (RMSE), a measure of the average error of the prediction. RMSE is usually estimated using leave-one-out cross-validation [e.g., Inagaki et al., 2012]. However, down-core calibration data may be temporally auto-correlated, meaning leave-one-out cross-validation does not properly simulate an independent test data set, and ultimately leads to an underestimation of the likely prediction error when the calibration is applied to new data [Burman et al., 1994]. Correlograms and partial correlograms of the Suigetsu TOC calibration data (not shown) demonstrate significant temporal autocorrelation at lags of up to five samples. We therefore use a pseudo-h-block cross-validation scheme and split the data into two parts: approximately two thirds (n = 247) for calibration development and one third (n = 135) for validation, with the latter split into three groups of 45 samples each, arranged in equal-spaced blocks down-core (Figure 1). Tuning of model parameters was determined using 10-fold cross-validation on the calibration set.
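A hypothetical sketch of the equal-spaced block hold-out described above (with three 45-sample validation blocks, as in this study) might look like the following; with n = 382 samples it reproduces the 247/135 calibration/validation split:

```python
import numpy as np

def block_split(n_samples, n_blocks=3, block_size=45):
    """Hold out `n_blocks` contiguous validation blocks spaced evenly
    down-core; all remaining samples form the calibration set."""
    starts = np.linspace(0, n_samples - block_size, n_blocks).astype(int)
    val = np.concatenate([np.arange(s, s + block_size) for s in starts])
    cal = np.setdiff1d(np.arange(n_samples), val)
    return cal, val
```

Because whole down-core blocks are withheld, validation samples are separated from their temporally autocorrelated neighbours in the calibration set, which leave-one-out cross-validation does not guarantee.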
Spectral Pretreatment, Wavelength Selection, and Model Performance
[16] Table 1 compares the performance of PLS, random forest (RF), and regression rules (RR) models with different spectral pretreatments for the validation data. Prediction errors for the training set derived using leave-one-out (LOO; not shown) underestimate the validation set errors by between 6% and 70% (mean 20%), which suggests that published errors using LOO are actually underestimates of the true prediction error. Prediction errors for our final model (see below) using LOO and h-block cross-validation are 0.235 and 0.280, respectively, highlighting the importance of using an appropriate h-block cross-validation design that properly accounts for the temporal dependency in the training set data [Arlot, 2010].
[17] Random forests are consistently poor with the highest RMSEP (Table 1). Plots of variable (i.e., wavelength) importance (not shown) reveal models that are dominated by only a small number of wavelengths and predictions that are overestimated or underestimated at low and high TOC values, respectively. For PLS and regression rules, preprocessing the spectra prior to analysis gave variable results: MSC and second-order derivatives reduced model performance in all cases.
[18] MSC in particular is frequently used in midinfrared (FTIR) studies [e.g., Cunningham et al., 2011; Hahn et al., 2011; Rosén et al., 2010, 2011; Rouillard et al., 2011; Vogel et al., 2008] and has also been used in visible-NIR [e.g., Rouillard et al., 2011]. For near infrared (FT-NIR), it can improve predictions for some variables and has been used in some studies to eliminate light scattering effects arising from variables such as differences in particle size [e.g., Dåbakk et al., 1999a; Rosén and Hammarlund, 2007], but has also been found not to enhance performance [Dåbakk et al., 1999b] and it is not uniformly recommended [e.g., Minasny and McBratney, 2008]. Our results highlight the need to evaluate particular pretreatments for each study.
[19] First-order or second-order derivatives are another way of treating the data to achieve similar outcomes to MSC, since they remove baseline offset, and have been used in some NIR studies [e.g., Korsman et al., 2001; Malley et al., 1996, 2000]. One problem with derivatives is that they amplify spectral curvature, which may also increase random noise and result in poor models [e.g., Dåbakk et al., 1999b], although it appears that our initial spectral smoothing has reduced this effect. In our study, first-order derivatives improve predictions with PLS and give the lowest overall prediction errors when used in conjunction with SNV (standard normal variate) scaling (autoscaling).
[20] In our data set regression rules with spectral pretreatment using first or second derivatives builds poor ensemble models that utilize only a small number of wavelengths. Regression rules with no spectral pretreatment yields a model that is competitive with the best PLS model. It produces very similar predictions (median difference 0.11% TOC) but at the expense of increased complexity [cf. Minasny and McBratney, 2008]. Straightforward PLS on raw data has also been found to perform best in a study by Dåbakk et al. [1999b], who examined modeling performance in filtered seston.
[21] Similarly, in our data set, PLS in conjunction with wavelength selection generally performed no better than a similar model/pretreatment with all wavelengths. The region between 2100 and 2500 nm has been associated with organic carbon bonding [Osborne and Fearn, 1986] and reduced spectral data sets have been used in some FT-NIRS studies (e.g., 2100-2500 nm [Leach et al., 2008]; 1100-2500 nm [Korsman et al., 1999]; 1500-2498 nm [Balsam and Deaton, 1996]). Rouillard et al. [2011], in their study of visible-NIRS (VNIRS) investigations, point out that different regions of the spectra influence PLS analysis, that wavelength selection could improve model performance as well as reduce complexity, and that weighting wavelengths in multivariate analyses should improve the overall reliability of VNIRS-based models. Although wavelength selection has been used in some mid-IR studies [e.g., Vogel et al., 2008], in near IR [Das, 2007], and in the marine sediment NIR study of Leach et al. [2008], studies generally do not examine the effects of different wavelengths or outline why particular spectral ranges were chosen, presumably choosing to use the NIR range available. In our study, we observe no benefit to model performance in including the additional complexity of a wavelength selection step and opt for a PLS using all wavelengths (835-2500 nm) with a combination of SNV and first derivative pretreatment as the most parsimonious model with the lowest prediction error.
[Table 1 notes: PLS, partial least squares regression; SNV, standard normal variate; RF, random forest; None, no spectral pretreatment; MSC, data multiplicative scatter corrected before analysis; SG1, first-order derivative using the Savitzky-Golay algorithm with order two and an interval width of seven; SG2, second-order derivative with order two and an interval width of 11; MSC + SG1, MSC followed by first-order derivative; N, number of wavelengths used; n/a, wavelengths not applicable for these methods.]
[22] It is possible that our model may have used spectral features due to inorganic compounds such as biogenic silica and clays, which are negatively correlated with TOC. These minerals have their strong, primary absorptions in the mid-IR but also have weak overtone bands in the NIR range [Stenberg et al., 2010]. Our results suggest that any such effects are either negligible or can be ignored, because if the composition of inorganic material varied down-core and had an adverse effect on model predictions we would expect a reduced-wavelength model (that is, one that dropped wavelengths associated with inorganic material) to have superior performance. This is not the case. However, it is possible that the confounding effects of inorganic material might restrict the development of a global, rather than site-specific, model for TOC prediction. Figures 1a and 1c show the experimental setup for the calibration and validation sets and Figure 1b shows the relationship between conventional and NIR-predicted TOC for the validation set using the optimal PLS model (PLS with first derivative spectral pretreatment applied to autoscaled data). Figure 1c shows both conventionally measured and FT-NIRS predicted TOC for the calibration data set down-core and highlights the fact that FT-NIRS modeled TOC very closely tracks the conventionally measured TOC. There is only a slight systematic trend to increasing error at higher values (>6% TOC; Figure 1b), probably as a result of fewer calibration samples in this range.
Application to High-Resolution Samples
[23] Selecting the best method and pretreatment, as listed in Table 1, we constructed a model applying partial least squares (PLS) regression to first-order derivative spectra of standard normal variate (SNV) scaled data. We apply our model to our Lake Suigetsu sediments at a range of resolutions (Figure 2). Figure 2a shows the application of our chosen model to a section of the Lake Suigetsu core at 1 cm resolution (solid line), highlighting the good match between the 1 cm and 15 cm (dots and dashes) resolution samples. Figure 2b magnifies a small section of the core from Figure 2a and shows the 1 cm samples superimposed on ultrahigh resolution (1 mm, c. annual) TOC modeled results. The results demonstrate that FT-NIRS can realistically be used to reconstruct TOC at a range of resolutions, including at 1 mm intervals, which in this study is the limit of subsampling that is possible. We also see that at higher resolution there are many finer details in the reconstructed TOC values, and our results highlight the internal variation that can occur in such high resolution studies (Figure 2), and therefore the potential importance of using FT-NIRS to examine high-resolution changes. These changes, in terms of their paleoenvironmental significance, are being investigated further and are beyond the scope of this paper. The small sample size (0.01-0.1 g) required for FT-NIRS analysis is minimal compared to conventional techniques (0.5-1 g), meaning this technique is invaluable and has huge potential for use in ultrahigh resolution and multiproxy studies and where sediment availability is limited.
[Figure 2. Application of model to Suigetsu core samples at different temporal resolutions. (top) Gray = measured TOC, black = modeled TOC, 1 cm resolution; (bottom) modeled TOC, gray = 1 cm data set, black = 1 mm resolution.]
Optimum Calibration Data Set Size
[24] The final part of our study was to examine the optimum calibration data set size and to provide a guideline for future studies. Previously published NIR-TOC calibration studies range in sample number from a small down-core data set of 30 samples (20 in the model and 10 in the test set under external cross-validation, up to 65 samples with internal cross-validation) [Leach et al., 2008] and c. 20 samples [Korsman et al., 1992] to over 100 for surface sediment calibration data sets [e.g., Hahn et al., 2011; Cunningham et al., 2011]. None of these studies examine the effect calibration sample size has on prediction errors or the minimum number of samples required to develop an acceptable model.
[25] Our NIR-TOC data set contains a total of 382 samples (247 used for calibration and 135 used for validation). In order to examine the minimum number of samples needed to develop an accurate TOC calibration, we developed models of different sizes by randomly selecting samples from our calibration data set and plotted RMSE for the validation set versus the number of samples in the calibration set (Figure 3). Results show the mean value and standard deviation for 50 random training sets for each value of N and provide guidance on the size of training set required. Results indicate that the prediction error reaches a minimum with a calibration set of c. 120 samples and that there is little to be gained by expanding the training set beyond this number. This is larger than sample numbers generally used in calibration studies and suggests that the development of NIR calibrations is particularly useful for long core studies where large numbers of analyses are required, such as in our study of Lake Suigetsu sediments. Likewise, it is applicable to other terrestrial (e.g., lake, peat) sites with long records, as well as marine sediment cores.
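The calibration-size experiment can be sketched as a simple learning curve over random calibration subsets. In the illustration below, ordinary least squares stands in for PLS so the sketch stays dependency-free; that substitution, and the function names, are assumptions rather than the paper's actual implementation:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def size_experiment(X_cal, y_cal, X_val, y_val, sizes, n_repeats=50, seed=0):
    """Mean validation RMSE over n_repeats random calibration subsets of
    each size in `sizes`; OLS with an intercept stands in for PLS here."""
    rng = np.random.default_rng(seed)
    curve = []
    for n in sizes:
        errs = []
        for _ in range(n_repeats):
            pick = rng.choice(len(y_cal), size=n, replace=False)
            A = np.c_[X_cal[pick], np.ones(n)]            # design matrix + intercept
            beta, *_ = np.linalg.lstsq(A, y_cal[pick], rcond=None)
            pred = np.c_[X_val, np.ones(len(y_val))] @ beta
            errs.append(rmse(y_val, pred))
        curve.append(float(np.mean(errs)))
    return curve
```

Plotting the returned curve against the subset sizes reproduces the shape of Figure 3: the error falls steeply at first and then flattens once the calibration set is large enough.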
Conclusions
[26] The best NIR-TOC model with the lowest prediction error was obtained using PLS standard normal variate regression with first-order derivatives and using all wavelengths in the data set (835-2500 nm). Our data set consisted of a total of 382 samples but our analysis suggests that a calibration data set of c. 120 samples is sufficient, provided it covers the range of TOC values likely to be observed in the core. We highlight the use of FT-NIRS as a rapid and cheap method to reconstruct TOC at a range of resolutions up to ultrahigh resolution (1 mm/annual) in long sediment cores from climatically sensitive and significant archives.
Acknowledgments
[27] FT-NIRS analyses were carried out at Newcastle University during Natural Environment Research Council (NERC) grant NE/G011001/1. Conventional TOC analyses were carried out at the Kochi Institute for Core Sample Research, Japan, during Japan Society for Promotion of Science (JSPS) fellowship PE07622. We thank Mathew Brown and Hayley McDowall for assisting with FT-NIRS, Minoru Ikehara and Yusuke Yokoyama for help in facilitating conventional TOC analyses, Takeshi Nakagawa for providing samples, and Tsuyoshi Haraguchi, Katsuya Gotanda, Hitoshi Yonenobu, Yusuke Yokoyama, Ryuji Tada, and Takeshi Nakagawa for help in sub-sampling the core. We also thank Hitoshi Yonenobu and an anonymous reviewer for their comments on the original manuscript. This work contributes to the Suigetsu Varves 2006 Project (www.suigetsu.org).
Marine Algal Bioactive Metabolites: Effects and Occurrence
When algal density rises above the baseline level and causes harmful effects, these algal blooms are defined as harmful algal blooms (HABs) (Hallegraeff [1]). Over the past several decades, marine algal bioactive metabolites have become a concern for the environment and human health. Contact with (e.g., ingestion of) these metabolites results in an alteration of cellular enzyme functionality and causes cell deformation and, in the worst cases, mortality. Marine diatoms, dinoflagellates, and cyanobacteria are the known producers of these harmful metabolites. The bioactivity and occurrence of these algal metabolites are reviewed herein. Marine diatoms are the known producers of domoic acid. In 1987, ingestion of cultured blue mussels (Mytilus edulis) containing domoic acid (DA) caused food poisoning that killed three people and sickened >100 others (Bates et al. [2,3]). The structure of DA was determined to be an analogue of glutamic acid (Wright et al., 1989), and known producers of DA are species of the marine diatom genus Pseudo-nitzschia (Jeffery et al. [4]). DA is a neurotoxin that causes neuronal degeneration and necrosis in specific hippocampus regions, leading to amnesic shellfish poisoning. Several reports describe the accumulation of DA in various organisms. However, DA can be degraded through frozen storage and cooking, suggesting the low stability of this compound (as cited in Jeffery et al. [4]).
Marine dinoflagellates are known producers of a series of bioactive metabolites that are classified into five major groups by the syndromes they cause after ingestion of toxin-containing fish and shellfish: paralytic shellfish poisoning (PSP), diarrhoeic shellfish poisoning (DSP), neurologic shellfish poisoning (NSP), azaspiracid shellfish poisoning (AZP), and ciguatera fish poisoning (CFP). PSP in humans is caused by the ingestion of seafood containing a group of alkaloids, including saxitoxin and its analogues (Cusick et al. [5]). The pharmacological action of the PSP toxins is characterized by blockage of the voltage-gated sodium channel (VGSC), leading to numbness and respiratory paralysis that can be fatal; known producers include marine dinoflagellates (e.g., Alexandrium) and freshwater cyanobacteria. Yessotoxins (YTX) and pectenotoxins (PTX) are two groups of lipophilic toxins that are structurally distinct from okadaic acid (OA) and dinophysistoxins (DTX) but possess similar bioactivities. However, some YTX and PTX can cause liver necrosis and cardiac muscle damage without diarrhea (Domínguez et al. [7]).
The common origins of YTX are marine species of Prorocentrum, and PTX are produced by Dinophysis.
The Florida and Gulf of Mexico coastal red tide former, Karenia brevis (syn. Gymnodinium breve and Ptychodiscus breve), is the common producer of lipophilic brevetoxins (PbTX) that cause NSP (Baden, et al. [8]). Structurally, PbTX can be divided into two groups, one with 10 cyclic rings and the other with 11 rings.
The toxicity of NSP toxins is caused by opening of the VGSC, leading to nausea, vomiting, paralysis, seizures, and coma. The aerosolization of PbTX caused by wave action leads to asthma-like symptoms in humans. The next dinoflagellate toxin group is the AZP toxins, including azaspiracids (AZA). To date, over 50 AZA have been isolated from species in the marine dinoflagellate genus Azadinium and from contaminated seafood (Twiner et al. [9,10]). The ingestion of AZA causes nausea, vomiting, diarrhea, and stomach cramps (Twiner et al. [9,10]), while the mechanism of action has not been elucidated. The final dinoflagellate toxin group is CFP, which is caused by lipophilic ciguatoxins (CTX). The common origin of CTX is Gambierdiscus toxicus, while species in Prorocentrum have also been reported as producers (Friedman et al. [11]). CTX found in the Pacific region (n=13) have a different number of cyclic rings than CTX in the Caribbean region (n=14). The CTX mechanism of action is similar to that of PbTX; however, some CTX have a greater affinity for the VGSC than PbTX. Cyanobacteria are prolific producers of bioactive secondary metabolites. To date, 157 known bioactive classes have been identified. Four of these (i.e., microcystins, saxitoxin, anatoxin-a, and cylindrospermopsins) are listed in the EPA Contaminant Candidate List 4 (CCL4).
In the author's previous effort, the 157 known bioactive classes were reclassified into 55 structurally unique bioactive classes based on similarities of their structure and biological activity (Huang et al. [12]). This effort was necessary because some metabolites share similar chemical structures and bioactivity but have been named differently. Therefore, a classification system was proposed to include both the original class names and the reclassified class names. For example, lyngbyaureidamides have a similar structure to anabaenopeptins; thus, when describing this compound, the proposed description is anabaenopeptin-lyngbyaureidamides. Fifty of the 55 classes, including isomers/synonyms, have been described from the marine environment (Table 1). Three of the four EPA CCL4-listed classes have been described from marine cyanobacteria: microcystins, anatoxin-a, and saxitoxin. Thirty of these 55 marine secondary metabolite groups were originally isolated from benthic cyanobacteria (e.g., Lyngbya and Moorea), with some compounds elucidated from marine invertebrates (e.g., Dolabella). Over the past three decades, knowledge of marine cyanobacterial metabolites has increased tremendously (Huang et al. [13-15]). Their structures belong to eight main groups: amino acids, alkaloids, fatty acids, depsipeptides, glycosides, oligopeptides, vinyl halides, and other structures (Figure 1). Some cyanobacterial bioactive metabolites are chlorinated and brominated. The bioactivity of these compounds ranges from enzyme inhibition and VGSC blockage to organ bleeding and swelling and, eventually, death of the organism (Table 1). Table 1: Summary of marine cyanobacterial metabolite bioactivities. The columns "I + II", "I + III", "II + III", and "I + II + III" indicate that a combination of bioactivities has been found; for example, compounds in column "I + II" show both cellular and tissue cell activity.
Figure 1: Summary of cyanobacterial bioactive compound general structures. These compounds can be classified into eight groups: amino acids, alkaloids, fatty acids, depsipeptides, glycosides, oligopeptides, vinyl halides, and other structures.
In addition, the structures of gray-highlighted compounds are shown. Among these classes, anatoxin-a(S), calothrixins, microguanidines, microviridins, and cylindrospermopsins have not been reported in the marine ecosystem.
Algal blooms will likely increase in frequency and intensity due to climate change and anthropogenic nutrient input. Accompanying this trend, the frequency of toxin-producing algae co-occurrence will also increase. Unfortunately, most researchers and monitoring programs focus only on certain toxins of interest, while other toxin classes with similar or higher toxicity remain unstudied. Therefore, if two toxins with similar toxicity co-exist, the estimate of the bloom toxicity might not be accurate. Further, HAB co-occurrence could result in synergistic effects caused by multiple bioactive metabolites that possess different known bioactivities. For example, when an enzyme inhibitor co-occurs with a linear cytotoxic oligopeptide, the inhibitor can deactivate the enzyme's ability to digest the oligopeptide, leading to more damage caused by the oligopeptide. Thus, a reevaluation of monitoring protocols to include strategies for toxin co-occurrence is needed.
Anti-angiogenesis in prostate cancer: knocked down but not out
Angiogenesis is a very complex physiological process involving multiple pathways that depend on the homeostatic balance between growth factors (stimulators and inhibitors). This tightly controlled process is stimulated by angiogenic factors present within the tumor and the surrounding tumor-associated stromal cells. The dependence of tumor propagation, invasion, and metastasis on angiogenesis makes inhibitors of new blood vessel formation attractive drugs for treating malignancies. Angiogenesis can be disrupted by several distinct mechanisms: by inhibiting endothelial cells, by interrupting the signaling pathways, or by inhibiting other activators of angiogenesis. This strategy has shown therapeutic benefit in several types of solid tumors, leading to Food and Drug Administration (FDA) approval of anti-angiogenic agents in the treatment of kidney, non-small cell lung, colon, and brain cancers. Although no angiogenesis inhibitors have been approved for patients with metastatic prostate cancer, therapies that target new blood vessel formation remain an emerging and promising area of prostate cancer research.
INTRODUCTION
Angiogenesis is the process of new blood vessel formation; it is a normal process in growth, in wound healing, and in the formation of granulation tissue, but it is also a crucial step in cancer growth, invasion, and metastasis. Because a tumor depends on the diffusion of nutrients and oxygen, establishing a sufficient blood supply is a critical and limiting step for continued tumor progression. 1 As the cancer progresses and cells in the center of the tumor become more hypoxic, the tumor activates neo-angiogenesis by shifting the homeostasis between angiogenesis inhibitors and stimulators, a process known as the 'angiogenic switch'. 2 This switch can occur at different stages of tumor development as a result of metabolic stresses such as acidosis, inflammation, or hypoxia. In addition, pro- and anti-angiogenic factors are produced not only by tumor cells but also by stromal cells of the tumor microenvironment, which also plays an important role in tumor development by modulating the tumor's progression and metastasis. 3 The tumor vessels that are eventually formed differ from normal vasculature: they are disorganized, with irregular structure and altered interactions between endothelial cells. 4 Once cancer cells generate their own blood supply, they are capable of further invasion and have the capacity to metastasize. Folkman, in 1971, proposed the hypothesis that cancer growth is dependent on the formation of new blood vessels, 5 which was repeatedly confirmed by multiple clinical studies with several angiogenesis inhibitors.
6 Although the most well-described angiogenic factor is the vascular endothelial growth factor (VEGF), several other angiogenic stimulators, mainly receptor tyrosine kinase ligands 7 such as fibroblast growth factors (FGFs), angiopoietin-1, epidermal growth factor (EGF), and platelet-derived growth factor (PDGF), have been described. Microvessel density, a histological measure of new blood vessel formation within a tumor, has been shown to correlate with Gleason score and may predict clinical or biochemical recurrence. 12 However, other studies have yet to confirm that microvessel density can be used as an independent prognostic factor. 13 In addition, studies have found that hypoxia can upregulate the expression of VEGF in prostate cancer 14 and that hypoxia-inducible factor-1 (HIF-1), a key regulator responsible for the survival of cells in hypoxic conditions and a mediator of VEGF expression, has higher expression in prostate cancer cells compared with benign prostate cells. 15 Finally, some studies have demonstrated that in vivo alterations of testosterone levels regulate the expression of FGF, VEGF, and angiopoietin-family members. 16 Inhibition of angiogenesis, alone or in combination with chemotherapy, has potential antitumor efficacy against metastatic prostate cancer, and several anti-angiogenic agents have been tested in phase III clinical trials or are currently undergoing testing in clinical trials (Table 1 and Table 2).
LESSONS LEARNED FROM COMPLETED CLINICAL TRIALS OF ANTI-ANGIOGENIC AGENTS IN PROSTATE CANCER
None of the completed phase III clinical trials of anti-angiogenic agents performed to date met expectations of extending life in men with metastatic prostate cancer. The results of early phase studies raised great expectations for anti-angiogenesis treatment alone or in combination with cytotoxic chemotherapy in prostate cancer patients; however, this could not be confirmed in randomized clinical trials. Experience from over a decade's worth of clinical trials has identified some of the key challenges in the clinical development of anti-angiogenic agents in prostate cancer. Taken together, the results of anti-angiogenic studies in prostate cancer demonstrate the need for better clinical trial endpoints and markers of clinical benefit.
What is the appropriate clinical trial endpoint?
Historically, overall survival (OS) has been considered the 'gold standard' for evaluating novel treatments in oncology because of its objectivity; however, the use of OS as an endpoint is increasingly difficult given the long survival of prostate cancer patients and the additional survival benefit associated with novel therapies such as abiraterone, sipuleucel-T and enzalutamide that patients may receive after disease progression. Progression-free survival (PFS) may be a surrogate endpoint that can be met earlier and shorten the time for drug development; however, PFS is not considered an ideal endpoint, as it may not necessarily translate into an OS improvement. 17 Potential measures of progression can include changes in prostate specific antigen (PSA), clinical status and/or imaging. These evaluations may not always correlate with each other, or with activity of the disease. Moreover, the time at which progression is detected depends on the timing and frequency of assessments. In addition, investigators may differ in their interpretation of bone scan results or clinical progression. Definitions for PSA progression have been proposed by the PSA Working Group (PSAWG). To avoid misclassification of bone scan flares at the first assessment, the PSAWG2 recommends that patients treated with non-cytotoxic drugs who are found to have new lesions on their first scan receive a second confirmatory scan after six weeks; they would be considered to have progressed if two additional lesions are noted on the confirmatory scan. The PSAWG further recommends a modification to the Response Evaluation Criteria In Solid Tumors (RECIST), such that only changes in lymph nodes that were 2 cm or greater at baseline are reported. 18,19 However, these guidelines have not been prospectively validated.
In an attempt to identify intermediate clinical endpoints in prostate cancer trials, Halabi and colleagues 20 performed a pooled analysis of nine Cancer and Leukemia Group B (CALGB) trials conducted from 1991 to 2004 that included 1296 chemotherapy-naïve patients with castrate-resistant prostate cancer (CRPC). They reported that PSA biochemical progression at six months and PFS at three and six months may predict OS, but those results needed to be prospectively validated. An analysis of the SWOG 9916 clinical trial, which evaluated the use of docetaxel in metastatic CRPC, found that biochemical response (30% decline in PSA at 3 months) was predictive of OS. 21 The search for the ideal surrogate endpoint(s) for OS in prostate cancer that could shorten the time to complete prostate cancer clinical trials is still ongoing.
Novel mechanisms of action may not be measured by current standards of progression
The above-mentioned analyses of the association between PFS or biochemical responses and OS were conducted using older studies of chemotherapy-naïve CRPC and may not be appropriate for novel therapies. For example, sipuleucel-T did not improve response rate, delay progression or reduce PSA compared with placebo; however, this immunotherapy demonstrated an improvement in OS. 22 In addition, PSA may not be an appropriate indicator of the activity of targeted agents. In a phase II study of cabozantinib (described below), PSA did not correlate with radiologic changes in bone or soft tissue. 23 Preclinical studies using LNCaP prostate cancer cell lines treated with sorafenib demonstrated inhibition of cancer cell growth with a simultaneous PSA increase, suggesting that PSA may not be an appropriate biomarker of sorafenib's anticancer activity (discussed below). 24 Clinical studies of sorafenib have also suggested that PSA may not be an indicator of its activity in advanced prostate cancer patients. 24
Toxicity
Given the advanced age of most men with prostate cancer, careful attention to toxicity profiles is especially important. Novel treatments, including the inhibitors of angiogenesis described in this review can be associated with toxicities such as hypertension, edema, thromboembolic events and bleeding. Therefore, it is possible that a drug may improve PFS but not OS if it causes excess toxicity.
Bevacizumab
Bevacizumab (Avastin®; Genentech, San Francisco, CA, USA) is a recombinant, humanized monoclonal antibody that blocks angiogenesis by inhibiting VEGF-A. It is FDA approved for the treatment of several malignancies including colorectal carcinoma, metastatic renal cell carcinoma, non-squamous non-small cell lung cancer, and recurrent glioblastoma. In a phase II study, 15 patients with chemotherapy-naïve metastatic CRPC were treated with single-agent bevacizumab 10 mg kg −1 IV every 14 days. There were no objective responses, and only 4 patients (27%) had a PSA decline of less than 50%. The trial was halted for futility; 25 however, several subsequent trials suggested potential activity when bevacizumab was combined with chemotherapy in patients with CRPC.
CALGB 90006 was a phase II trial that enrolled 79 patients who received docetaxel 70 mg m −2 IV and bevacizumab 15 mg kg −1 IV every three weeks, with estramustine 280 mg TID on days one through five. Seventy-seven patients were evaluable and received a median of eight cycles; of these, 58 patients (75%) had a 50% PSA decline. Twenty-three of 39 patients with measurable disease (59%) had a partial response (PR). PFS was 8.0 months with a median OS of 24 months. The most common severe toxicities were neutropenia (69%), fatigue (25%), and thrombosis and embolism (9%). 26 This study did not meet its primary endpoint of PFS; however, the observed anti-tumor activity and favorable OS led to a phase III study of bevacizumab with docetaxel chemotherapy.
CALGB 90401 was a phase III study that randomized 1050 patients to docetaxel (75 mg m −2 IV every 3 weeks) with 10 mg of daily prednisone, with or without 15 mg kg −1 bevacizumab. The primary endpoint was OS, and secondary endpoints were PFS, objective response (OR) and 50% decline in PSA. The addition of bevacizumab did not improve OS despite improvements in OR and PFS. The median OS was similar between the two arms: 22.6 months in the bevacizumab group vs 21.5 months in the control group (HR 0.91; P = 0.181). The addition of bevacizumab was also associated with greater treatment toxicity (grade ≥3 neutropenia, leukopenia, hypertension, fatigue, gastrointestinal bleeding and perforation) and a significantly higher number of treatment-related deaths (4.0% vs 1.2%; P = 0.005). 27 OS in the control group was longer than reported in other trials, 28 raising the possibility that the study was underpowered or that patients were enrolled earlier in their disease course, which could lead to a lead-time bias. Interestingly, recent results from a phase III clinical trial in metastatic colorectal carcinoma (ML18147) showed that maintaining bevacizumab with standard chemotherapy beyond disease progression improved OS, 29 suggesting that the duration of antiangiogenic treatment may be important and that the mechanism of resistance to anti-VEGF agents may be different. The prostate cancer trials described above did not continue bevacizumab beyond disease progression.
Sorafenib
Sorafenib (Nexavar®, Bayer HealthCare and Onyx Pharmaceuticals, Emeryville, CA, USA) is a small molecule tyrosine kinase inhibitor (TKI) that targets RAF kinase in addition to VEGF receptor 2 (VEGFR-2) and platelet-derived growth factor receptor beta (PDGFR-beta), resulting in antiangiogenic effects. The agent is FDA approved for hepatocellular carcinoma and renal cell carcinoma. Sorafenib has been evaluated in phase II studies in patients with CRPC both prior to and following docetaxel chemotherapy. Dahut and colleagues 24 reported results of a single-arm study of sorafenib given at 400 mg daily. Initial results from the first 22 patients with CRPC following docetaxel chemotherapy showed no PSA declines greater than 50%. Of the 21 patients with progressive disease, 13 had PSA progression only, with stable disease defined by clinical and radiographic criteria. The second part of the study enrolled 24 additional patients (21 previously treated with docetaxel chemotherapy, with a median Gleason score of 8). Ten patients had stable disease and one patient had a PR. Median PFS (defined by clinical or radiographic criteria) was 3.7 months and median OS was 18.0 months. Pooled data from both stages of the trial (N = 46) demonstrated a median OS of 18.3 months. Reported toxicities were grade 2 and 3 hand-foot skin reaction, rash, transaminitis, and fatigue. 30 Another phase II trial enrolled 57 chemotherapy-naïve CRPC patients who were treated with sorafenib 400 mg BID. Of the 55 evaluable patients, only two had a PSA decline of more than 50% and none had objective responses based on RECIST criteria. Interestingly, 15 patients had stable disease, and 31% of patients had not progressed by 12 weeks. 31 Chi reported phase II findings in 2008 with 28 chemotherapy-naïve CRPC patients.
Only 3.6% of patients had a PSA decline of more than 50%; interestingly, more patients had PSA declines after treatment discontinuation, indicating that treatment with sorafenib may have increased PSA levels independent of tumor growth. 32
Sunitinib
Sunitinib (Sutent®, Pfizer Inc., New York, NY, USA) is an oral multi-targeted TKI with activity against VEGFR-2, PDGFR-beta, FLT-3 and KIT, which play a role in tumor angiogenesis and tumor cell proliferation. It is FDA approved for advanced renal cell carcinoma and for gastrointestinal stromal tumor (GIST) after failure of imatinib. Sunitinib has been studied with docetaxel in several clinical trials. Zurita completed a phase I/II trial of sunitinib combined with docetaxel and prednisone in 55 chemotherapy-naïve CRPC patients. Patients received sunitinib 37.5 mg per day on days 1-14, docetaxel 75 mg m −2 on day one and prednisone 5 mg BID. The primary endpoint was PSA decline by PSAWG-1 criteria. Of the 55 patients, 56% had a PSA decline and 39% had a partial response, with a median time to progression (TTP) of 42 weeks. Median PFS and OS were 12.6 and 21.7 months, respectively. Only 22% of patients completed the planned 16 cycles of treatment; 36% discontinued for disease progression, while 27% discontinued due to adverse events, most commonly grade 3 and 4 neutropenia (75%), grade 3 and 4 febrile neutropenia (15%) and fatigue (15%). 33 A randomized, multicenter phase III trial comparing sunitinib and prednisone with prednisone alone in CRPC patients who had failed docetaxel-based therapy was halted for lack of efficacy at a planned interim analysis.
Aflibercept
Aflibercept has binding affinity for the VEGF-A and VEGF-B isoforms and the placental growth factors PlGF1 and PlGF2, thereby inhibiting angiogenesis. 35 It is FDA approved for the treatment of patients with metastatic colorectal cancer that is resistant to or has progressed following an oxaliplatin-based regimen. 36 Aflibercept has been tested in phase I and II clinical trials with docetaxel, 37 although no phase II trials of this combination have been done in patients with metastatic CRPC.
VENICE was a phase III, multicenter, randomized, double-blind, placebo-controlled study that enrolled 1224 chemotherapy-naïve patients with metastatic CRPC. The study randomized patients to docetaxel (75 mg m −2 IV every 3 weeks) and prednisone (5 mg BID) plus aflibercept (6 mg kg −1 IV every 3 weeks), or to docetaxel, prednisone and placebo. There was no improvement in OS in the aflibercept group (22.1 vs 21.2 months, HR 0.94; P = 0.38). In addition, there was a statistically significant increase in side effects in the aflibercept arm: grade 3 and 4 gastrointestinal symptoms (30% vs 8.0%), hypertension (13% vs 3.3%), bleeding (5.2% vs 1.7%), fatigue (16% vs 7.7%), infections (20% vs 10%) and treatment-related fatal adverse events (3.4% vs 1.5%). 38
Thalidomide and lenalidomide
Thalidomide (Thalomid®, Celgene Corporation, Summit, NJ, USA) is an oral synthetic glutamic acid derivative with teratogenic, immunomodulatory and anti-angiogenic activities. Its mechanism of action is still not clearly understood. It inhibits the production of tumor necrosis factor alpha (TNF-alpha), basic fibroblast growth factor (bFGF) and VEGF, causing inhibition of angiogenesis. 39 It is FDA approved for newly diagnosed multiple myeloma. It has been evaluated alone or in combination with cytotoxic agents in prostate cancer. A phase II trial of 100 mg daily of thalidomide in CRPC patients demonstrated a >50% PSA decline in 3 of 20 patients (15%). 40 A randomized phase II study tested docetaxel (30 mg m −2 IV weekly for 3 weeks on 28-day cycles) with or without thalidomide (200 mg daily). In an updated analysis with a median follow-up of 46.7 months, the median OS for the combined arm was 25.9 months vs 14.7 months for docetaxel alone, a statistically significant difference (P = 0.04). Thromboembolic events occurred in 12 of the first 43 patients; following these events, prophylactic anticoagulation with low-molecular-weight heparin was given in the combination arm. Other toxicities in the combined arm were manageable (fatigue, neuropathy, depression and pleural effusions). 41 Lenalidomide (Revlimid®, Celgene Corporation, Summit, NJ, USA) is a thalidomide analog. It inhibits TNF-alpha production, promotes G1 cell cycle arrest and apoptosis of malignant cells, and reduces serum levels of VEGF and bFGF. It is FDA approved for newly diagnosed multiple myeloma, mantle cell lymphoma and low or intermediate-1 risk myelodysplastic syndromes (MDS). In phase I/II clinical trials, lenalidomide demonstrated activity and tolerability in prostate cancer patients when used as a single agent 42 or in combination with docetaxel and prednisone.
43 These results provided the basis for a randomized phase III clinical trial of lenalidomide in combination with docetaxel and prednisone as first-line therapy for metastatic CRPC (the MAINSAIL trial). Eligible patients were randomized to docetaxel 75 mg m −2 on day one and prednisone 5 mg BID plus lenalidomide 25 mg daily, or to docetaxel, prednisone and placebo. The primary endpoint was OS, and key secondary endpoints were overall response rate (ORR), PFS, and safety. The study enrolled a total of 1059 patients but was discontinued on the recommendation of the Data Monitoring Committee. The median OS was shorter in the lenalidomide arm (77 weeks) and had not been reached in the placebo group (HR 1.53, P = 0.0017). Median PFS was 45 weeks with lenalidomide and 46 weeks with placebo (HR 1.32, P = 0.0187). In addition, patients randomized to the lenalidomide arm had significantly higher rates of febrile neutropenia and other non-hematological toxicities. 44
Dual anti-angiogenic blockade (thalidomide and bevacizumab)
Dual anti-angiogenic therapy (bevacizumab and thalidomide) in combination with docetaxel and prednisone has also been evaluated in patients with metastatic CRPC. A phase II trial reported a 90% biochemical response rate and an ORR in measurable disease of 64%. The median OS was 28.4 months, longer than in historical controls. 28 However, this combination therapy was very toxic. All patients developed grade 3 and 4 neutropenia, and 20% had grade 3 and 4 thrombocytopenia or anemia. Grade 3 and 4 non-hematologic toxicities occurring in more than 10% of patients were syncope and hypertension. Significant thalidomide-related toxicities were constipation (55%), fatigue (35%), peripheral neuropathy (13%), and depression (10%). Grade 2 osteonecrosis of the jaw occurred in 18.3% of patients, much higher than previously reported rates. 45
ONGOING PHASE III CLINICAL TRIALS OF ANTI-ANGIOGENIC AGENTS IN PROSTATE CANCER
Cabozantinib
Cabozantinib (Cometriq®, XL184, Exelixis, San Francisco, CA, USA) is an orally bioavailable dual TKI with strong activity against VEGF receptor 2 (VEGFR2) and c-MET. It is FDA approved for the treatment of medullary thyroid carcinoma. c-MET is also expressed in prostate cancer tissue. 46 Based on the broad activity demonstrated in several phase I trials, a phase II randomized discontinuation trial was conducted in nine selected tumor types including CRPC. One hundred and seventy-one men with CRPC were enrolled. Seventy-two percent demonstrated regressions in soft tissue metastases, and 68% of patients showed significant improvement on bone scans, including CR in 12% of evaluable patients. The ORR at 12 weeks was 5%, with stable disease (SD) in 75% of patients. The median PFS was 23.9 weeks for patients previously treated with docetaxel chemotherapy (N = 74) and 29.7 weeks for chemotherapy-naïve patients (N = 97). 23 Interestingly, the improvements in bone metastasis were accompanied by improvements in serum markers associated with bone destruction (c-telopeptide and alkaline phosphatase) and by pain improvement in 67% of patients. More than half of the patients enrolled in the study had significant toxicity, mostly fatigue and gastrointestinal symptoms including constipation, diarrhea, nausea and decreased appetite. A recent study tested a lower dose of cabozantinib (40 mg) and found that the drug had a similar clinical effect but less toxicity. 47 Two phase III studies are currently underway in patients with CRPC and bone metastases who have received prior docetaxel and abiraterone or enzalutamide (COMET, the Cabozantinib MET Inhibition CRPC Efficacy Trials 1 (NCT01605227) and 2 (NCT01522443)). COMET-1 randomizes patients to cabozantinib vs prednisone and evaluates OS, whereas COMET-2 randomizes patients to cabozantinib vs mitoxantrone and evaluates the durability of pain response (Table 2).
Tasquinimod
Tasquinimod (ABR-215050, Active Biotech, Lund, Sweden) is a quinoline-3-carboxamide linomide analog with anti-angiogenic and potential anticancer activities. Tasquinimod has been shown to decrease blood vessel density, but its exact mechanism of action is still unclear. 48 It is presumed to have an anti-angiogenic effect by downregulating hypoxia-inducible factor-1α and by inhibiting myeloid-derived suppressor cells (MDSC), which play an important role in angiogenesis. Interestingly, it was also found to be an inhibitor of S100A9, which is expressed on MDSC and in the tumor microenvironment and has been postulated to have a role in immune suppression. A phase II study randomized 206 patients with metastatic CRPC to tasquinimod vs placebo. Median PFS was 7.6 vs 3.3 months (P = 0.0042). The treatment was well tolerated; the most common side effects were fatigue, nausea and inflammation. There were a few rare but serious adverse events, including hyperamylasemia, sinus tachycardia and stroke. 49 A randomized, double-blind, placebo-controlled phase III clinical trial in men with metastatic CRPC recently completed enrollment (1200 patients) (NCT01234311). The final results of the trial are not yet available (Table 2).
OTHER ANTI-ANGIOGENIC AGENTS CURRENTLY UNDER EVALUATION IN PROSTATE CANCER
Cediranib
Cediranib (Recentin®, AZD2171, AstraZeneca, London, UK) is an oral small molecule inhibitor of VEGFR-1, VEGFR-2 and VEGFR-3, and also of the PDGF receptor and c-kit. 50 Cediranib has been reported to have activity in prostate cancer. A phase I trial reported a maximum tolerated dose of 20 mg, with dose-limiting toxicities of muscle weakness and hypertension. 51 It was studied in a phase II study of 59 patients, of whom two-thirds were heavily pretreated with two or more previous chemotherapy regimens. This study met its primary endpoint. Six of 39 patients with measurable disease had partial responses. At six months, 43.9% of patients were progression free; the median PFS and OS were 3.7 months and 10.1 months, respectively. The most frequent adverse events were fatigue, anorexia, weight loss and hypertension. The addition of prednisone reduced the incidence of toxicities. 52 A phase II study investigating the use of cediranib with dasatinib in patients with docetaxel-refractory metastatic CRPC is currently underway (NCT01260688). Another phase II study is evaluating docetaxel with or without cediranib in chemotherapy-naïve patients with CRPC (NCT00527124).
TRC105
TRC105 (Tracon Pharmaceuticals, San Diego, CA, USA) is a therapeutic human/murine chimeric monoclonal antibody to CD105 (endoglin), a TGF-β accessory receptor that is highly expressed on tumor vessel endothelial cells and appears to be essential during angiogenesis by altering TGF-β and BMP-9 signaling. By binding to CD105, TRC105 may inhibit angiogenesis. A phase I study enrolled 50 patients with advanced solid tumors who were treated with escalating doses of TRC105 and demonstrated some evidence of clinical activity. Twenty-one of the 45 evaluable patients (47%) had stable disease at 2 months, and 6 of 44 were progression free at 4 months, including two ongoing responses at 48 and 18 months. The safety profile of TRC105 appears to be distinct from that of other VEGF inhibitors; it was well tolerated, with common toxicities such as anemia, infusion reactions and telangiectasia. 53 An ongoing clinical trial is testing TRC105 as a single agent in metastatic CRPC (NCT01090765).
Trebananib
Trebananib (AMG 386, Amgen, Thousand Oaks, CA, USA) is a novel peptide-Fc fusion protein that disrupts tumor endothelial cell proliferation and angiogenesis by preventing the interaction between angiopoietins (Ang) 1 and 2 and the Tie2 receptor. A phase I study enrolled 32 patients and demonstrated some evidence of clinical activity in advanced solid tumors. Four patients had stable disease at 16 weeks, and one ovarian cancer patient had a durable partial response lasting 156 weeks. Trebananib was well tolerated; the most commonly observed adverse events were peripheral edema, fatigue and proteinuria. 54 A phase I/II study investigating the use of abiraterone with or without trebananib in patients with chemotherapy-naïve metastatic CRPC is currently underway (NCT01553188).
CONCLUSIONS
While targeting angiogenesis appears to be a rational therapeutic approach for metastatic CRPC, there are still major obstacles in identifying the appropriate timing and the patients who may benefit from these agents. Several phase III trials of anti-angiogenic agents were discouraging; however, anti-angiogenic agents are not out (yet). The role of anti-angiogenic agents in metastatic CRPC still remains to be defined, with tasquinimod and cabozantinib being evaluated in phase III clinical trials along with several other angiogenesis inhibitors in phase II studies. Forthcoming results from these clinical trials will hopefully clarify the role of angiogenesis inhibitors in prostate cancer.
There are several challenges in drug development for this class of agents. Previously used measures of treatment effect (PFS, PSA response) may not be appropriate for angiogenesis inhibitors. The development of biomarkers of anti-tumor and anti-angiogenic activity, including novel imaging modalities, may help to clarify the true activity of these drugs. In addition, these treatments must have acceptable safety profiles given the advanced age at presentation of many men with prostate cancer. The role of combination therapies may also be explored, with early evaluation for both safety and efficacy.
AUTHOR CONTRIBUTIONS
MB and YNW both drafted the manuscript. Both authors read and approved the final manuscript.
COMPETING INTERESTS
MB has no competing financial interests. YNW has received grant support from Pfizer.
Evaluation of antagonistic potential of bio-agents against anthracnose of French bean (Colletotrichum lindemuthianum)
The French bean (Phaseolus vulgaris) is one of the most widely grown grain legume crops around the world, covering an area of about 28 million hectares with an annual production of 20 million tonnes (FAO 2016). French bean suffers from many diseases caused by fungi, bacteria, viruses, nematodes and abiotic stresses. Among the fungal diseases, anthracnose is the most prevalent. Colletotrichum lindemuthianum attacks the bean leaves, causing dark brown necrotic lesions and decreased leaf photosynthetic activity. Yield loss is due to early leaf senescence and plant death, shrunken seed, and an increase in the amount of diseased seed with lesions on its coat. Such beans have a repulsive appearance and are not preferred by consumers. The disease is characterized by serious leaf spotting, ultimately resulting in 'shot hole' symptoms and finally defoliation, which greatly affects yield. In the present investigation, five bio-agents, viz., Trichoderma viride, Trichoderma harzianum, Chaetomium globosum, Pseudomonas fluorescens and Bacillus subtilis, were evaluated for their efficacy against C. lindemuthianum through the dual culture technique under in vitro conditions. The percent inhibition of mycelial growth of the pathogen by the bio-agents was recorded after 48, 96, and 168 hrs. At 96 hours after inoculation, the maximum percent inhibition of C. lindemuthianum was recorded with Trichoderma viride (49.25%), which was significantly superior to all the tested bio-agents, followed by P. fluorescens (46.95%), while at 168 hours after inoculation the maximum percent inhibition was recorded with Chaetomium globosum (59.50%), followed by P. fluorescens (58.14%) and Trichoderma viride (57.04%).
Introduction
The French bean (Phaseolus vulgaris) is one of the most widely grown grain legume crops around the world, covering an area of about 28 million hectares with an annual production of 20 million tonnes (FAO 2016). Asia is the largest continent producing common beans and exporting them to other countries in the world; about 46% of common beans are produced in Asia. In India, it is grown in almost all parts of the country, such as Himachal Pradesh, Uttarakhand, Jammu and Kashmir, Punjab, Haryana, Uttar Pradesh, Bihar, Gujarat, Madhya Pradesh, Maharashtra, Karnataka, Andhra Pradesh and Tamil Nadu. In India, it is cultivated both as dry and snap bean on an area of about 0.15 million hectares with an annual production of approximately 0.42 million tonnes (FAO 2016). In Uttar Pradesh it is grown on an area of 9.8 mha with a total bean production of 147.38 mt (NHB 2016). It is a nutritive vegetable, rich in protein (1.70 mg), calcium (1.32 mg), thiamine (0.08 mg) and vitamin C (2.4 mg) per 100 g of edible pods. Dry leaves, threshed pods and stalks are nutritious feed for animals. It has anti-diabetic properties and is a good natural cure for bladder burns, cardiac problems and diarrhea. French bean suffers from many diseases caused by fungi, bacteria, viruses, nematodes and abiotic stresses. Among the fungal diseases, powdery mildew, anthracnose, Cercospora leaf spot, web blight and dry root rot are the most prevalent. In recent years, anthracnose caused by Colletotrichum lindemuthianum has been reported to have become one of the major diseases, known to occur in many countries viz., India, Nigeria, Thailand, Philippines, Upper Volta, Zambia, Palmira, Columbia, etc. (Agarwal, 1991) [3]. It occurs in all parts of the world wherever French bean is cultivated. In India, French bean anthracnose was first reported from Jorhat in Assam state in 1951 (Majid, 1953) [1].
The disease has been reported from all major French bean growing regions of India in mild to severe form, and in tropical and subtropical areas it causes considerable damage by reducing seed quality and yield (Sharma et al., 1971) [2]. The disease causes huge losses in temperate and subtropical zones. Plants are susceptible at all growth stages, and infection of a susceptible cultivar under favorable conditions can lead to an epidemic resulting in up to 100% yield loss (Fernandez et al., 2000). It produces symptoms as circular, reddish brown, sunken spots with dark centres and bright red-orange margins on leaves and pods. The disease also produces cankers on petioles and stems that cause severe defoliation and rotting of fruits and roots. Infected fruit has small, water-soaked, sunken circular spots that may increase in size up to 1.2 cm in diameter. Anthracnose, caused by Colletotrichum lindemuthianum (Sacc. and Magn.), is the most dangerous disease of common bean. Field losses in these regions, due to seedling, leaf, stem and pod infections, are up to 90% under favourable climatic conditions. Colletotrichum lindemuthianum attacks the bean leaves, causing dark brown necrotic lesions and decreased leaf photosynthetic activity. Yield loss is due to early leaf senescence and plant death, shrunken seed, and an increase in the amount of diseased seed with lesions on its coat. Such beans have a repulsive appearance and are not preferred by consumers. The disease is characterized by serious leaf spotting, ultimately resulting in 'shot hole' symptoms and finally defoliation, which greatly affects yield. Infection of pods directly damages the seeds and reduces their germinability. Pod infection may result in complete loss of yield. The pathogen survives on seed and on plant debris in soil. The disease spreads in fields through airborne conidia and is more severe in cool and wet regions.
Synthetic fungicides are widely used by farmers to eradicate pathogens, but they result in environmental hazards and have harmful effects on human beings and animals. Chemical fungicides not only select for fungicide-resistant strains but also accumulate in food and ground water as residues. To overcome such hazardous control strategies, scientists and researchers from all over the world have paid more attention to the development of alternative methods which are, by definition, safe in the environment, non-toxic to humans and animals, and rapidly bio-degradable; one such strategy is the use of bio-control agents (BCAs) to control fungal plant diseases. Therefore, keeping in view the importance of the disease and the role of bio-control in overcoming it, the present investigation was undertaken.
Effect of bio-agents on mycelial growth of C. lindemuthianum
The bio-control agents were obtained from the Bio-control Laboratory, Department of Plant Pathology, Chandra Shekhar Azad University of Agriculture and Technology, Kanpur (Uttar Pradesh), and evaluated for their antagonistic effect against C. lindemuthianum under in vitro conditions by the dual culture technique.
Dual culture technique
Five antagonistic bio-agents were tested for their efficacy in inhibiting the growth of the pathogen to the maximum extent. Their effect on the growth of Colletotrichum lindemuthianum was studied using the dual culture technique. Twenty ml of sterilized and cooled potato dextrose agar was poured into sterile Petri plates and allowed to solidify. Fungal antagonists were evaluated by inoculating the pathogen at one side of the Petri plate and the antagonist at exactly the opposite side of the same plate, leaving a 3-4 cm gap. For this, actively growing cultures were used. For evaluation of bacterial antagonists, two mycelial discs of the pathogen were inoculated and the bacterial antagonist was streaked in the centre of the plate. Each treatment was replicated three times. After the required period of incubation, i.e., after the control plate reached 90 mm diameter, the radial growth of the pathogen was measured. Per cent inhibition over control was worked out according to the formula given by Vincent (1947):

I = [(C − T) / C] × 100

Where, I = per cent inhibition of mycelium, C = growth of mycelium in control, T = growth of mycelium in treatment
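Vincent's formula above is a simple proportional reduction of radial growth; a minimal sketch of the calculation (the 90 mm and 36 mm colony diameters are hypothetical example measurements, not data from this study):

```python
def percent_inhibition(control_mm: float, treatment_mm: float) -> float:
    """Vincent (1947): I = ((C - T) / C) * 100, where C and T are the radial
    growth (mm) of the pathogen in the control and treatment plates."""
    if control_mm <= 0:
        raise ValueError("control growth must be positive")
    return (control_mm - treatment_mm) / control_mm * 100.0

# Example: control colony reached 90 mm; colony facing an antagonist reached 36 mm.
print(round(percent_inhibition(90.0, 36.0), 1))  # -> 60.0
```

The same function applies unchanged whether growth is recorded as radius or diameter, as long as control and treatment are measured the same way.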
Effect of bio control agents on mycelial growth of pathogen
Five bio-agents were evaluated for their efficacy against C. lindemuthianum through the dual culture technique, as explained in 'Materials and Methods'. The studies on the inhibitory effect of Trichoderma viride, Trichoderma harzianum, Chaetomium globosum and the bacterial antagonists Pseudomonas fluorescens and Bacillus subtilis against the pathogen Colletotrichum lindemuthianum, using the dual culture technique on PDA medium, showed significant differences in reducing the growth of the pathogen under in vitro conditions. The inhibition of mycelial growth of the pathogen by the bio-agents was recorded after 48, 96, and 168 hrs. These results are in agreement with [4], who found Trichoderma spp. effective against Colletotrichum lindemuthianum, whereas Laxman (2006) [5] reported similar effects against C. truncatum. This could be due to several possible microbial interactions, such as stimulation, inhibition, and mutual intermingling of growth of the antagonistic isolate over the test pathogen, as enumerated by many workers.
Conclusion
Based on the results of the present investigation, it can be concluded that Chaetomium globosum was the most effective bio-agent against C. lindemuthianum, suppressing the growth of the pathogen.
A distinctive avian assemblage (Aves: Passeriformes) in Western Darién, Panama is uncovered through a disease surveillance program
Basic knowledge about the distribution of flora and fauna is lacking for most tropical areas. Improving our knowledge of the tropical biota will help address contemporary global problems, including emerging tropical diseases. Less appreciated is the role that applied studies can have in improving our understanding of basic biological patterns and processes in the tropics. Here, I describe a novel avifaunal assemblage in Western Darién province in the Republic of Panama that was uncovered during a vector-borne disease surveillance program. I compared the passerine bird species composition at 16 sites using records from recent ornithological expeditions sponsored by the Smithsonian Tropical Research Institute in Central and Eastern Panama. Based on the results of a Mantel test, geographic distance did not correlate with pairwise distinctiveness of sites. Instead, based on an index of distinctiveness modified from the Chao-Jaccard index, most sites were more or less similarly distinctive, with one site, Aruza Abajo, significantly more distinctive than the rest. I found that the distinctiveness of this site was due not only to the presence of several rare and range-restricted taxa, but also to the absence of taxa that are common elsewhere. This finding provides more evidence of high species composition turnover (beta-diversity) in the Panamanian biota, which appears to be driven by a combination of soil and climate differences over narrow distances. Rev. Biol. Trop. 62 (2): 711-717. Epub 2014 June 01.
For most tropical areas, we continue to lack basic knowledge regarding distributional patterns of biodiversity as well as the processes that create and maintain this biodiversity. At the same time, this knowledge gap impedes our ability to address problems of conservation, agriculture, and mitigation of emerging tropical diseases using the best available science. Recently, many natural history museums have begun to close this knowledge gap in both basic and applied biology through expeditions in collaboration with applied scientists (Winker, 2004; Pyke & Ehrlich, 2010). While it is routinely acknowledged that basic scientific discovery informs applied scientific inquiry (Yasué et al., 2006), less attention is given to the role that applied scientific endeavors can play in improving our basic knowledge about biodiversity patterns, especially in the tropics. Here, I describe an instance where a collection trip to survey birds and mosquitoes for equine encephalitic virus uncovered a unique and overlooked bird assemblage in Panama, a country with one of the best-studied avifaunas in the Neotropics.
The Smithsonian Tropical Research Institute Bird Collection (STRIBC) was founded in 2007 in order to better describe patterns of avian biodiversity across the Isthmus of Panama, as well as to understand interactions between birds and other environmental factors and biological agents, especially with regard to emerging tropical diseases. In 2010, at the onset of an outbreak of equine encephalitis in Eastern Panama that had claimed at least two human lives and caused considerable loss of livestock, the Panamanian Ministry of Agriculture requested that the STRIBC sample wild birds and mosquitoes in Aruza Abajo (geographical coordinates: 8.36, −77.95; Fig. 1), an affected site in Western Darién Province. Equine encephalitis outbreaks occur periodically in Eastern Panama, but almost nothing is known about the ecology of viral transmission in the area, including which mosquito species are local vectors (Navia-Gine et al., 2013) or whether rodents or birds are natural hosts (Arrigo, Adams, & Weaver, 2010).
At Aruza Abajo, I was impressed by many of the birds we encountered. For example, we collected the second Middle American specimen of the flycatcher Tolmomyias flaviventris; the third, fourth and fifth Middle American specimens of the woodpecker Colaptes punctigula; and several specimens of Phaethornis anthophilus, the most abundant hummingbird at the site. This species had only been recorded in Panama from five specimens collected more than 45 years ago. The ornithological literature for Panama (Ridgely & Gwynne, 1989; Angehr & Dean, 2010) refers to the Eastern portion of Darién province as being the most distinctive in Panama, but it is clear from these sources and others (González, Eberhard, Lovette, Olson, & Bermingham, 2003; Siegel & Olson, 2008) that Western Darién province has been systematically overlooked by ornithologists. In order to test my observation that the avifaunal assemblage in Western Darién province, and especially Aruza Abajo, was distinctive, I compared our records from Aruza Abajo to those of other locations where my colleagues and I have collected birds in Panama since 2002.
MATERIALS AND METHODS
I generated species incidence lists for 16 locations (Fig. 1) where the STRIBC or STRI-sponsored foreign natural history museums have collected birds in central and Eastern Panama since 2002. I participated in the collecting efforts at 15 of the 16 locations. Field work at each site varied from two to 13 days. Most expeditions were disease ecology surveys, while a few were solely biodiversity surveys, but all shared the same goal of sampling broadly across the local bird community, rather than making focused efforts to collect particular species of interest. In all cases, the majority of collecting employed standard ornithological mistnets, but occasionally collecting was supplemented by shotgun, where permitted. Data come from the digital collections databases managed by STRI. Although our fieldwork included collecting both passerines and non-passerines, in order to improve consistency among sites, I restricted my analyses to resident passerines only. All scientific collecting in Panama sponsored by STRI was approved by ANAM, Panama's Environmental Authority (permit numbers: DNPN-01-2002, DNPN-01-2003, DNPN-01-2004, DNPN-01-2004, SE/A-60-10, SE/A-137-10, SE/A-96-09, SE/A-44-10, SE/A-66-11, SE/A-2-12); likewise, scientific collecting undertaken by the STRI Bird Collection is done under IACUC approval (permits: 2007-03-03-15-07, 2011-0927-2014-03, 2013-0801-2016).
I used the Chao-Jaccard similarity index modified for replicated incidence data (Chao, Chazdon, Colwell, & Shen, 2005) to measure passerine species assemblage distinctiveness among the 16 localities. For each site, I tallied daily incidence (i.e., presence vs. absence). Therefore, for a given site, the total incidence for any given species could vary from 0 (never collected at that site) to n, where n equals the number of days of fieldwork at that site. I chose this replicated incidence approach because effort at each site varied in several ways (the number of days at a site, the number of nets employed and the hours of operation in a day, and whether shotgun collecting was allowed), and because the total maximum number of collected specimens per species from a given site varied among species based either on our Panamanian collecting permit or our overall research objectives. Details about the number of field personnel, mistnets deployed, and mistnet-hours were not available. Therefore, replicated incidence rather than a raw abundance approach should best deal with these differences in sampling effort, as well as the problem of incomplete sampling at a given site (Chao et al., 2005, 2006). It is certain that at none of the 16 sites did our collection efforts approach a full sampling of the local passerine community. The modified Chao-Jaccard index (Chao-Jaccard inc-est) was computed pairwise among all 16 sites in the program EstimateS 9 (Colwell, 2013). However, for the rest of this paper, I will be discussing assemblage distinctiveness, which I define simply as 1 − Chao-Jaccard inc-est. Thus, a distinctiveness value of 0 represents two sites where avifaunal assemblages overlap completely, whereas a value of 1 indicates two sites with completely different bird faunas. I tested for a relationship between pairwise site distinctiveness and geographic distance using a Mantel test implemented in R (R Core Team, 2012). In order to compare the distinctiveness among sites, I generated 16 t-tests, comparing each site's average pairwise distinctiveness measure to that of all other comparisons (that did not include that site). To correct for multiple comparisons, I employed a sequential Bonferroni correction, which constrains the familywise error rate (i.e., the overall P-value) to 0.05.
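The distinctiveness and multiple-comparison steps described above can be sketched as follows. This is a simplified illustration: it uses the classic Jaccard index on pooled species sets (the paper uses the Chao-Jaccard estimator for replicated incidence data, computed in EstimateS) and applies the Holm sequential Bonferroni step to a list of hypothetical p-values; the site names and species lists are invented for the example.

```python
from itertools import combinations

def jaccard_distinctiveness(species_a: set, species_b: set) -> float:
    """1 - Jaccard similarity: 0 = identical assemblages, 1 = no shared species."""
    union = species_a | species_b
    if not union:
        return 0.0
    return 1.0 - len(species_a & species_b) / len(union)

def sequential_bonferroni(pvals, alpha=0.05):
    """Holm's sequential Bonferroni: test p-values in ascending order against
    alpha / (m - rank); once one test fails, all larger p-values fail too."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    significant = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            significant[i] = True
        else:
            break
    return significant

# Toy example with three hypothetical sites:
sites = {"A": {"sp1", "sp2", "sp3"}, "B": {"sp2", "sp3", "sp4"}, "C": {"sp5", "sp6"}}
for x, y in combinations(sites, 2):
    print(x, y, round(jaccard_distinctiveness(sites[x], sites[y]), 2))
```

With 16 per-site tests, the smallest p-value is compared against 0.05/16 ≈ 0.003, matching the corrected α reported in the Results.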
RESULTS
Across the 16 sites, 183 species of resident passerines were recorded. Among the 16 sites, raw passerine species richness varied between 16 and 71 (mean S: 42.4 ± 18.0), while species incidence (i.e., total species-days) varied from 20 to 219 (mean: 88.3 ± 51.6). No species occurred on all 16 collecting tally lists, and only one species (Mionectes oleagineus) was recorded from 15 locations, with five additional species collected in at least 13 sites. Instead, the distribution of species shows a long tail of relatively rare taxa: 116 species (63.3% of all species recorded in the study) were collected in three or fewer sites, and nearly a third of all species in the study (N=60) were collected in only one location. Data for all 16 collecting sites are available at the Figshare depository (DOI: 10.6084/m9.figshare.941075).
Among the 16 sites, distinctiveness averaged 0.60 and ranged from 0.52 to 0.76, with the latter representing Aruza Abajo. Among all pairwise comparisons, the two sites estimated to be most similar were Old Gamboa and El Salto, whereas the most distinctive were Mendoza and Aruza Abajo. A Mantel test failed to reject the null hypothesis that pairwise distinctiveness was independent of geographic distance (r=0.092, p=0.17, Fig. 2). The 16 pairwise t-tests showed that three locations had significantly higher distinctiveness scores than the remaining pairwise comparisons before correcting for multiple tests (Villa del Carmen: p=0.03; Bayano: p=0.009; Aruza Abajo: p=2.0×10⁻⁷, Fig. 3), with only Aruza Abajo remaining significant after sequential Bonferroni correction (corrected α=0.003).
Several of the passerines collected at Aruza Abajo represented near-endemic species found only in Eastern Panama and adjacent Northern Colombia, including the flycatcher Tolmomyias flaviventris, the wren Campylorhynchus albobrunneus, and the conebill tanager Conirostrum leucogenys. These taxa were collected only at Aruza Abajo or, in the case of the wren, at one additional site. However, other taxa that were collected exclusively or almost exclusively at Aruza Abajo include more widespread taxa such as the antbird Myrmeciza longipes and the wren Cantorchilus leucotis. Although these species can be found across much of Panama, they were more routinely collected in Aruza Abajo relative to the remaining 15 sites. Finally, it is important to note that while these findings refer only to passerines, it is likely that the assemblage of non-passerines at Aruza Abajo is similarly distinctive; the hummingbird and woodpecker examples mentioned earlier are two species of non-passerines that were almost never collected outside Aruza Abajo.
The avifauna of Aruza Abajo is distinctive not just for what is there, but also for what is apparently missing. Only 11 of the 27 species that were collected at half of the study sites were collected at Aruza Abajo, whereas the median representation of these taxa at the other 15 sites was 20 of 27. One possible explanation for missing species is that they are replaced by novel taxa. The most likely example of taxon replacement is the antshrike Thamnophilus atrinucha, which tied for second place among the most frequently collected species across the 16 sites (N=14), but appears to be replaced at Aruza Abajo by Thamnophilus nigriceps, an antshrike endemic to Eastern Panama and adjacent Northern Colombia. T. nigriceps was collected at just four sites in this study.
In general, our results agree with earlier studies that have highlighted the high beta-diversity found in Panamanian biotic communities. Unlike Chust's finding for Panamanian trees (Chust et al., 2006), there is no evidence that distance drives the turnover in bird species in central and Eastern Panama, in agreement with a more recent study of trees of central Panama (Jones et al., 2013). Often, beta-diversity in Panamanian systems has been attributed to the strong rainfall gradient between Caribbean and Pacific Panama, especially in the canal area (e.g., phytophagous beetles: Ødegaard, 2006; and birds: Rompré, Robinson, Desrochers, & Angehr, 2007). Alternatively, Jones et al. (2013) suggest that soil type along with rainfall best predicts species turnover of trees and ferns in central Panama. Interestingly, evidence that soils contribute to patterns of tropical bird species distributions comes from recent studies that document a unique avifauna in Amazonian white-sand forests (Alonso & Whitney, 2003; Alonso, Metz, & Fine, 2013). Determining the cause of the distinctive avifaunal assemblage in Western Darién province, and whether such differences can be found in other floral and faunal assemblages, remains to be done. It is also important to note that variation in the bird assemblage among Panamanian field sites may be related to the extremely high degree of phylogeographic variation (i.e., within-species genetic variation) that has been routinely observed in lowland Panamanian plants (Jones, Cerón Souza, Hardesty, & Dick, 2013) and animals (e.g., birds: González et al., 2003; Miller et al., 2008; Miller, Bermingham, Klicka, Escalante, & Winker, 2010; frogs: Wang, Crawford, & Bermingham, 2008; and bats: Clare, Lim, Fenton, & Hebert, 2011; Hauswaldt, Ludewig, Vences, & Pröhl, 2011).
A poor understanding of faunal dynamics in tropical areas not only limits our ability to understand patterns of tropical biodiversity but also impedes our response to applied scientific problems such as vector-borne tropical diseases. In the case of the fatal 2010 outbreak of equine encephalitis in Western Darién, recent phylogenetic evidence indicates that the virus is likely endemic in the area (S. Weaver, pers. comm.), yet as demonstrated by this study, even the most basic understanding of potential vertebrate host communities in the area is lacking. These findings highlight the continuing need to re-examine the depth of basic biodiversity knowledge even in supposedly well-known tropical systems, and demonstrate the value of applied research programs in improving our knowledge of tropical biodiversity patterns.
Fig. 2. Visualization of a Mantel test comparing pairwise distinctiveness measures and geographic distance between 16 ornithological collecting sites in Central and Eastern Panama (r²=0.0085, p=0.17). Filled circles represent pairwise comparisons that include Aruza Abajo; unfilled circles represent all other pairwise comparisons.
Establishment of a Replicating Plasmid in Rickettsia prowazekii
Rickettsia prowazekii, the causative agent of epidemic typhus, grows only within the cytosol of eukaryotic host cells. This obligate intracellular lifestyle has restricted the genetic analysis of this pathogen and critical tools, such as replicating plasmid vectors, have not been developed for this species. Although replicating plasmids have not been reported in R. prowazekii, the existence of well-characterized plasmids in several less pathogenic rickettsial species provides an opportunity to expand the genetic systems available for the study of this human pathogen. Competent R. prowazekii were transformed with pRAM18dRGA, a 10.3 kb vector derived from pRAM18 of R. amblyommii. A plasmid-containing population of R. prowazekii was obtained following growth under antibiotic selection, and the rickettsial plasmid was maintained extrachromosomally throughout multiple passages. The transformant population exhibited a generation time comparable to that of the wild type strain with a copy number of approximately 1 plasmid per rickettsia. These results demonstrate for the first time that a plasmid can be maintained in R. prowazekii, providing an important genetic tool for the study of this obligate intracellular pathogen.
Introduction
Rickettsia prowazekii, the causative agent of epidemic typhus and a Category B Select Agent, is an obligate intracellular bacterium that grows directly within the cytosol of eukaryotic host cells. The obligate intracellular nature of R. prowazekii growth places considerable restrictions on the genetic manipulation of this pathogen. Foremost, it is essential that manipulations do not prevent the rickettsiae from infecting host cells, a requirement for rickettsial survival and growth. Since R. prowazekii cannot be grown as colonies on an agar surface using axenic media, standard bacterial cloning protocols are unavailable. In addition, plaquing techniques currently used for cloning spotted fever group rickettsiae that can polymerize actin for intracellular movement are problematic for R. prowazekii, which is deficient in actin polymerization [1,2,3]. Thus, clonal populations of R. prowazekii mutants must be isolated using labor-intensive and time-consuming techniques such as limiting dilution [4,5,6,7]. This inability to form colonies or efficiently form plaques prohibits the precise determination of R. prowazekii transformation frequencies [8]. However, despite these barriers, advances in the genetic manipulation of this intractable organism have been made. For example, identification of antibiotics suitable for the selection of rickettsial transformants [4,5], the use of fluorescent proteins as reporter genes [9], the adaptation of transposon systems for generating R. prowazekii insertional mutants [5,6], and the directed knockout of a rickettsial gene [7] have now been reported.
Although complementation of an R. rickettsii gene mutation using the Himar1 transposon system was recently achieved [8], the R. prowazekii genetic toolbox still lacks a replicating plasmid for extrachromosomal gene expression studies that would not result in chromosomal disruption. Fortunately, the demonstration that some rickettsial species harbor plasmids has added another genetic component to the rickettsial gene repertoire.
Originally, the first rickettsial genome sequencing projects targeting rickettsial pathogens failed to find plasmids, supporting the hypothesis that rickettsial species did not contain extrachromosomal elements. However, beginning with the identification of plasmids in R. felis [10], followed by plasmid identifications in rickettsiae ranging from other pathogens (e.g. R. akari) to arthropod endosymbionts (e.g. R. amblyommii, R. bellii, R. rhipicephali, and the rickettsial endosymbionts of Ixodes scapularis, REIS) [11,12,13], it became evident that plasmids are not uncommon in rickettsial species. Nevertheless, after multiple sequencing projects examining different strains, plasmids still have not been identified in R. prowazekii. To evaluate whether R. prowazekii can maintain a plasmid and to generate an additional tool for the genetic analysis of this pathogen, we introduced a recombinant plasmid derived from one of the natural plasmids of R. amblyommii into R. prowazekii and characterized its maintenance and its effect on rickettsial growth. To our knowledge this is the first plasmid shown to stably replicate in R. prowazekii, opening the door to genetic analyses requiring an extrachromosomal platform.
Materials and Methods
Bacterial strains and culture conditions
R. prowazekii Madrid E strain rickettsiae (Passage 283) were propagated and purified from hen egg yolk sacs [14] and L929 mouse fibroblasts (American Type Culture Collection, Manassas, VA, ATCC Number CCL-1) as described previously [5]. Purified rickettsiae were stored frozen in a sucrose-phosphate-glutamate-magnesium solution (0.218 M sucrose, 3.76 mM KH2PO4, 7.1 mM K2HPO4, 4.9 mM potassium glutamate, and 10 mM MgCl2). Rickettsiae-infected L929 cells were grown in modified Eagle medium (Mediatech, Inc., Herndon, VA) supplemented with 10% newborn calf serum (Hyclone, Logan, UT) and 2 mM L-glutamine (Mediatech) in an atmosphere of 5% CO2 at 34°C. Escherichia coli strain XL1-Blue (Stratagene, La Jolla, CA) was used as a recipient for shuttle vector pRAM18dRGA [15] and for preparation of plasmid DNA used in rickettsial transformation. XL1-Blue was cultured in Luria-Bertani (LB) medium at 37°C. For selection of E. coli transformants, rifampin was added to a final concentration of 50 µg/ml.
R. prowazekii transformation
Purified rickettsiae were made competent for transformation and electroporated, as previously described [5,16], in the presence of 14 µg of pRAM18dRGA plasmid DNA. Twenty-four hours following electroporation and infection of mouse fibroblast L929 cells, rifampin was added to a final concentration of 200 ng/ml, and rifampin selection was maintained throughout the experiment. The introduction of a gene conferring rifampin resistance into R. prowazekii has been approved by both the University of South Alabama Institutional Biosafety Committee and the Centers for Disease Control, Division of Select Agents and Toxins. Rickettsial infection and growth were monitored by microscopic examination of Gimenez-stained [17] infected cells on cover slips. For infection levels and calculations of rickettsiae per cell, 100 cells were analyzed at each time point. Fluorescent images were acquired using a Nikon Eclipse T2000-U fluorescent microscope and images were overlaid using MetaMorph Imaging System software (Universal Imaging Corporation).
Plasmid recovery
Total DNA from the rifampin-resistant rickettsial population (designated ME-pRAM18dRGA) grown in L929 cells was extracted using the DNeasy Blood & Tissue Kit (Qiagen, Valencia, CA). Following total DNA extraction, plasmid DNA was isolated using the Qiagen Plasmid Mini kit. Plasmid DNA (200 ng) was electroporated into XL1-Blue electrocompetent E. coli, and transformants were selected on LB agar plates containing 50 µg/ml rifampin. Resistant colonies were amplified, and plasmid DNA was extracted and subsequently sequenced by primer walking at the Iowa State University DNA Facility.
Rickettsial growth analyses
To compare growth characteristics of ME-pRAM18dRGA to those of the parent Madrid E strain, L929 cells were infected in suspension for 1 hour with either ME-pRAM18dRGA or the wild type Madrid E strain at similar multiplicities of infection. The infected cells were seeded in 60 mm dishes. Samples from each infection were harvested approximately every 24 hours. DNA was extracted from approximately 1×10⁶ infected cells using the Archive Pure DNA Cell/Tissue Kit (5 Prime Inc., Gaithersburg, MD). At sampling times, the medium was removed and adherent cells were gently rinsed with phosphate buffered saline (PBS). Cells were immediately lysed in the dish using 800 µl of Archive Pure lysis solution, and total DNA (L929 and rickettsiae) was extracted. DNA concentrations were determined using the NanoDrop 1000 spectrophotometer (Thermo Scientific, Wilmington, DE), and aliquots were diluted to a final working concentration of 10 ng/µl in RT-PCR water (Applied Biosystems, Austin, TX). Samples were prepared from master mixes, so that each reaction contained 10 ng of DNA, and analyzed by quantitative PCR (QPCR). Bacterial, host, and plasmid genome equivalents were determined by targeting the single-copy R. prowazekii rho (RP521) chromosomal gene, the host β-actin gene, and the gfpuv gene of pRAM18dRGA. Primer pairs specific for each gene are listed in Table 1. Prior to QPCR analyses, primer pair specificity was validated for each target at the working dilution using either rickettsial genomic DNA, L929 cell genomic DNA, or the pRAM18dRGA plasmid. Assays were performed using the LightCycler® DNA Master SYBR Green I QPCR master mix (Roche, Mannheim, Germany), a 500 nM concentration of each primer, and a Cepheid Smart Cycler® according to the manufacturer's protocol (Cepheid, Sunnyvale, CA). Cycle parameters were 1 cycle at 95°C for 2 min, followed by 40 cycles of 95°C for 15 s; 52.3°C (rho), 54°C (actin) or 58°C (gfpuv) for 15 s; and 72°C for 15 s.
Amplification specificity was confirmed by melting-curve analysis. Data acquisition and analysis were performed using the Cepheid Smart Cycler Version 2.0c software. Genome equivalents were determined by comparison to standard curves. Three independent biological samples were analyzed in duplicate for each time point.
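Determining genome equivalents by comparison to standard curves, as described above, amounts to interpolating each sample's threshold cycle (Ct) on a log-linear fit of Ct versus log10 of known input quantity. A minimal sketch follows; the Ct values and dilution series here are hypothetical illustrations, not data from this study:

```python
def fit_standard_curve(log10_quantities, ct_values):
    """Least-squares fit of Ct = m * log10(Q) + b; returns (slope, intercept).
    For 100% PCR efficiency the slope is about -3.32 cycles per decade."""
    n = len(ct_values)
    mean_x = sum(log10_quantities) / n
    mean_y = sum(ct_values) / n
    sxx = sum((x - mean_x) ** 2 for x in log10_quantities)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(log10_quantities, ct_values))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def genome_equivalents(ct, slope, intercept):
    """Invert the standard curve: Q = 10 ** ((Ct - intercept) / slope)."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series: 10^3 .. 10^6 genome equivalents.
logq = [3, 4, 5, 6]
cts = [30.1, 26.8, 23.5, 20.2]
m, b = fit_standard_curve(logq, cts)
# A sample with Ct halfway between the 10^4 and 10^5 standards:
print(round(genome_equivalents(25.15, m, b)))
```

In practice the same fit is run per target (rho, β-actin, gfpuv) against its own dilution series before ratios between targets are taken.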
Southern blot analyses
Total DNA was isolated from rickettsiae grown in L929 cells using the DNeasy Blood and Tissue Kit (Qiagen). DNA (2 mg) was digested with SpeI, HindIII, or XhoI and the resulting DNA fragments separated by agarose gel electrophoresis. DNA was subsequently transferred to NytranH SuPerCharge membranes (Schleicher & Schuell, Keene, NH). A gfp uv -specific probe used in Southern hybridizations [18] was PCR amplified (Table 1) and labeled using [a-32 P]dATP (MP Biomedicals, Inc., Philadelphia, PA) and the Multiprime DNA labeling system (Amersham, Piscataway, NJ). Hybridized fragments were visualized using a Cyclone Plus phosphoimager (Perkin Elmer, Waltham, MA).
Results
Plasmid maintenance in R. prowazekii
Plasmid pRAM18dRGA was introduced into competent R. prowazekii Madrid E rickettsiae via electroporation. Selection with rifampin was initiated 24 hours following electroporation. On day 11, a sample of the rickettsiae-infected cells was harvested, DNA extracted and examined by PCR for the presence of a rickettsial chromosomal gene (sdh) and for two genes (Rparr-2 and gfpuv) contained on the plasmid. A predicted PCR product was obtained for each gene (data not shown), indicating the presence of plasmid DNA within the rifampin-resistant population. Continued expansion of the infected host cell population generated a slowly increasing population of rifampin-resistant rickettsiae that exhibited fluorescence when examined microscopically (Fig. 1). Rickettsiae were isolated from this population and used to infect L929 cells at a high multiplicity of infection to increase the percentage of infected cells. The resulting rifampin-resistant rickettsial population (designated ME-pRAM18dRGA) was subsequently analyzed for the presence of plasmid DNA by Southern hybridization (Fig. 2) using a probe that spans the coding region of gfpuv. The three physical forms of plasmid DNA (linear, covalently closed circular, and open circular) can be observed in the Uncut lane of Figure 2, demonstrating the extrachromosomal nature of pRAM18dRGA in the ME-pRAM18dRGA strain. This is supported by the presence of the predicted restriction patterns for SpeI (which linearizes the 10 kb plasmid), HindIII (which generates a 2854 bp fragment containing the gfpuv gene) and XhoI (which cuts within the gfpuv gene, generating two labeled fragments of 4263 bp and 5985 bp). The two faint bands that appear below the linear fragment in the SpeI-digested DNA are likely the result of DNA degradation. No hybridization to R. prowazekii chromosomal DNA or to L929 cell DNA was observed (data not shown).
The absence of additional bands on the Southern blot supports the data that the plasmid has remained extrachromosomal and has not incorporated into the rickettsial chromosome at a detectable level. The ME-pRAM18dRGA population was serially passaged more than 10 times for a period of greater than 30 days, under selection, without the loss of fluorescence. For serial passages, confluent cell monolayers were trypsin-treated and cells harvested and seeded into new flasks at a 1:3 dilution. Cells reached confluence after approximately three days of growth. After extensive passaging, an extrachromosomal plasmid, with the predicted restriction pattern, could still be isolated from the rickettsiae. In addition, pRAM18dRGA could still be isolated from rickettsiae harvested from L929 cells that were serially passaged in the absence of selective pressure (rifampin) for more than two weeks.
The effect of plasmid maintenance on rickettsial growth
Growth of ME-pRAM18dRGA was evaluated by determination of rickettsial genome equivalents per host cell using QPCR and primers (Table 1) specific for the R. prowazekii rho gene and the host cell actin gene. Infections were initiated with rickettsiae harvested from L929 cells, and for each infection the percentage of infected L929 cells was greater than 90%. Evaluating the increasing number of rickettsiae per adherent cell permitted a comparison of the two strains (Madrid E and ME-pRAM18dRGA) (Fig. 3). Growth of the two rickettsial strains was comparable, with each exhibiting a generation time of approximately 13 and 12 hours, respectively.
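A generation (doubling) time such as the ~12-13 hours reported above can be estimated by fitting the log of rickettsiae-per-cell counts against time during exponential growth. The counts below are hypothetical values chosen for illustration, not the study's measurements:

```python
import math

def doubling_time_hours(times_h, counts):
    """Least-squares fit of ln(N) = ln(N0) + k*t; doubling time = ln(2)/k."""
    logs = [math.log(c) for c in counts]
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_l = sum(logs) / n
    k = sum((t - mean_t) * (l - mean_l) for t, l in zip(times_h, logs)) / \
        sum((t - mean_t) ** 2 for t in times_h)
    return math.log(2) / k

# Hypothetical rickettsiae-per-cell counts sampled every 24 h,
# doubling exactly every 12 h:
times = [0, 24, 48, 72, 96]
counts = [1.0, 4.0, 16.0, 64.0, 256.0]
print(round(doubling_time_hours(times, counts), 1))  # -> 12.0
```

Fitting on the log scale uses all time points rather than just the first and last, which dampens the effect of noise in any single QPCR measurement.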
While the Southern blot and QPCR data cannot eliminate the possibility that spontaneous rifampin-resistant rickettsiae lacking a plasmid exist in the population, plasmid copy number (see next section) did not change appreciably over the time course of the growth curve, suggesting that such a background population did not significantly affect the growth analysis. In addition, examination of cells infected with a population of plasmid-transformed R. prowazekii revealed that every infected cell contained fluorescent rickettsiae (Fig. 1). Absolute confirmation will require the isolation and expansion of a single rickettsia by limiting dilution.
Plasmid copy number
Plasmid copy number was determined using QPCR. The relative ratio of the plasmid gfp uv gene to the single-copy R. prowazekii rho chromosomal gene was determined using gene-specific primers (Table 1). Copy number was evaluated over two growth curves at daily intervals for five days (10 independent determinations). These experiments revealed (Fig. 4) that pRAM18dRGA maintained a low copy number per rickettsia of approximately 1 (0.86 ± 0.3, mean ± S.D.), which falls below the lower end of the copy number range (2.4-9.2) established for naturally-occurring rickettsial plasmids [11].
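The relative-ratio calculation underlying such a QPCR copy-number estimate can be sketched as follows. This assumes ~100% amplification efficiency for both primer pairs (the standard 2^-ΔCt simplification); the Ct values are hypothetical, chosen only to show how a ratio just below 1 arises:

```python
def plasmid_copy_number(ct_plasmid, ct_chromosome):
    """Relative copies of the plasmid target per chromosomal copy,
    assuming ~100% amplification efficiency for both primer pairs."""
    return 2.0 ** (ct_chromosome - ct_plasmid)

# Hypothetical Ct values: the gfp target crossing threshold ~0.2 cycles
# later than rho gives a ratio just below 1, as observed for pRAM18dRGA.
print(round(plasmid_copy_number(20.2, 20.0), 2))  # 0.87
```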
Discussion
This study documents for the first time that R. prowazekii can support plasmid replication. Characterization of rickettsial plasmids identified in non-pathogenic rickettsial species led to the construction of the vector pRAM18dRGA [15]. This plasmid contains a 7.3 kb fragment from the replicative pRAM18 plasmid of R. amblyommii [11] ligated to a pGEM vector, containing the rpsL P -Rparr-2/ompA P -gfp uv selection/detection cassette [6], for replication in E. coli. This vector represents the smallest pRAM18 derived plasmid tested that was capable of replicating in a non-R. amblyommii rickettsial species and encodes proteins with homology to the DnaA-like replication initiator and the ParA plasmid partitioning protein [15].
pRAM18dRGA was introduced into pathogenic R. prowazekii via electroporation and transformants were isolated using rifampin selection. The observation of GFP uv expression, under the control of a rickettsial promoter, also demonstrates that plasmid-borne genes can be expressed and detected, providing a promising first step in complementation assays. Although pRAM18dRGA was maintained at a low copy number, this is characteristic of known rickettsial plasmids and suggests that the machinery for maintaining plasmids in this pathogenic species is functional. Interestingly, when this plasmid was transformed into several other rickettsial species, the copy number of pRAM18dRGA was noticeably higher, ranging from 5.5 ± 0.65 to 28.1 ± 1.89 [15]. The low copy number of pRAM18dRGA in R. prowazekii supports its future use in expression studies, alleviating concerns of over-expression of multi-copy plasmid-borne genes in complementation assays.
The R. amblyommii fragment in pRAM18dRGA encodes only four proteins: a DnaA-like protein, a ParA partitioning protein, a TPR repeat-containing protein, and a homolog to Xre, a putative repressor protein. The DnaA-like protein was initially described during the annotation of the pRF plasmid of R. felis [10]. Interestingly, the annotation is based on homology of the carboxy-terminal 75 amino acids of the plasmid-expressed protein with the first 65 amino acids of the archetypical DnaA protein of E. coli [19]. The N-terminal region, or Domain I, of the DnaA protein is responsible for the loading of the helicase, DnaB [20]. However, the remaining 700 amino acids of the DnaA-like protein show no homology to DnaA and display no conserved domains, despite their conservation among the rickettsial plasmid sequences. In contrast to the highly conserved rickettsial chromosomal parA genes, the parA genes from several rickettsial plasmids were found to be highly diverse and clustered with parA genes found on plasmids from other bacterial genera [11,21]. In addition to these DnaA-like and ParA proteins, the pRAM fragment encodes a protein with a tetratricopeptide repeat (TPR) motif, originally identified in yeast as a protein-protein interaction module, and a protein with homology to a helix-turn-helix transcription regulator, Xre. Interestingly, the Xre homolog in Bacillus subtilis is described as a probable repressor necessary for the maintenance of the lysogenic state of the defective prophage pbsX [22].
Fortunately, for future studies that might employ pRAM18dRGA, the presence of a plasmid appeared to have minimal effects on R. prowazekii growth. Growth of the plasmid-containing strain was comparable to that of the wild-type Madrid E strain. In fact, the plasmid-containing strain exhibited a slightly shorter generation time than the wild-type control. However, in the growth experiments presented here, the control Madrid E bacteria were isolated from hen egg yolk sacs and only passaged in L929 cells for two days prior to initiation of growth curve assays. A recent report demonstrated that the Madrid E strain grows slower in cell culture without adaptation to the specific host cell environment [23]. However, the 13-hour replication time is similar to the published 8-12 hour generation time [24,25,26,27].
The demonstration of plasmid replication in R. prowazekii provides an important genetic tool and model genetic system for the study of this obligate intracellular pathogen. The absence of an extrachromosomal platform for the genetic analysis of R. prowazekii has prevented the evaluation of gene function by classical genetic complementation techniques. While it is possible to use transposon systems to evaluate gene complementation [8], an extrachromosomal location may be preferred for some experiments, since the plasmid would not disrupt the bacterial chromosome, potentially contributing to an observed phenotype. The maintenance of the pRAM18dRGA shuttle plasmid suggests that other rickettsial plasmids may be maintained as well, expanding the genetic tools available for the study of this rickettsial pathogen.
Urban Logistics Services Supply Chain Process Modelling Based on the Underground Logistics System via the Hierarchical Colored Petri Net
The implementation of the urban underground logistics system (ULS) can effectively mitigate the contradiction between the surging logistics demand and the increased negativity of urban logistics. The widespread implementation of ULS still suffers from a lack of research into its operation in the marketplace, although the research on ULS system technology and network design appears to be sufficient. A new supply chain for logistics service based on ULS (ULS-SSC) was proposed, as ULS embedded in the urban logistics system could lead to the evolution of the role of supply chain participants. This article analyzed the organizational structure and operation characteristics of ULS-SSC and designed a top-down ULS-SSC operation process model based on the designed functional structure and subsystems relationship using the hierarchical colored Petri net (HCPN). The simulation results show that the integrated information management platform based on ULS can integrate urban logistics service supply chain resources and operate effectively under the two main service modes designed. The high-time-delay intermediate links can be upgraded by system optimization, and the links with initial pickup and terminal distribution can be improved through outsourcing and supply chain collaboration. The findings provide new insights into the feasibility of the operation of ULS in the market and help stimulate the implementation of ULS.
Introduction
The unprecedented e-commerce boom has dramatically increased shipping practices in cities all over the world. Consequently, more logistics companies are competing in the urban freight market, and more vans, delivery stations, and couriers are being installed in cities. Increasingly poor urban traffic has also led to increasingly expensive and inefficient urban logistics. To alleviate this growing conflict, the interest in the concept of "moving freight from above to underground," first proposed by Zandi and Gimm [1], has attracted the attention of researchers and professionals for more than 30 years. The underground logistics system (ULS), defined as a transport system for moving goods between out-of-town logistics parks and in-town customers through underground tunnels or pipelines, is recognized as a clean, efficient, and smart mode of urban freight transportation to cope with the new demands of urban logistics [2]. In particular, ULS contributes to achieving peak carbon dioxide emissions in the transportation industry [3]. In addition, ULS' unmanned and intelligent transportation also supports COVID-19's requirements for contactless logistics and can enhance emergency response capabilities in urban settings.
As an infrastructure capable of handling the majority of urban freight needs, ULS is capable of all of the actions involved in managing, processing, and disposing of freight, such as intracity trunk transportation catering for different packaging types, transshipment with unpacking and crating processes, circulation processing, and warehousing and distribution processes [4]. Notably, hazardous and heterogeneous items are not included in the ULS transportation services, given locomotive loading constraints and system safety.
In the course of ULS development, several drivers in many industries, namely, legislation, economic interests, and social responsibility in the fields of transport, logistics, underground space, and urban resilience, have gradually led governments and companies to lay out an integrated ground-underground transportation network to respond to the new demands of urban logistics [5]. As an important logistics infrastructure for the realization of smart logistics and common distribution, the huge benefits have not led to the rapid mass adoption of ULS, with projects such as the Cargo-Cap underground freight transportation system in Germany [6] and the OLS-ASH project in the Netherlands [7] failing or stalling due to major obstacles of insufficient demand and huge investments. Only in the last two years have pilot projects begun to be implemented in Switzerland (Cargo Sous Terrain) [8] and China (ULS in Xiongan New Area) [9].
However, ULS is more than just putting goods underground for transport. With a networked underground infrastructure, ULS incorporates the scattered logistics resources of the city, creating a new pattern of urban logistics by integrating the management of urban logistics in terms of cargo flow and process chain. The change in the perception of ULS from initially being just a new transport technology to now being seen as the core mode of urban logistics of the future reflects, on the one hand, the increasing social acceptance of ULS and, on the other hand, raises new questions about market operations in addition to intelligent transport technology systems and project investments.
The implementation of ULS might form a new urban logistics service supply chain (SSC) system structure, named ULS-SSC, under which the original supply chain participants have had a huge role change. Logistics resources and the original urban logistics providers would be integrated into the ULS-based urban logistics intelligent management platform to build a new logistics system [10]. The peculiarity of underground engineering makes the government an indispensable subject participating in the exploitation of the urban logistics market [3]. Given these new changes in supply chain elements, it remains unclear how the formation of ULS networks and commercial cooperation mechanisms interactively stimulate the development of ULS-SSC.
With the aforementioned in mind, this paper aims to describe the structure of ULS-SSC and design the operation process, under the new logistics service mode provided by ULS and the change of the supply chain participants' roles. The hierarchical colored Petri net (HCPN) was adopted to design the process for the new SSC. In contrast to the traditional supply chain, the design of ULS-SSC took into account ULS-based substitution for surface trunk logistics transportation and multimodal synergy for last-mile distribution. The operational processes for two types of typical urban logistics goods, bulk goods and parcels, were designed to verify the effectiveness of the ULS-SSC model. By analyzing the synergy between ULS and other supply chain subjects, combined with knowledge about the logistics service supply chain and the characteristics of the ULS system, this study proposed a set of ULS operational management analysis methods based on the ULS-SSC process, providing theoretical support for the practice of ULS in terms of operational feasibility.
The remainder is organized as follows: Section 2 provides an overview of the related works. Section 3 explains the system features and the organizational mode. The functional structure of ULS-SSC is described, the hierarchical colored Petri net (HCPN) model is constructed to explain the operation of ULS-SSC in Section 4, and the validity of the model is verified by a case in Section 5.
A Brief Review of Previous ULS Studies
2.1.1. System Technology and Application. ULS, under the name of "Freight pipeline technology," was initially initiated to alleviate the negativity of freight traffic in cities. However, it is now even more widely recognized for its advantages in logistics and environmental protection [2]. Up to now, many underground transportation technology systems based on different traction powers have been successfully developed and put into application, such as the pneumatic capsule pipeline (PCP) [11], Cargo-Cap powered by electricity [6], and Pipe§net driven by Maglev [12]. A special blockbuster underground logistics line, Cargo Sous Terrain (CST) from Switzerland, with a length of 450 kilometers from Geneva to St. Gallen, has completed a feasibility study and started project financing for the first phase of the 65-kilometer project connecting Niederbipp/Härkingen and the city of Zürich (time horizon 2030) [8]. More pilot projects have been announced on the official platform, for instance, the ULS in Xiongan New Area [9], China. In the era of the grand development of underground space, the ceiling of underground engineering construction technology has been broken. ULS has become a silver bullet to alleviate the contradiction between increasing urban logistics and sustainable development due to its flexibility of layout and diversity of forms.
Feasibility and Cost-Effectiveness.
Core equipment technologies for ULS, such as traction technology, locomotive traction technologies based on multiple types of power [13], and intelligent logistics equipment [14], are already proven technologies. Particularly, unmanned technology based on infrared scanning and 5G-based vehicle-object tracking positioning technology is already capable of meeting the technical feasibility of ULS implementation underground. High construction costs have been one of the most significant obstacles to the success of previous ULS projects, such as OLS-ASH in the Netherlands [7]. However, the growing negativity of urban freight, the urgent need for contactless transportation under COVID-19 [15], and low-carbon sustainable initiatives [16] provide new drivers for the development of ULS. The comprehensive benefits of ULS applications are reflected in all aspects of future smart city needs, including traffic [17,18], logistics [2,19], environment [3,10], and society [5,20]. Although the construction investment is large, the cost benefit under the scale effect realized after the network operation would be gladly accepted by local governments.
Two Main Research Streams Distinguished by the System Form.
One stream is the construction of a new independent ULS network in the urban underground. Binsbergen and Bovy [20] first proposed a mechanism for the operation of ULS networks. Considering multiobjective optimization such as cost, efficiency, and resource allocation, a hierarchical hub-and-spoke network containing multiple pipe diameters may be the optimal network form for ULS [21]. Another stream is Metro-based ULS, which utilizes the metro network to realize the coordinated transportation of passenger and freight trains [22]. This model is subdivided into the passenger-cargo separated type [23] and the trailer type [24]. The advantage of Metro-based ULS is to fully utilize and exploit the surplus potential of the urban metro network, thus saving investment. In contrast, it increases the difficulty of joint scheduling and network reliability management [25].
The Gaps in Operations Management.
Past research by technologists has simply assumed that ULS is an underground transportation technology that moves freight underground, focusing on aspects such as the system technology [6,8], network design [22,24], and the macrobenefits of ULS [3,5]. The revolutionary impact of this innovative technology on the entire urban logistics supply chain and the synergistic organizational relationships of the participants in ULS operation are overlooked, issues that will be decisive for the implementation of ULS. The ULS network is planned with the overall urban logistics industry in mind. It is inevitably operated by an independent and government-backed operator [5]. The intelligent and efficient underground transportation network in an unmanned environment greatly improves the efficiency and organization of urban logistics and thus generates new supply chain cooperation models [18]. Therefore, in the new urban logistics model under unified management, the role of government and traditional logistics enterprises in the supply chain process has changed and a new competition and cooperation relationship will be formed.
Urban Logistics Service Supply Chain.
The complexity of intracity transportation makes urban logistics a unique segment in the logistics industry that has been spun off from the product supply chain [26]. The quality promotion of logistics services has contributed to the formation of the urban logistics SSC, which includes business processes such as transportation, warehousing, circulation processing, and delivery [27]. The usual structure of the urban logistics SSC is shown in Figure 1, which takes the logistics service integrator as the core and relies on an information management platform to integrate functional logistics enterprises and other related service providers [28]. The logistics service integrator coordinates the operation of functional logistics service providers according to the logistics demand accepted and released by the integrated information management platform.
The fast-changing business model of urban logistics and the rapid growth of demand have supported urban development and people's lives while also bringing a lot of negative effects. In China, the annual growth rate of the express delivery business alone has exceeded 20% since 2015 and the annual growth rate of intracity service has exceeded 30% [29]. The huge volume of freight transportation has aggravated urban congestion, environmental pollution, and land resource shortage.
The development of the urban logistics SSC oriented to improving the negativity of urban logistics has yielded little success. Many innovative initiatives, such as multimodal transportation [30], green supply chains [31], and investments in green technologies [32], have been instrumental in reducing the environmental impact of freight transportation. However, these initiatives have been virtually ineffective in reducing traffic congestion and land use [18]. In recent years, the use of electric vehicles for deliveries began to be advocated [33], but instead of substantially alleviating congestion, many supporting charging facilities needed to be built, increasing logistics land use [34]. Widely discussed smart concepts such as drone delivery and unmanned vehicles are not applicable for large-scale urban use [35].
In short, logistics intelligence relying on ground transportation has always failed to achieve global goal optimization: (i) Restrictive policies exacerbate the conflict between limited urban road resources and the growth of logistics demand, which makes it difficult to carry out effective optimization of the urban logistics SSC.
(ii) In the market-led urban logistics system based on road transportation, the main body of the logistics SSC is unable to carry out technological innovation from urban logistics as a whole. This requires local governments to innovate urban planning concepts and upgrade logistics infrastructure to cope with the transformation needs of urban logistics.
(iii) The urban logistics system is filled with a large number of homogeneous enterprises, resulting in a lack of cooperation and resource sharing in the logistics service supply chain [36]. (iv) The formation of multiple SSCs in the urban logistics industry cannot be managed in a unified manner. Such SSCs, which simply cater to a single type of demand, are more concerned with cost reduction than with intelligence, greenness, and efficiency improvement [37].
Therefore, based on the networked underground logistics infrastructure, ULS-SSC integrates the management of multitype logistics SSCs and enables the sharing of urban logistics resources, which can break through the bottleneck of the sustainable development of the traditional road logistics SSC.
Petri Net Method.
The Petri net method is a mathematical expression of a discrete parallel system proposed by Carl Petri in 1962 [38], consisting of elements such as places, transitions, directed arcs, and tokens. Petri nets have significant advantages in describing and analyzing the information and control flow in discrete event dynamical systems with an asynchronous or parallel nature [39] and can provide formal and efficient support to model building and analysis [40]. Many advanced Petri net approaches have been developed to improve the limitations of the original model. For example, the hierarchical Petri net is suitable for industrial production systems, simplifying complicated systems into hierarchical subnetwork structures, thus controlling the functional units [41]. The modular Petri net can effectively resolve issues such as state space explosion due to the advantages of easy scalability and better maintainability [42].
Petri nets are well able to simulate large-scale discrete events, such as logistics and supply chain systems, and to find solutions quickly [43]; in particular, Petri nets with integrated hybrid modeling techniques perform better [44]. In this article, the HCPN was adopted to build and simulate the ULS-SSC system. HCPN integrates the colored Petri net proposed by Kurt Jensen [45] and the hierarchical Petri net, which can improve the shortcomings of traditional Petri nets that lack hierarchy and feature description.
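The core mechanics the paper relies on (colored tokens in places, transitions that fire only when enabled by their input places) can be sketched as a tiny token game. This is not CPN Tools syntax; the place and transition names below are illustrative stand-ins for a ULS dispatch step:

```python
# Minimal sketch of a colored Petri net token game: places hold multisets
# of colored tokens, and a transition fires only when every input place
# supplies a matching token. Names are illustrative, not from the model.
from collections import Counter

places = {
    "orders":     Counter({("parcel", "zone-3"): 2, ("bulk", "zone-1"): 1}),
    "uls_slots":  Counter({"slot": 2}),
    "in_transit": Counter(),
}

def fire_dispatch(order_color):
    """Transition 'dispatch': consume one order token and one ULS capacity
    token, produce a token in 'in_transit'. Returns False if not enabled."""
    enabled = places["orders"][order_color] > 0 and places["uls_slots"]["slot"] > 0
    if not enabled:
        return False
    places["orders"][order_color] -= 1
    places["uls_slots"]["slot"] -= 1
    places["in_transit"][order_color] += 1
    return True

fire_dispatch(("parcel", "zone-3"))
fire_dispatch(("bulk", "zone-1"))
print(dict(places["in_transit"]))               # two orders now in transit
print(fire_dispatch(("parcel", "zone-3")))      # False: no free slot left
```

The "enabled" check is exactly what makes Petri nets suited to modeling resource contention in a supply chain: a third order cannot dispatch until a slot token is returned.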
Descriptive Analysis of ULS-SSC
Based on the basic composition of the supply chain system [46] and its evolution in ULS application scenarios, the proposed methodology framework for the structural design and rationality verification of ULS-SSC consists of (1) a description of the composition of the ULS physical layered network, (2) the structure of ULS-SSC, (3) an analysis of the relationship between ULS and 3PLs in ULS-SSC, and (4) a deconstruction of the service mode.
Description of the Physical Layered Network.
From the perspective of solving the diversity of urban logistics and overall planning and design, a layered ULS network and the ground transportation system together constitute a multimodal distribution physical network. This network is the physical carrier of ULS-SSC, and its characteristics determine the structure and operation mechanism of the ULS-SSC. Figure 2 illustrates the urban multimodal distribution physical network jointly constituted by a two-layer ULS network and the ground transportation system. A two-layer ULS network comprising the primary network (tunnel diameter 8-10 meters) and the secondary network (tunnel diameter 4 meters) has been proposed and repeatedly verified to meet the logistics needs of a megacity, such as Beijing [47,48].
The operation mechanism of the network covers the entire process of goods from the logistics park to the customer. The goods on the primary network will be distributed to the branch network (m1 to m2) or sent to the ground by vertical transport (m1 to m3), depending on the flow direction. Then, the network and ground transportation together constitute a synergistic logistics mode. Running the previous process in reverse and adding a goods collection link at each node achieves reverse logistics [47].
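The flow-direction decision at a primary-network node described above (m1 to m2 versus m1 to m3) can be expressed as a simple routing rule. The zone names and the set of branch-served zones below are hypothetical, used only to make the rule concrete:

```python
# Sketch of the flow-direction decision at a primary-network node (m1):
# forward into the secondary network (m2) when the destination zone is
# served by a branch tunnel, otherwise lift the goods to the surface by
# vertical transport (m3). The served-zone set is a hypothetical example.

SECONDARY_ZONES = {"zone-1", "zone-2"}  # zones reachable via branch tunnels

def next_leg(destination_zone):
    if destination_zone in SECONDARY_ZONES:
        return "m1->m2 (secondary network)"
    return "m1->m3 (vertical transport to ground)"

print(next_leg("zone-1"))
print(next_leg("zone-9"))
```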
Terminal transportation, defined as the transportation process between the last node and the customer, takes various forms. For example, small conveyor belts can be built or drones can be used to deliver goods to smart parcel lockers, or they can be delivered manually. Parcel collection and delivery services based on shared systems can also be designed. The diversity of terminal transportation forms will also give rise to a variety of service outsourcing models.
The Structure of ULS-SSC.
ULS replaces ground truck transportation and can innovatively promote the development of urban logistics SSC management toward low carbon, efficient standardization, and a high degree of unmanned operation. It relies on the underground network infrastructure to connect the upstream and downstream and integrate the multifunctional services of the logistics SSC. Thus, with the development of the physical network, a ULS-SSC with a large capacity and multiple service types will be gradually formed. Figure 3 shows the system structure of ULS-SSC. ULS-SSC intelligently integrates the above-ground and underground transportation networks, storage, circulation processing, and other resources to optimize the allocation of urban distribution resources, so as to provide personalized logistics services to customers with different requirements. ULS, third-party logistics (3PLs) enterprises, and consumers are the three main participants in the ULS-SSC: (i) ULS can be divided into the integrated information management platform and the infrastructure platform. The former platform performs functional logistics resource integration and order management and transmits system operation requirements to the infrastructure platform. The infrastructure platform carries out the operation of the physical system and completes the consumer-oriented logistics services. Moreover, the integrated information management platform coordinates the supply of other supplementary services related to the supply chain such as logistics finance, consulting and planning, business, and taxation.
(ii) Third-party logistics (3PLs) enterprises, currently the main providers of logistics services, are still fully responsible to the customers in ULS-SSC but can also outsource intracity distribution operations to ULS. Therefore, the former suppliers of urban logistics, also including 4PL companies, can become customers of ULS.
(iii) The consumers of ULS, in theory, include all those in the city who demand freight activities with goods matching the ULS load requirements. The types of services cover all types of urban logistics supply chains, including bulk cargo, wholesale and retail, and even former urban logistics supply services.
Note that ULS has an independent operator to manage both platform systems. Distinguished from the traditional SSC, the most distinctive feature of the ULS-SSC is that the ULS operator acts as the logistics service integrator, unifying the management of the ULS infrastructure and third-party logistics as functional logistics service providers.
Relationship between ULS and 3PLs in ULS-SSC.
The competitive and cooperative relationship between the ULS operator and 3PLs is central to the evolution of the supply chain. Figure 4 illustrates the gradual increase in the market share of ULS-SSC as the network density increases. The same trend is shown in the terminal distribution segment, but the necessity of ground-underground cooperation makes ULS a relatively low substitute. While the expansion of ULS-SSC market share may have been initially led or driven by the government, the increase in network density has allowed ULS to become progressively more dominant and thus able to proactively attract customers [3]. In this process, the role of 3PLs in the supply chain gradually changes from being suppliers to being buyers of transportation services.
Terminal distribution, as described above, can be implemented in a variety of ways, almost all of which require cooperation with ground distribution, which may be provided by 3PLs. Especially in the early stages of ULS network development, terminal distribution still relies on ground-based methods, as shown in Figure 4. Therefore, the joint distribution function (in Figure 5) needs to be added within the integrated information management platform in order to effectively integrate the logistics, information flow, and capital flow of the cooperating parties.
The ULS terminal replaces the scattered logistics distribution points and centralizes the distribution and collection of orders. In addition, whether the final delivery is done by ULS or other distribution providers, it can effectively respond to the brand promotion needs of logistics companies that need to face customers directly. For example, JD Logistics (JDL) always wants customers to feel JDL's services, including parcels printed with JDL's logo or delivered by JDL's delivery agents to strengthen the brand image.
ULS Service Mode.
ULS-SSC can provide personalized services for orders with different requirements. Considering the full utilization of resources and flexible operation, outsourcing services based on some links of the ULS system operation process can form a variety of service models. Available transportation services were divided into the point-to-point mode and the distribution mode.
Transportation Service
(1) Point-to-Point Mode. The point-to-point service mode is mainly for high and stable flows of freight needs, such as bulk cargo supply, which can develop a long-term and stable order pattern. Moreover, such orders are usually sent from logistics parks or transportation hubs to in-town facilities with warehousing functions and can take full advantage of the convenience and savings of the ULS network, as the cargo flow paths fit perfectly with the layout of the ULS network.
(2) Distribution Mode. Express parcel distribution with large volumes and multiple, scattered destinations is the fastest-growing and most popular mode of urban logistics, showing an annual growth rate of 20% in China. Express parcel distribution usually has high timeliness requirements and is closely related to people's lives. A multilevel ULS network can efficiently realize the complex scheduling of such cargo transportation in transshipment and temporary storage.
Outsourcing Service.
The implementation and links in the ULS-SSC system that can operate independently, such as transportation lines, nodes, and warehouses, can be outsourced to others; the outsourcing service model is shown in Figure 6. With the ability to provide customized transportation services to specific regions or specific customers, outsourcing has the advantage that contracted operators will be able to respond quickly to orders with special needs and will be able to utilize system resources for flexible pricing and facility operations. However, the overall operation of the supply chain is still unified by the integrated information management platform, including the information processing of orders, network dispatching, emergency response, and other implementation processes, as well as the collaboration of all outsourcing mode operators.
Process Design of ULS-SSC
Based on Zurawski and Zhou [49], the ULS-SSC was divided into three steps: (1) functional analysis of ULS-SSC, (2) subsystem design based on functional decomposition, and (3) the operation process design of ULS-SSC based on HCPN.
ULS-SSC Functional System Analysis
According to the structure and characteristics of the ULS-SSC, the supply chain functional structure (see Figure 7) containing four levels was proposed for SSC process design as follows:
(i) Business layer: the main task is to manage the business relationships of the supply chain participants, including order processing (ULS with consumers), coordination management (ULS with other partners or providers), and financial management (ULS with banks and administration).
(ii) Environment layer: it collects various information within the system, such as order requirements, resource quantities, and network status, and transmits the information to the control layer to serve the intelligent information management platform of ULS-SSC.
(iii) Control layer: it monitors system processes and system status in real time, analyzes data from the environment layer, and issues instructions to other subsystems to ensure proper system operation.
(iv) Operational layer: it receives instructions and completes the transportation process, including inventory management, circulation processing management, vehicle scheduling, and terminal distribution.

Subsystems
Figure 8 describes the entire process of goods from placing an order to terminal delivery. The process shows the logistics, information, and capital flows between the eight subsystems. Each subsystem was described as follows: (i) Demand management: classify and generate orders by freight quantity, destination, type, or time.
HCPN Modelling.
We utilize CPN Tools version 4.0.1 to design the HCPN top model of the non-closed ULS-SSC process, as shown in Figure 9, with the Places and Transitions listed in Table 1. The eight subsystems in the ULS-SSC functional system were defined as subnets in the HCPN model. The model is set up as a non-closed model because it is easier to identify the state of the subsystems and the logical relationships between them under an independent order-run simulation, which also makes it easier to compare system operation efficiency under different order types. The HCPN is defined as a tuple HCPN = (S, SN, SA, PN, PT, PA, FS, FT, PP). Here, the following are observed: (1) S is a finite set of subpages such that each subpage s ∈ S is a nonhierarchical CPN as follows: (2) SN is a set of substitution nodes of the HCPN model. ∀f_s ∈ FS, ∀s_1, s_2 ∈ f_s: C(s_1) = C(s_2) ∧ I(s_1) = I(s_2).
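The firing rule underlying such a Petri net model can be illustrated with a small sketch. This is not CPN Tools code but an assumed minimal Python analogue; the place and transition names below are illustrative, not the paper's actual net:

```python
# Minimal place/transition net whose structure loosely mirrors the
# front of the ULS-SSC top model. A transition consumes one token from
# each input place and produces one token in each output place.
marking = {"order_received": 1, "order_confirmed": 0, "pickup_done": 0}

transitions = {
    "confirm_order": (["order_received"], ["order_confirmed"]),
    "pickup":        (["order_confirmed"], ["pickup_done"]),
}

def enabled(t):
    # A transition is enabled only when every input place holds a token.
    inputs, _ = transitions[t]
    return all(marking[p] >= 1 for p in inputs)

def fire(t):
    inputs, outputs = transitions[t]
    assert enabled(t), f"{t} is not enabled"
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

fire("confirm_order")
fire("pickup")
print(marking)  # → {'order_received': 0, 'order_confirmed': 0, 'pickup_done': 1}
```

This enabling-and-firing mechanism is what CPN Tools generalizes with typed ("coloured") tokens and hierarchical subnets to encode the logical dependencies between the ULS-SSC subsystems.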
When a freight order is received, the order information is passed to Demand Management. The confirmed order type (whether the order needs circulation processing) is posted to Information Management. Meanwhile, Finance Management confirms the prepayment status and then sends it to Information Management. Next, Information Management issues order processing instructions for Pickup or Transportation based on ULS system resources and equipment status. After receiving the instruction, Pickup arranges vehicles to pick up cargo and move it to the storage yard in the ULS node. Afterward, warehousing (in Warehouse), circulation processing (in Circulation Processing), and transportation operations (in Transportation) are performed in sequence. During transportation, goods arrive at any node to be identified and
Model Validation.
Two types of orders, distinguished by whether circulation processing is required, were designed in the HCPN model. P-type orders require circulation processing, while Q-type orders do not and go directly to the transportation subsystem. P-type and Q-type orders represent two typical freight requirements for urban logistics: the distribution mode and the point-to-point mode, respectively. For example, in Figure 10(g), a P-type order requires an additional storage judgment (S-Judge for short) at each node, while a Q-type order only makes a judgment (T-Judge) about whether it has reached its destination. Therefore, Q-type orders are more time-sensitive. Table 2 lists the simulation parameter settings, which were obtained by investigating the actual operation data of current logistics management systems of large logistics companies, such as JD Logistics, and combining them with the relevant ULS literature. The validity of the HCPN model is evidenced by three characteristics: (1) Seven dead markings exist, representing the markings at the end of the model run due to the non-closed-loop structure; the remaining markings are all live, in line with the initial design intent of the model. (2) There is no dead transition instance, i.e., there is no deadlock in the model due to the inactivation of transitions. (3) There is no live transition instance, i.e., the model will not be trapped in a local infinite loop. Monitors were set up at key transitions to observe the operational conditions of the system. Table 3 summarizes the model performance statistics over 1,000 simulations for each of the two order types. The average total running time of P-type and Q-type orders is 77.50 and 61.55 minutes, respectively. The reason some monitors record more or fewer than 1,000 events is that an order may go through transit or warehousing multiple times, or may skip a particular process.
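The comparison between the two order types can be mimicked with a simple Monte Carlo sketch. The stage lists and mean delays below are illustrative assumptions, not the Table 2 parameters, and the sketch ignores resource contention; it only shows why P-type orders, with their two extra links, run longer on average:

```python
import random

# Illustrative stage means (minutes); NOT the paper's Table 2 values.
STAGES_Q = {"pickup": 20.0, "transport": 12.0, "distribution": 18.0}
STAGES_P = {**STAGES_Q, "circulation": 10.0, "warehousing": 10.0}

def run_order(stages, rng):
    # Each link's delay is drawn from an exponential distribution
    # around its mean; the sum is the order's total running time.
    return sum(rng.expovariate(1.0 / mean) for mean in stages.values())

rng = random.Random(42)
q_times = [run_order(STAGES_Q, rng) for _ in range(1000)]
p_times = [run_order(STAGES_P, rng) for _ in range(1000)]

print(f"Q-type mean: {sum(q_times) / len(q_times):.1f} min")
print(f"P-type mean: {sum(p_times) / len(p_times):.1f} min")
# P-type averages longer because of the circulation and warehousing links.
```

The real HCPN simulation additionally captures judgments (S-Judge/T-Judge), monitors at transitions, and repeated transits, which this sketch omits.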
The Role of the Integrated Information Management Platform
The effectiveness of the HCPN model likewise indicates the rationality and feasibility of the integrated information management platform in terms of process disposition and functional design. The platform can promote value co-creation and optimize the competitive and cooperative relationships among participants in the supply chain. Therefore, taking into account the infrastructure properties of ULS, the proposed value of the supply chain based on the platform operation needs to be measured comprehensively in terms of the effectiveness of the urban logistics system, the distribution of benefits among subjects, and the external benefits of SSC.
The phased development of the information platform, formed on the basis of the ULS network around the realization of the proposed value of the supply chain, is also accompanied by changes in the roles of the participants (3PLs/4PLs) and the emergence of new cooperation mechanisms. In the initial stage of the ULS network, platform-led supply chain activities mainly serve simple order modes, such as the point-to-point mode. The development of the platform needs to focus on the openness of the SSC system and increasing social participation. As the network grows, the increase in the number and variety of orders prompts more partners to join the SSC operation and develop multiple service modes. At this stage, platform development requires continuous optimization of service modes and supply chain resources, especially reducing the flow delay of supply chain links to improve efficiency, thus attracting more participants until a stable, multiservice, synergistic ULS-SSC is formed.
Efficiency Improvement of ULS-SSC.
Supply chain efficiency improvement is an important criterion for technological innovation. Although this paper does not include an experiment comparing the efficiency of ULS-SSC and a traditional urban logistics SSC, the simulation results for the new supply chain model suggest the key aspects of urban logistics efficiency improvement. Throughout the supply chain process, pickup, circulation processing, and warehousing of P-type orders take up most of the time during ULS operation, with average time delays accounting for 23.86%, 13.26%, and 13.08%, respectively. The terminal distribution process also takes a lot of time, accounting for about 30% of the time in both order types. Pickup takes 32.48% of the time for Q-type orders. The average delay in the transportation link is related to the distance travelled and is roughly comparable for both types, at 13.55% and 18.88%, respectively. The longer average delay for P-type orders is mainly spent on intrasystem transshipments and temporary warehousing. The three terminal distribution modes are adopted with approximately equal frequency. In addition, apart from the transportation subsystem, other aspects such as information management and transshipment take less time.
Technically, circulation processing and warehousing can be improved by increasing the intelligence level of system equipment. Moreover, if a closed model is further constructed, it will be able to reflect the flow of resources in the supply chain and thus enable systematic optimization of the efficiency of the operational process. From the perspective of supply chain collaboration, the data show that the time spent on pickup and terminal distribution together is extremely high, at about 50% and 60% under P-type and Q-type orders, respectively. Apart from the inevitable waiting time in the pickup segment, this large time consumption arises firstly because combined delivery of orders was not considered in the model, i.e., the goods were set to have no waiting time at the end station, and secondly because there is only one partner for each process under a single run. However, with the expansion of the network scale and supply chain cooperation scale, a multitype cooperation mode at the front end will enrich order acquisition channels and increase pickup speed, while at the end, joint distribution based on resource integration will reduce the average distribution time of a single piece of goods. Moreover, intelligent deployment at the end of the ULS infrastructure network can further enhance pickup and distribution efficiency.
Operational Cost Analysis of ULS-SSC.
The operational cost of ULS is crucial for competing with traditional logistics services. Current urban logistics is still a labor-intensive industry, and fragmented orders make it challenging to integrate and optimize logistics processes, while unmanned operation and integrated management are both unique advantages of ULS. Simulation results indicate that the cost of processes requiring manual operation accounts for a larger proportion of total costs, such as pickup (16.50% and 24.30% for P-type and Q-type, respectively) and terminal distribution (31.78% and 45.79% for P-type and Q-type, respectively), whereas the cost of fully automated transportation is relatively low. Compared to Q-type orders, circulation processing and warehousing bring a significant increase in operational cost for P-type orders. In particular, the limited storage capacity and high management fees at ULS nodes make warehousing cost a substantial share (18.09%) of the total supply chain cost.
The integration of supply chain resources contributes to a further reduction in system operational cost. By pooling multiple types of logistics orders and carrying out a series of logistics links within the system, ULS avoids the increased unit costs of small, frequent logistics orders and the time wastage caused by fragmented processes. In addition, collaborative operations of supply chain participants can reduce average logistics costs, share operational risks, and improve overall supply chain efficiency as well as resilience.
Supply Chain Operations Based on ULS Network Development
The ULS network is the physical carrier of the new urban logistics supply chain system. The development process of the network has a significant impact on the formation and operation of the ULS-SSC.
As a new class of underground infrastructure, ULS networks that can generate scale effects require large investments and long construction cycles. Although local governments usually lead the initial investment in such infrastructure projects with large social benefits, the returns of prioritized routes in the ULS network greatly influence the confidence of the logistics market in the innovative model. Good initial returns can accelerate the formation of the network and can also attract supply chain partners to develop a willingness to cooperate earlier, or even to participate directly in the investment and construction.
Accordingly, in the initial stage when network coverage is not high, the initial route selection and planning are oriented to Q-type order paths as much as possible to ensure the economy of ULS-SSC. Subsequently, the distribution mode (P-type), which requires greater network coverage, is gradually rolled out.
Finally, it is worth noting that obtaining the support of local governments, both in terms of investment and policy, is crucial to the realization of ULS-SSC. Although economic benefits are paramount for a logistics system, local governments are more concerned with the social benefits and urban sustainability resulting from innovation. Admittedly, although ULS is itself a green logistics method, energy saving and low carbon emissions remain important process optimization goals in ULS-SSC operations. It is therefore beneficial to cooperate with local governments to develop appropriate policies to encourage the development of ULS. In this way, the development of ULS can fit in with policies for greening and emission reduction, as well as enhancing the willingness of local governments to provide subsidies and tax breaks for green logistics operations.
Conclusions
With the aim of promoting the integrated management and efficiency of urban logistics, an intelligent logistics management platform based on the new ULS infrastructure can gradually lead to the evolution of the logistics market and the formation of a new urban logistics services supply chain mode. The current work first analyzed the process of market evolution, characterized by a shift in the roles of logistics supply chain participants, and constructed a preliminary framework for ULS-SSC comprising four aspects: physical network, structure, relationship, and service mode. Second, based on the ULS-SSC functional system analysis and the operational relationships between the decomposed subsystems, CPN Tools was adopted to construct an HCPN model that can effectively express the hierarchical organizational structure and operational processes of ULS-SSC. Then, an example containing two order types was designed to validate the model, where orders were classified into P-type and Q-type according to whether circulation processing is required, corresponding, respectively, to the distribution mode and the point-to-point mode.
The results show the importance of the integrated information management platform in supply chain operations and demonstrate that manual involvement in logistics is an influential factor in supply chain operation time and cost, and that the initial pickup and terminal distribution links can be improved by extending the ULS network upstream and downstream or through outsourcing services and supply chain synergies. Within the ULS, intermediate links with high time delays can be improved by optimizing the ULS technology system, as well as by optimizing scheduling with a more rational order allocation to reduce the efficiency losses and costs associated with resource congestion in circulation processing or warehousing. The findings reflect that, compared to road-based logistics SSC, the automated networked operation and integrated management of multiple urban supply chains by ULS-SSC offers a revolutionary innovation in urban logistics SSC, providing a sustainable development direction for the urban logistics SSC.
This work develops theories related to the operation of ULS-SSC, and the proposed ULS-SSC model extends the operational process of SSC to ground-underground integration. It fills a gap in the ULS body of knowledge on market operations, which is currently dominated by the study of transport technology, network design, and single-route operational processes. The designed ULS-SSC HCPN model can serve as one of the theoretical bases for the application of the new transport technology ULS and contributes to further research on the ULS operational model and the synergistic relationships of the participants, thus deepening acceptance of the feasibility of implementing ULS in the transport market.
Limitations are inevitable, given that this work is an early study of the integration of ULS with urban logistics supply chains. Firstly, the constructed non-closed HCPN model can already clearly describe the state of the system under a single run, providing a basis for further construction of closed models to simulate resource cycles and system optimization. Secondly, the diversity and complexity of the supply chain in the actual urban logistics system have been simplified, with only two order types simulated and the running time of each link set in a simplified manner. Based on the current work, the collaboration mechanism of the ULS operational participants can be studied in the future by analyzing their positioning in the ULS-SSC. Furthermore, the ULS-SSC operation mode under the convergence of multiple urban logistics order types, as well as cost sharing and pricing, could be investigated.
Data Availability
The data used in this model are from online open sources and expert consultation.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.
Data Security in Internet of Things: A Review
Abstract: In recent years, the Internet of Things (IoT) has become an interesting research topic in various fields, such as the technical, academic, medical, and industrial domains. With the growing interest in IoT, security must be taken into consideration, as it is one of the main issues of IoT platforms, technologies, and applications. The most important part of a secure IoT system is verification and authentication between IoT-enabled devices and IoT web servers. This paper reviews research on IoT architecture and the various security issues and challenges that must be taken into consideration, and also presents multi-factor authentication as a new solution to ensure confidentiality, integrity, authentication, authorization, and the ability to identify heterogeneous devices.
INTRODUCTION
In our day-to-day life, the Internet of Things is emerging as a technological breakthrough in which billions of heterogeneous and homogeneous devices, provided with unique identifiers (UIDs) and embedded with electronics, Internet connectivity, and sensors, are connected to the Internet and can communicate and interact with each other without any human interference. They can be remotely monitored and controlled [1]. IoT has the prospect of opening new market opportunities and business models. Meanwhile, security and privacy issues should be addressed. Internet-connected devices give cyber criminals many opportunities to tamper with IoT systems on a large scale. This creates many risks, issues, and challenges in data security. Data privacy, authentication, the human factor, data encryption, and system complexity are the five key security risks and IoT issues [2]. Security attacks on IoT devices occur on the following layers of IoT: the object layer, transport layer, processing layer, and cloud layer [3]. In this paper, a study is performed on IoT features and key technologies, security issues, and challenges.
A. IoT Architecture
IoT architecture is a vast and broad concept; so far, no uniform IoT architecture has been proposed. IoT consists of sensors, actuators, and the networks that make it work and communicate between computer systems and technologies. The layered IoT architectures available are the three-layer, four-layer, and five-layer architectures. Since this paper focuses on data security issues, we will discuss the five-layer IoT architecture model [3].
1) Object Layer: The bottom layer is the object layer, also known as the perception layer or hardware layer. It is the base of the IoT architecture and contains sensors, actuators, RFID tags, barcodes, etc. Data is collected from sensors and transmitted to the network layer.
2) Object Abstraction Layer: This layer is responsible for the transmission of data from an object to the service management layer via a channel without any intervention. It is also known as the network or gateway layer and contains physical components and communication software that help transmit the information captured from sensors.
3) Service Management Layer: This is the middle layer of the IoT architecture and hence is known as the middleware layer. It is a service-oriented layer that assures similar types of services between the connected devices on the basis of addresses and paired device names. The main feature of this layer is the business and process rule engine. In this layer, the received data is processed; the layer controls security, makes decisions, and then delivers the requested data.
4) Application Layer: This layer provides the services requested by the customer. IoT applications cover the smart home, automation, smart hospitals, smart cities, etc.
5) Business Layer: The business layer is responsible for building graphs, flowcharts, and business models. In order to enhance the user's privacy in IoT, the business layer compares the output of every layer. It supports the design, analysis, and implementation of IoT system related elements [10].

B. IoT Characteristics Based on Security Requirements
The fundamental characteristics of IoT based on security requirements are as follows [4, 5]:
1) Interconnectivity: Interconnectivity brings day-to-day things together to empower IoT, as anything can be connected with anything else. It enables network accessibility and compatibility among things.
2) Resource Constraints: IoT has the caliber to provide thing-related services within the constraints of things, such as privacy protection and semantic consistency between physical things and their associated virtual things. There are major concerns regarding resource-constrained environments in IoT, including data encryption, privacy preservation, vulnerabilities, threats, attacks, controls, etc. To address these privacy and security challenges, appropriate technologies have to be developed for resource-constrained environments in IoT.
3) Heterogeneity: IoT devices are heterogeneous, as they come from distinct hardware platforms and networks and are able to interact with other devices and platforms through various networks. In IoT, the key design requirements for heterogeneous things and their environment are interoperability, extensibility, modularity, and scalability.
4) Dynamic Environment: Gathering data from its environment is the main activity of IoT. This can be achieved through the dynamic changes that take place around the devices.
C. IoT Elements
The key elements of IoT are:
1) Identification: It plays an important role in identifying each object within a network. The two methods of identification are naming and addressing. Naming indicates the object's name, and addressing indicates the object's unique address. Examples of identification methods used for IoT are Electronic Product Codes (EPC) and Ubiquitous code [7].
2) Sensing: This is the process of collecting information from devices and storing it in a database, data warehouse, or data center. Examples of sensing devices are sensors, RFID tags, etc. [7].
3) IoT Network: In recent years, IoT has become a most significant trend in which millions of devices are connected and controlled via the Internet. The IoT network is used to communicate between IoT devices [8].
4) Platform: An IoT platform is a multilayer technology placed between the IoT object layer and IoT gateway layers. This element is also known as computation, as it enables object and endpoint management, connectivity, network management, processing and analysis, application development, security, etc. [6]. Examples of IoT platforms are ThingWorx, AWS, Microsoft Azure, etc.
5) Services: In IoT, services are divided into four classes: identity-related services, information aggregation services, collaborative-aware services, and ubiquitous services. Identity-related services identify the object that first requested the service. Information aggregation services gather all the raw information and then summarize it for processing and reporting. The data generated by the previous class is used for decision making by the collaborative-aware services. On the customer's demand, ubiquitous services deliver the information obtained from collaborative-aware services anytime and anywhere.
6) Cloud: For collecting, processing, managing, and storing large volumes of data in real time from heterogeneous and homogeneous devices, users, and applications, the IoT cloud provides tools that need to be managed in an efficient way. The IoT cloud provides accurate data analysis and processes data at high speed. One of the most important components of the IoT cloud is the distributed management system [8].
7) Semantic: The process of extracting knowledge to offer the required services is known as semantics. It is the most important element of IoT, acting like the brain of IoT to accomplish all its responsibilities. Some commonly known semantic technologies are the Resource Description Framework, the Web Ontology Language, and Efficient XML Interchange [7, 8].
D. IoT Security Standards
The IoT security standards are as follows [9]:
1) Authentication: This is the process of verifying the identity of a user or device. The impersonation attack and the Sybil attack are two types of attack related to authentication. In an impersonation attack, the attacker pretends to be another element; in a Sybil attack, the attacker uses multiple distinct identities at the same time.
2) Authorization: This is the act of granting access to a network resource, allowing the user to access various resources based on the user's identity.
3) Confidentiality: This means protecting information from being accessed by unauthorized parties. It ensures that the data is only readable by the intended destination.
4) Integrity: This ensures that the information contained in the original message is kept intact. The message alteration attack and the message fabrication attack are kinds of attack related to data integrity. Access control methods are implemented to protect data integrity.
5) Availability: Availability means that authorized users can always access the information. The main aim of availability is to protect the existence of network services against denial-of-service attacks.
6) Privacy: This ensures that only the desired sensor devices and gateways are part of the network, and hence it protects devices from malicious attack.
7) Non-Repudiation: Non-repudiation gives the assurance that someone cannot deny the validity of something. It provides guaranteed message transmission between two devices through digital signatures and encryption methods. Phishing and man-in-the-middle (MITM) attacks are attacks that can compromise data integrity.
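As an illustration of the authentication and integrity standards above, the following hedged sketch uses a shared-key HMAC tag, one of the mechanisms recurring in the schemes reviewed in the next section; the key and messages are placeholders:

```python
import hmac
import hashlib

# A shared-key HMAC tag gives message integrity and origin
# authentication between an IoT device and its server.
key = b"pre-shared-device-key"  # placeholder secret

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing
    # side channels during the comparison.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"temperature=21.5"
t = tag(msg)
print(verify(msg, t))                  # True: untampered message
print(verify(b"temperature=99.9", t))  # False: altered in transit
```

Note that HMAC alone provides authentication and integrity but not confidentiality or non-repudiation; the latter requires digital signatures, as the standard above states.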
III. LITERATURE REVIEW
Trusit et al. [11] proposed a method based on a secret vault (a set of keys) shared between the IoT server and IoT devices. In this scheme, the contents of the secure vault change after the successful completion of each session. For implementation they used an Arduino device and the HMAC algorithm, with a three-way mutual authentication mechanism for authenticating and communicating between IoT devices and the IoT server. They proved the feasibility of the algorithm on IoT devices with memory and computational power constraints.

Se-Ra Oh et al. [12] developed an OAuth 2.0 framework based on a oneM2M security component to provide authentication and authorization between IoT device platforms and IoT service platforms. In the OAuth 2.0 framework, to use a resource the client needs to receive an authorization grant from the resource owner and exchange it with the authorization server for an access token. The oneM2M security component performs token-based authentication; it is lightweight and scalable and is implemented using the Node.js-based 'oauth2-server' module, an OAuth 2.0 server library. In the oneM2M security component, resource requests from unauthorized users are blocked, while requests from authorized users are passed through. However, the goal of data security is not fully achieved here.

Shantanu et al. [13] present a classification of IoT security and threats into five distinct categories: communications, device/services, users, mobility, and integration of resources. A sample mechanism for authorization and security is designed within the IoT architecture by proposing Attribute-Based Access Control (ABAC), Role-Based Access Control (RBAC), and capabilities as an access control design. A capability structure is used which includes time-stamp, thing identification, operation, and condition fields. A capability may grant access to more than one thing and to more than one operation on a thing. It is a scalable and flexible method, but still a complex one.
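A minimal sketch of such a vault-based three-way exchange is given below. The review does not specify the exact message formats of the scheme in [11], so the vault layout, nonce sizes, and rotation rule here are illustrative assumptions, not the published protocol:

```python
import hmac
import hashlib
import secrets

# Shared secret vault: a set of keys known to both device and server.
vault = [secrets.token_bytes(16) for _ in range(4)]

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

# Message 1 (server -> device): a fresh challenge nonce.
server_nonce = secrets.token_bytes(8)

# Message 2 (device -> server): proof of vault knowledge for the
# server's nonce, plus the device's own challenge nonce.
device_nonce = secrets.token_bytes(8)
device_proof = mac(vault[0], server_nonce)

# Message 3 (server -> device): server verifies the device, then
# proves its own identity by answering the device's nonce.
assert hmac.compare_digest(device_proof, mac(vault[0], server_nonce))
server_proof = mac(vault[1], device_nonce)
assert hmac.compare_digest(server_proof, mac(vault[1], device_nonce))

# After a successful session the vault contents are refreshed, e.g.
# derived from the exchanged nonces, so captured keys cannot be replayed.
vault = [mac(k, server_nonce + device_nonce)[:16] for k in vault]
print("mutual authentication complete; vault rotated")
```

Because the vault is rotated after every session, a key recovered through a side channel during one session is useless in the next, which is the property the scheme in [11] relies on.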
Trust and identity management still need to be developed. Swapnil Naik and Vikas Maral in their paper [14] discussed various security attacks and mainly focused on device cloning and sensitive data exposure in securing IoT solutions. The solution is efficient and secure, with small cost overheads. Sheetal Kalra and Sandeep Sood, in their paper [15], proposed secure communication between embedded devices and a cloud server. A secure ECC-based mutual authentication protocol and the Hypertext Transfer Protocol are used. Automated verification of the protocol is performed using the AVISPA tool, which confirms the protocol's security in the presence of an intruder. It is robust against various security attacks with low computational cost, but it could be made more reliable. Anusha Medavaka in her paper [16] proposed the concept of a protected vault, which is a common key between IoT devices and the IoT server. A three-way mutual authentication takes place between the IoT server and devices within an IoT session. After the successful completion of a session, the collection of passwords is changed, which protects against side-channel attacks. Algorithms such as AES and HMAC are compared and implemented on an Arduino device to compute performance and security analyses under power constraints. Hence it is a safe verification mechanism for authentication and communication between IoT devices and the IoT server. Luciano et al. [17] present an authentication model and several use cases for IoT clouds, which allow users and manufacturers to access IoT devices securely. The use cases are based on the Identity Provider/Service Provider (IdP/SP) model. Santoso et al. [18] proposed a methodology to assure very high security for an IoT-based smart home system. For the authentication process, the system uses Elliptic Curve Cryptography (ECC) and the AllJoyn framework, running over a Wi-Fi network.
By android application based mobile device user can control the access of system for initial system configuration authentication of IoT devices. A wifi gateway node is used. Authentication process contains two steps, one is authentication between mobile device to IoT device and other is gateway to IoT device and after this process encrypted communication takes place. Proposed a method of secret vault (a set of keys) which shares secret between IoT server and IoT devices and after successful completion of each session, content of secure vaults changes. A 3-Way authentication message exchange mechanism between IoT server and IoT device is used.
The feasibility of algorithm on IoT devices with memory and computational power constraints. Secure against side channel attacks used to breach the security of the IoT devices.
Other security standards like authorization, confidentiality, and integrity can be included. In oneM2M security component, resource request from unauthorized user will be blocked and meanwhile for the authorized user the request will be passed.
There is a need to focus on achieving security goals (e.g., non-repudiation) for communications, devices/services, users, mobility, and the integration of resources. A sample mechanism for smart-thing authorization is designed and provides a basic access control security mechanism.
Policy management effort is reduced with the combination of ABAC and RBAC, and the resulting access control security mechanism is scalable and flexible.
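The ABAC/RBAC combination mentioned above can be made concrete with a minimal sketch. The roles, resource types, and attribute conditions below are hypothetical examples invented for illustration; the cited work's actual policies are not specified in the text.

```python
# Minimal illustrative sketch (not from the cited papers) of a hybrid
# RBAC + ABAC access-control check: a request is allowed only if the
# user's role grants the action (RBAC) AND the request's attributes
# satisfy the resource's condition (ABAC).

# RBAC part: roles mapped to permitted (resource type, action) pairs.
ROLE_PERMISSIONS = {
    "admin":  {("sensor", "read"), ("sensor", "configure")},
    "viewer": {("sensor", "read")},
}

# ABAC part: an attribute condition attached to each resource type.
ATTRIBUTE_POLICIES = {
    "sensor": lambda attrs: attrs.get("location") == "home"
                            and 6 <= attrs.get("hour", -1) <= 22,
}

def is_allowed(role, resource_type, action, attrs):
    """Return True only if both the RBAC and the ABAC checks pass."""
    rbac_ok = (resource_type, action) in ROLE_PERMISSIONS.get(role, set())
    abac_ok = ATTRIBUTE_POLICIES.get(resource_type, lambda a: False)(attrs)
    return rbac_ok and abac_ok

# A viewer can read a home sensor during the day...
print(is_allowed("viewer", "sensor", "read",
                 {"location": "home", "hour": 10}))   # True
# ...but cannot reconfigure it, regardless of attributes.
print(is_allowed("viewer", "sensor", "configure",
                 {"location": "home", "hour": 10}))   # False
```

Keeping role assignment coarse while attribute conditions carry the context-dependent detail is what reduces the number of policies to manage, compared with encoding every context into a separate role.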
A wide range of mechanisms and services, as well as trust and identity management, are needed in the IoT architecture. Authentication, encryption, and clone detection are completed in a few seconds; as it is very secure, it provides an efficient solution.
The security mechanism is accomplished, but the cost is somewhat high and needs to be reduced. More improvement is needed in the procedure, as the current method of entering the Device ID during authentication is inconvenient for the user.
IV. DATA SECURITY ISSUES AND CHALLENGES IN IOT:
A. Nowadays most devices are IoT-enabled, which results in huge challenges in organizing and managing each single device.
B. Data protection: In the Internet of Things environment, personal data need to be protected during the retrieval and processing of large volumes of data.
C. Data authentication: In the IoT environment, authentication of the source of data is required.
D. Data usage: For data usage, it is mandatory to protect the data against side-channel attacks.
E. Performance: The performance of the Internet of Things environment needs to be enhanced by reducing latency and increasing capacity.
V. CONCLUSIONS
The topic "Internet of Things" is becoming one of the most demanding and popular research topics in every field. As millions of devices are connected to the internet and communicate with each other, the communication channel needs to be secure. Security in IoT is also very important to making IoT successful. Users focus on using IoT because of its fast and automatic services but ignore the security aspects of IoT devices, the IoT server, and the communication channel. In this paper, we have presented a five-layered architecture of IoT, the characteristics of IoT based on security requirements, IoT elements, and IoT security standards. This paper has also presented a review of previous papers on the authentication of IoT devices and IoT servers, and has focused on data security issues and challenges.
The Effect of Cotton Seed Cake, Lucerne Hay Supplementation on Intake of Maize Stover and Weight Gain by Male Sahiwal Bull
The experiment was conducted at the Livestock Research and Development Station, Surezai, Peshawar, during March 2012 to study the effect of cotton seed cake and lucerne hay supplementation on intake of maize stover and weight gain by male Sahiwal bulls. Twelve (12) young Sahiwal bulls, of 280 kg average liveweight and 2 years of age, were randomly put into 4 groups of 3 animals under an intensive feeding system to determine the effect of different protein supplements on growth and intake of chopped, dried maize stover. A control group was fed stover ad libitum only, and the other groups were fed daily 750 g cottonseed cake/head, 1 kg lucerne hay, or 900 g of a lucerne/cottonseed cake mixture (66:34; w/w). Significant differences were observed in average daily liveweight gains. Animals on lucerne and its mixture registered higher daily gains (243 g and 330 g, respectively), followed by cottonseed cake (156 g); the control group lost weight (-8.0 g/d). Contrary to the liveweight gains, animals fed on lucerne and its mixture had lower maize stover intakes, 3.35 kg DM/animal/day and 3.70 kg DM respectively, while those on cottonseed cake and the control group ingested 4.72 kg DM and 4.16 kg DM maize stover, respectively. It is concluded that during the critical period in the suburbs of Peshawar, small-scale farmers can prevent loss in liveweight by utilizing simple, readily available rations.
Introduction
Due to the lack of a recommended beef breed in Pakistan, people mostly generate revenue to alleviate their poverty by rearing and selling Sahiwal males for meat purposes. In southern Peshawar, small-scale farmers practice mixed farming. Animals, especially ruminants, lose 20 to 30 percent of their liveweight during the dry season due to the low productivity of pasture (Hoste 1974). But the use of concentrate supplements, or the chemical, physical and microbial treatments generally recommended (Nicholson 1981; Hartley 1981) to improve the low energy and protein content of lignified straw, is not readily applicable to small-scale farmers. From a historical perspective, when Oklahoma researchers (Hibberd et al., 1987) added increasing levels of cottonseed meal to low-quality native grass hay diets containing equal amounts of corn, they observed a significant improvement in digestibility. Several growth trials have supported these results through comparable performance using either hay-based or silage-based diets (Brown, 1991).
There is a great need to prevent weight loss by using simple techniques and readily available rations.
A study was therefore undertaken to evaluate the effect of different protein supplements on the growth and feed intake of young bulls fed intensively on chopped dried maize stover.
Experimental location
The experiment was conducted at the Livestock Research and Development Station, Surezai, Peshawar, during March 2012 to study the effect of cotton seed cake and lucerne hay supplementation on intake of maize stover and weight gain by male Sahiwal bulls.

Animals

Twelve Sahiwal bulls, of average liveweight 280 kg and aged 3 years, were randomly assigned to 4 groups of 3 animals each. The animals were housed in a roofed, half-walled shed with a concrete floor, treated for worms and ectoparasites, and injected with A, D, K, E polyvitamins.
Feed and Feeding
A two-week adaptation period on maize stover was followed by a 14-week experimental period during which maize stover was given ad libitum at the rate of 120% of intake. The diets were composed of:
basal diet - ad libitum dried chopped maize stover.
diet 1 - basal diet + 750 g cottonseed cake/animal/day.
diet 2 - basal diet + 1000 g lucerne hay/animal/day.
diet 3 - basal diet + 900 g/day of a mixture of lucerne hay and cottonseed cake in the ratio of 66:34 (w/w).
The supplement was given separately each morning before the basal diet of maize stover. Water and a mineral supplement (50:50 common salt/bone meal) were given separately ad libitum. Chopped maize stover and refusals were weighed each morning. Animals were weighed fortnightly from the beginning of the adaptation period to the end of the experimental period.
The chemical composition of maize stover and lucerne was determined using the methods of Van Soest (1967) for cell wall materials, and nitrogen was measured by a Kjeldahl procedure; these figures are shown in Table 1. Liveweight gains and feed intake were subjected to analysis of variance, and Fisher's test was applied to determine significant differences among treatments.
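The one-way analysis of variance used here can be sketched from first principles. The per-animal gains below are hypothetical values invented to resemble the reported group means (the paper does not give individual-animal data), so the resulting F statistic is illustrative only.

```python
# Hedged sketch: a one-way ANOVA F statistic computed from first
# principles on *hypothetical* per-animal daily gains (g/day), chosen
# only to resemble the reported group means; the real per-animal data
# are not given in the paper.
from statistics import mean

groups = {
    "control":         [-12.0, -8.0, -4.0],
    "cottonseed cake": [150.0, 156.0, 162.0],
    "lucerne hay":     [235.0, 243.0, 251.0],
    "lucerne + CSC":   [322.0, 330.0, 338.0],
}

def one_way_anova_f(samples):
    """Return the F statistic for a one-way ANOVA."""
    all_obs = [x for g in samples for x in g]
    grand = mean(all_obs)
    k, n = len(samples), len(all_obs)
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in samples)
    ss_within = sum((x - mean(g)) ** 2 for g in samples for x in g)
    ms_between = ss_between / (k - 1)   # df1 = k - 1
    ms_within = ss_within / (n - k)     # df2 = n - k
    return ms_between / ms_within

f_stat = one_way_anova_f(list(groups.values()))
print(f"F({len(groups) - 1}, {sum(map(len, groups.values())) - len(groups)}) "
      f"= {f_stat:.1f}")
```

A large F relative to the F(3, 8) critical value would justify following up with a pairwise test such as Fisher's, as the authors did.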
Results and discussion
Intakes of maize stover and liveweights are given in Table 2. Cottonseed cake slightly but non-significantly increased maize stover intake, but both lucerne and the lucerne/cottonseed mixture reduced maize stover intake significantly (P<0.05).
Table 2. Effects of diets 1, 2 and 3 on dry matter intake of maize stover and on liveweight gains in cattle over an experimental period of 12 weeks. Table 2 shows that the liveweight gains (LWG) of the supplemented groups differed significantly from each other, and all significantly exceeded that from the basal diet of maize stover, which produced weight loss (-8 g/animal/day). In terms of LWG, lucerne hay alone was more effective than cottonseed cake alone. Animals supplemented with the mixture of lucerne hay and cottonseed cake registered the highest LWG, showing the mixed supplement to have been more beneficial for cattle growth than either supplement alone. In the three supplemented groups, the quantity of supplementary protein was the same, and thus LWG was significantly affected by the source of supplementary protein. Cao et al. (2008) observed that cows fed long lucerne (LL) hay spent more time ruminating than cows fed short lucerne (SL) hay, ranging from 293 to 336 min/day (p < 0.001). Total time spent chewing by cows increased from 505 to 574 min/day (p = 0.002) for SL and LL, respectively. Based on the results of that study, mid-lactation cows can be fed diets that contain ground maize grain and SL hay without negative effects on ruminal pH and nutrient digestibility; these findings are in line with our research findings. Morgan (1977) observed in sheep that finely ground cottonseed meal provided animals with by-pass protein, resulting in increased total feed intake. The low voluntary intake of maize stover could be the result of both low crude protein content and low digestibility (Finn 1976), in turn probably related to its high lignin content (12%), which limits the fermentable energy available (Lindberg et al., 1984).
Rations
While both leucaena hay alone and its mixture with cotton seed cake significantly decreased maize stover intake, the effect of leucaena alone was relatively more pronounced. This might have been due either to its lignin content (12%, as in maize stover) or to the toxic effect of mimosine (Jones et al., 1983), which was not assessed but, in view of the gains observed, was probably unimportant. Means in the same row not having common letters differ significantly (P<0.05).
Perspective of the Relationship between the Susceptibility to Initial SARS-CoV-2 Infectivity and Optimal Nasal Conditioning of Inhaled Air
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), as with the influenza virus, has been shown to spread more rapidly during winter. Severe coronavirus disease 2019 (COVID-19), which can follow SARS-CoV-2 infection, disproportionately affects older persons and males as well as people living in temperate zone countries with a tropical ancestry. Recent evidence on the importance of adequately warming and humidifying (conditioning) inhaled air in the nasal cavity for reducing SARS-CoV-2 infectivity in the upper respiratory tract (URT) is discussed, with particular reference to: (i) the relevance of air-borne SARS-CoV-2 transmission, (ii) the nasal epithelium as the initial site of SARS-CoV-2 infection, (iii) the roles of type 1 and 3 interferons for preventing viral infection of URT epithelial cells, (iv) weaker innate immune responses to respiratory viral infections in URT epithelial cells at suboptimal temperature and humidity, and (v) early innate immune responses in the URT for limiting and eliminating SARS-CoV-2 infections. The available data are consistent with optimal nasal air conditioning reducing SARS-CoV-2 infectivity of the URT and, as a consequence, severe COVID-19. Further studies on SARS-CoV-2 infection rates and viral loads in the nasal cavity and nasopharynx in relation to inhaled air temperature, humidity, age, gender, and genetic background are needed in this context. Face masks used for reducing air-borne virus transmission can also promote better nasal air conditioning in cold weather. Masks can, thereby, minimise SARS-CoV-2 infectivity and are particularly relevant for protecting more vulnerable persons from severe COVID-19.
Background to SARS-CoV-2 and COVID-19
Coronavirus disease 2019 (COVID-19) due to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a pandemic that, since its first identification in December 2019, caused approximately 178.5 million infections and 3.9 million deaths worldwide by 21 June 2021 [1]. SARS-CoV-2 is a membrane-enveloped virus with a 30 kb positive-sense RNA genome. It is related to two highly pathogenic coronaviruses of zoonotic origin that previously triggered limited disease outbreaks, SARS-CoV-1 in 2002-2004 and the Middle East respiratory syndrome coronavirus sporadically since 2012 [2]. SARS-CoV-2 is also related to several less pathogenic coronaviruses that cause mild to moderate common cold-like symptoms in up to 30% of people every year [2]. Early SARS-CoV-2 infection is commonly diagnosed by RT-qPCR for viral RNA in the nasal cavity and nasopharynx. The spike glycoprotein (S) located on the SARS-CoV-2 membrane envelope mediates binding of the virus to epithelial cells of the respiratory tract to initiate infection.
S is composed of an N-terminal S1 region containing a receptor binding domain that attaches to the angiotensin-converting enzyme 2 (ACE2) receptor on host cells and a C-terminal S2 region that subsequently mediates fusion between the virus and host cell membranes to allow the entry of viral RNA into cytoplasm [2,3]. SARS-CoV-2 can also enter cells by endocytosis followed by S-mediated fusion of the endosome and virus membrane [4]. S also facilitates cell-cell membrane fusion that additionally spreads virus [4].
Transfer to the mucous membranes of the eyes, nose, and mouth by fomite or direct contact is an established transmission method for respiratory viruses. It has increasingly become clear that nasal inhalation of virions present in exhaled breath and airborne droplets produced by sneezes and coughs of infected persons is a major route of infection for SARS-CoV-2 [5][6][7][8][9]. Once infected with SARS-CoV-2, most people, typically healthy young individuals, develop mild or no symptoms because they rapidly eliminate the virus from the upper respiratory tract (URT) through an effective immune response [10,11].
However, SARS-CoV-2 infection causes severe pneumonia in about 15% of patients and acute respiratory distress syndrome (ARDS), which is difficult to treat, in about 5% of patients [11]. The earliest immune response to the infection of airways by a respiratory virus, such as influenza A, is mainly innate and antigen-nonspecific; however, this is rapidly followed by an adaptive immune response involving antigen-specific B and T lymphocytes [12][13][14]. Accumulating evidence suggests that the immune response to SARS-CoV-2 infection follows a similar course [10,15]. Overactive and inappropriate adaptive and innate immune responses that ensue if SARS-CoV-2 is not eliminated early in the URT contribute to the characteristic immunopathology of ARDS and severe COVID-19 with lung and systemic involvement [10,11].
The incidence of many respiratory viral infections, including those caused by influenza and respiratory syncytial viruses, increases during winter in temperate zone countries. Preventive measures, such as vaccination, are therefore always undertaken before the onset of winter to mitigate influenza epidemics. The winter peak of infections has generally been attributed to the better environmental survival of the influenza and respiratory syncytial viruses at cold temperatures and increased opportunities for transmission when people spend more time indoors in winter [16][17][18].
An upsurge in winter infections is also characteristic of seasonal coronaviruses and generally the case with SARS-CoV-2 [19], although factors, such as population immunity and the emergence of more transmissible variants of SARS-CoV-2, can modify the relationship between the incidence of disease and environmental conditions. Other respiratory viruses have different transmission seasons with some transmitted throughout the year [18]. Striking outbreaks of COVID-19 among people working for prolonged periods at low ambient temperatures in meat and poultry processing factories occurred year round in many countries [20]. This has been attributed to crowded working conditions, but it was recognised more recently that low working temperatures may increase the risk of COVID-19 [20]. Influenza tends to be more prevalent during the rainy season in tropical countries, and this has been ascribed to greater congregation indoors during the rains [16,17]. It is also pertinent, however, that ambient temperatures are somewhat lower and the humidity is higher during the rainy season in the tropics, and these changes can be more pronounced at higher elevations. Mathematical analyses showed that the rates of exponential spread of SARS-CoV-2, rather than COVID-19 morbidity and death, correlated best with environmental temperature in northern temperate zone countries [21,22].
People with recent tropical ancestry are more prone to severe COVID-19 than persons of temperate zone ancestry in the UK and USA, with the difference attributed largely to socio-economic factors [23,24]. COVID-19 also results in more severe disease in elderly persons, and thus the relative risk of dying from COVID-19 increases exponentially with age in many countries [25]. This has been ascribed to age-related changes in the immune system reducing the ability to mount an effective immune response against SARS-CoV-2 and the increasing prevalence of other morbidities with age facilitating more severe COVID-19 [26,27]. Fatality rates following infection are also generally higher in males compared with females in all age groups [25,27].
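The statement that the relative risk of dying from COVID-19 increases exponentially with age can be made numerically concrete. The sketch below is purely illustrative: the reference age and the "years per doubling" parameter are hypothetical values chosen for the example, not estimates from the cited studies.

```python
# Illustrative sketch only: what an "exponentially increasing relative
# risk with age" means numerically. The reference age and the doubling
# interval are hypothetical parameters, not values from the cited work.
R0 = 1.0             # relative risk at the reference age (hypothetical)
DOUBLING_YEARS = 7   # assumed years of age per doubling of risk

def relative_risk(age_years, ref_age=30):
    """Relative fatality risk vs. the reference age: r = r0 * 2^((a - a0)/d)."""
    return R0 * 2 ** ((age_years - ref_age) / DOUBLING_YEARS)

for age in (30, 50, 70, 90):
    print(f"age {age}: {relative_risk(age):6.1f}x the reference risk")
```

Under these assumed parameters the risk ratio between a 90-year-old and a 30-year-old is several hundred-fold, which conveys why age dominates the other risk factors discussed here.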
An early unpublished observation that nasal warming and humidifying of inspired air (nasal air conditioning) may influence protective immune responses in the URT to SARS-CoV-2 infection [28] is now evaluated in detail with recently published data.
SARS-CoV-2 Infection in the Upper Respiratory Tract
An early virological study during the COVID-19 pandemic suggested that SARS-CoV-2 first infected and replicated in the nasopharynx and oropharynx of the URT, with the likely subsequent seeding of the lower respiratory tract and lungs by aspiration, although infection of the nasal epithelium was not investigated in this work [8]. Subsequent molecular studies demonstrated that ACE2 expression and SARS-CoV-2 infection are higher in the nasal epithelium than the lower respiratory tract, and therefore that the nasal epithelium is the probable initial infection site, followed by infection of the pharynx as a result of mucociliary clearance of virus towards the nasopharynx and later the likely seeding of the lower respiratory tract and lungs by aspiration [29][30][31]. This is also consistent with the presence of TMPRSS2, the protease that cleaves S to expose its fusion peptide, in the nasal epithelium [30]. The viral load of SARS-CoV-2, indicative of viral replication, is greater in the nasal epithelium compared with the pharynx following infection [31]. The nasal epithelium is an established site for the replication and transmission of influenza viruses [32]. Influenza viruses replicating in the nasal epithelium have been shown to reach the pharynx through mucociliary clearance and serve as a source of virus for subsequently infecting lungs [32] through the oral-lung axis [33]. Although definitive experimental evidence is not yet available, it is reasonable to assume that the same process also occurs with SARS-CoV-2.
Current data suggest that infection with SARS-CoV-2 in most people typically produces either mild or no symptoms of COVID-19 because the innate and adaptive immune responses are able to rapidly eliminate the virus from the URT [10], and that severe disease with lung involvement requiring hospitalisation occurs only when such immune responses are delayed or inadequate [10,11]. The susceptibility to infection with the virus is difficult to distinguish from susceptibility to severe COVID-19 because of varying disease phenotypes and multiple factors influencing the propensity to develop severe disease [10,11]. Early RT-qPCR test positivity in the URT is possibly the best presently available criterion for detecting early infection and is, therefore, useful in estimating susceptibility to infection and differentiating it from susceptibility to severe COVID-19. This article only considers the role of nasal air conditioning on susceptibility to infection with SARS-CoV-2 in the URT and not the many other factors that subsequently determine the development of severe COVID-19, which may follow the URT infection.
Physiological Importance of the Nasal Conditioning of Inspired Air
Healthy adults exchange approximately 1-1.5 × 10^4 litres of air per day with the environment through inspiration and expiration [34,35]. The nasal air conditioning of inspired air before it reaches the lungs is essential for healthy respiratory function [34,35]. The nasal mucosa possesses a specialised sub-epithelial network of capillaries to support air conditioning so that inspired air of approximately 25 °C and 35% relative humidity is warmed and humidified to about 33-34 °C and about 90% relative humidity before it enters the nasopharynx [34,35].
Further warming to the alveolar temperature of 37 °C and relative humidity of 100%, which is critically important for lung function [34,35], normally takes place in the rest of the respiratory airway through heat and moisture exchange with its mucosal surface. Conversely, air from the lungs loses humidity and warmth while progressing up the respiratory tract for expiration [34,35]. Nasal air conditioning capability varies with the temperature and humidity of inspired air [34,35]. Even small increases in the temperature of the nasal mucosa enhance the ability of the nose to adequately warm and humidify inspired air [36].
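The moisture load this conditioning imposes can be estimated with standard psychrometric formulas. The sketch below is an illustration, not from the cited sources: it uses the Magnus approximation for saturation vapour pressure to estimate how much water the airway must add per day to bring inhaled air from 25 °C / 35% RH to 34 °C / 90% RH, assuming 12.5 m³ of air exchanged daily (the midpoint of the range quoted above).

```python
# Hedged illustration: estimate the daily water load of airway
# humidification using the Magnus approximation for saturation vapour
# pressure. The 12.5 m^3/day ventilation figure is an assumed midpoint
# of the 1-1.5 x 10^4 L/day range quoted in the text.
import math

R_V = 461.5  # specific gas constant of water vapour, J/(kg*K)

def saturation_vp(temp_c):
    """Saturation vapour pressure in Pa (Magnus formula)."""
    return 611.2 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def absolute_humidity(temp_c, rel_humidity):
    """Water vapour density in g/m^3 at the given temperature (deg C) and RH (0-1)."""
    vp = rel_humidity * saturation_vp(temp_c)
    return 1000.0 * vp / (R_V * (temp_c + 273.15))

inhaled = absolute_humidity(25.0, 0.35)      # roughly 8 g/m^3
conditioned = absolute_humidity(34.0, 0.90)  # roughly 34 g/m^3
water_per_m3 = conditioned - inhaled
daily_grams = water_per_m3 * 12.5
print(f"~{water_per_m3:.0f} g water added per m^3, "
      f"~{daily_grams:.0f} g/day at 12.5 m^3/day")
```

Under these assumptions the airway contributes on the order of a few hundred grams of water per day, and the load grows as the inhaled air becomes colder and drier, which is the winter scenario discussed below.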
Innate and Adaptive Immune Response in the Upper Respiratory Tract in Protection against SARS-CoV-2 Infection
Protective antigen non-specific or innate immune mechanisms operative in the respiratory tract against viral infections have been recently reviewed [37]. Such innate immune mechanisms have been best characterised in influenza [12][13][14]. Analogous innate immune mechanisms that can protect against infection by SARS-CoV-2 in the URT are summarised in Table 1. The adaptive immune response also plays a role in clearing SARS-CoV-2 infections in the airways and is responsible for the enhanced protection in the URT conferred through vaccination or a prior resolved infection with SARS-CoV-2. The adaptive immune response that controls and eliminates infection in the URT is induced by viral antigens reaching the nasopharynx-associated lymphoid tissue (NALT) for presentation to T and B lymphocytes. The efficacy of this process is illustrated by the elicitation of protective secretory IgA in the nasal mucosa as well as protective systemic T cell and antibody responses by intranasally administered influenza vaccines [38]. Blood IgG antibody levels and CD8+ cytotoxic lymphocyte responses were correlated with protection against a second infection with SARS-CoV-2 in macaques [39]. Multifunctional antibodies with virus-neutralising, complement-activating, phagocytosis-promoting, and NK-cell-activating properties as well as interferon γ (IFNγ, a Type 2 IFN) producing T cells are robustly induced after vaccination with S, and such immune responses are associated with vaccine efficacy [40][41][42][43][44][45].
The principal adaptive immune mechanisms that can protect against infection by SARS-CoV-2 in the URT are summarised in Table 2.
Humidity of Inspired Air and Protection against SARS-CoV-2 Infection
Mucus produced by goblet cells forms a layer covering the ciliated epithelium of the URT and is the first barrier that needs to be overcome by respiratory viruses in order to infect epithelial cells. The mucosal barrier functions optimally at 100% relative humidity and the core body temperature of 37 °C [35]. In mice infected with influenza A virus, low air humidity impairs the mucociliary clearance of virions, Type 1 IFN-dependent antiviral defence in epithelial cells, and the repair of damaged epithelium [49].
The multiple ways in which humidity may affect the stability of respiratory viruses and the mucosal barrier function have been recently reviewed [37] and may be summarized as follows for the URT: (i) airborne enveloped viruses may be more stable at low and high humidities and less stable at intermediate humidities, (ii) mucoepithelial integrity is decreased by inspired air of low humidity, and (iii) mucociliary clearance, which removes virions trapped in the mucus from the airway, is reduced at low humidity. Inadequate humidification of inhaled air of relatively low moisture content that is characteristic of winter may, therefore, be expected to enhance infection of the URT epithelium by SARS-CoV-2.
Temperature of Inspired Air and Protection against SARS-CoV-2 Infection
Low temperature affects the stability of respiratory viruses in the environment and also compromises mucosal barrier function [37,49]. There is evidence to suggest that SARS-CoV-2 survives better at lower temperatures in human nasal mucus and sputum [50], which is pertinent to both airborne and fomite transmission. Essentially, it can be surmised that lower than optimal temperatures in the URT: (i) may improve the stability of the lipid bilayer in enveloped viruses, (ii) reduce the mucociliary clearance of viruses, and (iii) compromise URT-epithelium repair during infections [37,49].
Increasing evidence has demonstrated that lower-than-optimal airway temperatures also compromise the critical innate immunity against an initial virus infection that is normally conferred by the production of IFNs in airway epithelial cells [51]. Viral infection of epithelial cells leads to the production of type 1 IFNα and IFNβ as well as type 3 IFNλ. Viral RNA in the cytoplasm, including SARS-CoV-2 RNA, is recognised as a pathogen-associated molecular pattern (PAMP) by at least two prominent PAMP-recognising receptors (PRRs) that are products of the retinoic acid-inducible gene 1, termed RIG-1, and the melanoma differentiation-associated gene 5, termed MDA5. The activation of RIG-1 and MDA5 initiates a signalling cascade that leads to the phosphorylation of two important regulators of type 1 IFN production-termed IFN regulatory factors 3 and 7 (IRF3 and IRF7) [51].
SARS-CoV-2 RNA released within endosomes during the alternate endocytic entry pathway activates Toll-like receptors, which are PRRs on endosomal membranes, and these can also lead to the phosphorylation of IRF3 and IRF7 in the cytoplasm as well as the activation of the transcription factor NF-κB, which promotes inflammation. Phosphorylated IRF3 and IRF7 dimerise and translocate to the nucleus to initiate transcription and the secretion of IFNα and IFNβ, which bind to type 1 IFN membrane receptors for both interferons, termed IFNARs, on adjacent cells. IFNARs are composed of two subunits termed IFNAR1 and IFNAR2. Activation of IFNAR through the binding of type 1 IFNs initiates a signalling cascade, which results in the transcription of numerous interferon-stimulated genes (ISGs) that confer a virus infection-resistant state to epithelial cells surrounding the virus-infected cell [51]. A double-stranded RNA-dependent protein kinase R (an ISG) (i) phosphorylates an initiation factor eIF-2 to block the translation of viral RNA and (ii) triggers apoptosis of the infected cell.
Additionally, double-stranded RNA in the cytoplasm stimulates the enzyme 2′,5′-oligoadenylate synthase (OAS, an ISG) to produce 2′,5′-oligoadenylate, which activates a latent endonuclease, RNAse L, to degrade viral RNA. Type 3 λ IFNs are induced in a similar manner to type 1 α and β IFNs but, in comparison with type 1 IFNs, (i) have different IFN receptors that are expressed prominently in barrier epithelial cell membranes, (ii) are less inflammatory, and (iii) show more sustained production [51]. Type 3 λ IFN may, therefore, be particularly important in the earliest stage of SARS-CoV-2 infection in the nasal and pharyngeal epithelium and in clearing the infection with mild or no disease. Several proteins coded for by non-structural genes and open reading frames in the SARS-CoV-2 genome interfere with the induction of type 1 and 3 IFNs and their subsequent signalling pathways [52], which is consistent with the importance of these IFN pathways in resisting viral infection.
A common cold-causing rhinovirus replicated better at 33 °C than at 37 °C in primary mouse airway epithelial cell cultures. Increased replication was associated with the better PRR-mediated induction of type 1 and 3 IFNs as well as ISGs in the epithelial cells by the virus at 37 °C compared with at 33 °C [53]. Experiments on the infection of primary human bronchial epithelial cell cultures with rhinoviruses suggested that the inhibition of both apoptotic cell death and RNAse L may also be responsible for the better rhinovirus replication at 33 °C compared to 37 °C in these cells [54].
Emerging evidence now suggests that SARS-CoV-2 also replicates approximately 100-fold better at 33 °C than at 37 °C in human airway epithelial cells and that this is associated with better induction of type 1 and 3 IFN-mediated ISGs at 37 °C [55]. SARS-CoV-2 growing in human airway epithelial cells in culture was sensitive to inhibition by type 1 and 3 IFNs [56]. As human nasal temperatures are typically maintained at 33-34 °C under normal environmental conditions, while the temperature in the lower respiratory tract is 37 °C [34][35][36], these findings are consistent with observations that SARS-CoV-2 initially infects the nasal epithelium, followed by the nasopharynx and only subsequently the lower respiratory tract [29][30][31].
Protection conferred by the type 1 IFN pathway is also supported by the recent demonstration that persons with genetic defects in Toll-like receptor 3, a PRR that senses double stranded RNA, as well as genetic defects in IRF7 and the IFNAR1 subunit, were more susceptible to severe, life-threatening COVID-19 pneumonia [57]. Genome-wide association studies (GWAS) in severe COVID-19 also identified genes for the IFNAR2 subunit and OAS as required to protect against critical illness [58]. Severe and life threatening COVID-19 manifests in the lungs and systemically, and hence the above findings may be the result of defective innate immune responses in the lower airways.
However, such genetic defects will also manifest as suboptimal innate immune responses in the URT. Existing evidence suggests that if SARS-CoV-2 infection in the nasal epithelium and nasopharynx is rapidly eliminated through robust early immune responses, involving type 1 and 3 IFNs, then serious disease in the lungs through subsequent virus seeding will not occur or is minimised [10]. The evidence also suggests that type 1 and 3 IFN production is weaker at lower temperatures in the nasal cavity.
Colder-than-normal inspired air in winter may be expected to produce temperatures in parts of the nasal cavity that are lower than 33 °C, thereby likely further facilitating the infectivity of SARS-CoV-2 in the nasal epithelium and nasopharynx. The relationships between the temperature of inspired air, intranasal temperatures, and nasal air conditioning have not been systematically studied to date. However, modelling studies showed that the anterior nasal cavity is responsible for most of the warming of inspired air [59].
The effects of inadequate nasal warming of colder inspired air on adaptive and other types of innate immune cell responses remain to be fully ascertained; however, it is expected that they will also be compromised if early innate immune responses and the integrity of the airway epithelium are adversely affected by weaker nasal air conditioning. This can further promote SARS-CoV-2 infection of the nasal epithelium and nasopharynx.
Nasal Air Conditioning and Genetic Differences in Susceptibility to SARS-CoV-2 Infection
The propensity to develop severe COVID-19 as a result of defects in the innate immune genes for IRF7 and IFNAR1 [57] as well as IFNAR2 and OAS [58], which are also likely to affect early innate immune responses in the URT, has been outlined above in Section 6. Additionally, recent GWAS have identified large segments of human chromosomes inherited from Neanderthals that are associated with a major risk for susceptibility [60] or protection [61] against severe COVID-19. These two studies did not identify specific genes responsible for susceptibility or protection but demonstrated a variable distribution of the relevant chromosomal regions in different parts of the world.
The differential susceptibility and resistance to a variety of human infectious diseases are governed by genetic factors [62] and are particularly well studied in malaria [62][63][64][65]. However, genetic factors that specifically affect the initial infectivity of SARS-CoV-2, in contrast to the severity of ensuing COVID-19, have not been clearly established. The nearest experimental approach to investigate this difference recently examined SARS-CoV-2 RNA test positivity separately from disease phenotype in a US-based GWAS [66]. The results showed that blood group O was significantly associated with reduced SARS-CoV-2 test positivity and an association between some types of tropical ancestry and SARS-CoV-2 test positivity [66]. It is possible to hypothesise that the blood group O association is due to protection conferred in the URT by natural anti-A and anti-B antibodies universally present in blood group O individuals acting against infecting virions carrying membrane A and B antigens derived from infecting individuals of blood groups A and B.
Variations in nasal structure between human populations living in geographically dispersed locations with different climates have been correlated with the greater need to humidify and warm inspired air during cold and dry winters on one hand and the correspondingly reduced need for this in warm and humid climates of the tropics on the other [67][68][69]. It is reasonable to postulate that selection by respiratory viral infections in temperate zones in ancient times may have been a factor that contributed to such nasal variations. SARS-CoV-2 may, therefore, be more infectious in temperate zone countries to persons with a tropical ancestry [66] due to a weaker nasal air conditioning ability, and this may contribute, in addition to socioeconomic factors, to their observed propensity to develop more severe COVID-19 [23,24]. This remains to be definitively established but their potentially greater vulnerability to SARS-CoV-2 infection may be an important consideration for prioritising additional protective measures and vaccination.
Differences in Nasal Air Conditioning and the Age and Gender Differences in Susceptibility to SARS-CoV-2 Infection
Intranasal air temperature and humidity are lower in the elderly in comparison with younger persons, and this is associated with the atrophy of the nasal mucosa with increasing age [70]. Recent radiological studies confirmed such age-related changes [71]. Many nasal parameters also display a pronounced gender dimorphism in diverse populations [68]. Through possible differential nasal air conditioning, these variations may contribute in some small measure to age- and gender-related differences in the susceptibility to infection with SARS-CoV-2, which can then, in turn, impact the frequency of severe COVID-19 [25][26][27]. An increase in the basal level of immune activation or inflammation, reduced innate responses in the airway epithelium, and deterioration of the quality of adaptive immune responses involving B and T lymphocytes with age are additional immunological factors [26] that, together with a possibly reduced nasal air conditioning ability, may permit better replication of SARS-CoV-2 in the nasal epithelium and nasopharynx of older persons and, thereby, facilitate severe lower respiratory tract disease and increased mortality [25][26][27].
There are also gender-related differences in the innate and adaptive immunity [72] that can play a role in limiting SARS-CoV-2 infection of the URT epithelium, with an attendant impact on the severity of any ensuing COVID-19 [25]. The relative contributions of these other factors and possibly altered nasal air conditioning ability toward the age- and gender-related differences in the susceptibility to COVID-19 merit further investigation. It is encouraging, however, that age and gender do not affect vaccine efficacy [40][41][42][43][44][45], which also augurs well for immunity generated after recovery from a primary SARS-CoV-2 infection.
Other Factors Influencing Susceptibility to SARS-CoV-2 Infection
Air pollution can potentially play a role in the susceptibility to SARS-CoV-2 infection by providing air-borne particles to transport virions and affecting the barrier and innate immunity functions of the respiratory epithelium [73]. Available evidence also suggests that the relatively common URT conditions of allergic rhinitis and chronic rhinosinusitis do not increase the risk of COVID-19 [74]. The elevated production of TH2 cytokines, which is common in airway allergic diseases, reduces ACE2 but increases TMPRSS2 expression levels in the URT [75]. Rhinitis can, however, promote the transmission of SARS-CoV-2 from infected to uninfected persons as a result of increased nasal mucus production and sneezing. Another likely confounding factor is that prior infection with common cold coronaviruses may generate a degree of cross-reactive protective immunity against SARS-CoV-2 infection in the URT. Cross-reactions between common cold coronaviruses and SARS-CoV-2 have been documented at the level of CD4+ TH cells and CD8+ cytotoxic lymphocytes [46,47]. Vaccination against COVID-19 and previous resolved infections with SARS-CoV-2 augment adaptive immune responses in the URT and will, therefore, reduce the susceptibility to infection. In contrast, the emergence of SARS-CoV-2 variants with a higher affinity of the receptor binding domain of S for ACE2 and reduced binding to neutralizing antibodies [76], as well as potential other mechanisms to evade protective immunity in the URT, will increase the susceptibility to infection. The contribution of the nasal air conditioning ability on SARS-CoV-2 infectivity in the URT also has to be considered in the context of such additional factors. Figure 1 summarizes different aspects of the relationship between nasal conditioning of inhaled air and SARS-CoV-2 infectivity in the URT.
Conclusions
The importance of nasal air conditioning in SARS-CoV-2 infectivity remains poorly explored experimentally. The determination of SARS-CoV-2 infection rates and viral loads in the nasal cavity and nasopharynx in relation to the inhaled air temperature and humidity, age, gender, and genetic background is warranted. A better understanding of this may help to develop more effective measures to reduce infections and control the pandemic. Clinical studies on whether safe induction of type 1 and 3 IFNs in the URT reduces SARS-CoV-2 infection rates may be helpful to health personnel working in high risk situations.
The importance of nasal air conditioning predicts that simple measures, like minimizing exposure to cold air and keeping the nose warm with a scarf wrapped around the face and neck or a face mask may help to promote more efficient nasal air conditioning to reduce infections. Keeping the nose warm in cold temperatures is an ancient and common practice to protect against respiratory illnesses in many parts of the world. While infectivity and immune protection in the URT is connected with the subsequent development of serious or even critical illness involving the lungs, many more factors, including comorbidities, come into play in the development of severe COVID-19.
It is reasonable to conclude, however, that taking appropriate simple precautions to keep the nose warm to promote better nasal air conditioning in cold temperatures, particularly by more infection-susceptible persons, could minimise the initial infectivity of SARS-CoV-2 and other respiratory viruses. These measures promote more effective innate and adaptive immune responses in the URT. In the case of SARS-CoV-2, they supplement vaccination and previous infection with SARS-CoV-2 to further enhance adaptive immunity in the URT. The added advantage of using face masks and face scarves is that they also help reduce the person-to-person transmission of SARS-CoV-2 as well as other respiratory viruses. | 2021-07-30T13:26:12.380Z | 2021-07-24T00:00:00.000 | {
"year": 2021,
"sha1": "2943fec358284279764a4da5e1c63e53edb9f1f9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/15/7919/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bff3c42f0b811ae47297c583e3c303220f9f2162",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255097764 | pes2o/s2orc | v3-fos-license | Seed bank dynamics of the annual halophyte Salicornia ramosissima: towards a sustainable exploitation of its wild populations
Halophytes are able to survive in the high salted areas of the world, and have been recognized as sources of bioactive metabolites. There is a need to design sustainable strategies for the use of wild populations of halophytic species in order to avoid irrational gathering. Seed banks are essential for resilience and regeneration in salty ecosystems. We sampled annual seed production, aerial and soil seed banks and seed dynamics for a year in four wild populations of the annual halophyte Salicornia ramosissima growing in saltpans, in order to develop sustainable management practices for the use of its populations. The seed production of S. ramosissima depended mainly on plant density rather than on the number of seeds produced by each individual plant. In three of the four study populations, most of the annual seed production was exported out of the saltpans (> 79%) and only between 14 and 20% was accumulated in the initial aerial and soil seed banks. These initial seed banks were highly depleted during the year until the next fruiting period, when they accumulated less than 1% of the annual seed production (from 19 to 15,302 seed m−2). Salicornia ramosissima established a persistent soil seed bank in two of the four locations. Annual seed production would be key for the preservation of those S. ramosissima populations that do not establish persistent soil seed banks. In view of our results, each population of S. ramosissima should be studied independently to design population-specific management plans.
Introduction
Salt-tolerant plants (halophytes) have been traditionally used as healthy and functional foods and medicines for humans and animals (Marinoni et al. 2019; ElNaker et al. 2020) and are of pharmacological, cosmetic, biofuel and nutraceutical interest (Debez et al. 2017; Petropoulos et al. 2018; Faustino et al. 2019). Salt stress negatively affects soil fertility, causing land degradation. In fact, drought and soil salinity are the main factors responsible for crop yield reduction in the present scenario of climate change and sea level rise (Alae-Carew et al. 2020). Nevertheless, halophytes are able to survive in the highly salted areas of the world inhabited by an ever-growing number of people (Fita et al. 2015). In this context, halophyte cultivation would seem to be cheaper than other commercial crops, and can yield industrial value, remediate nutrient-rich effluents from aquaculture and rehabilitate lands through soil desalination (Custodio et al. 2017; Nikalje et al. 2018). Besides 'saline agriculture', wild populations of halophytes can be exploited as sources of 'wild food', but very few studies have analyzed the key ecological aspects of halophytes in the development of sustainable management practices (Godfray et al. 2010). Most studies have focused on analyzing the traditional uses of wild halophyte populations by local people and their bioactive properties (Pereira et al. 2020). Since people have become aware of the potential of some halophytes as part of a healthy diet, their natural growth sites are now threatened in some locations (Nae-Kyu and Lee 2012). Thus, there is a need to design sustainable strategies for the use of wild populations of halophytic species in order to avoid irrational gathering and genetic erosion threats.
Halophytes colonize saline environments, such as salt marshes, where soil salinity determines plant distribution, as high salinity conditions affect seed survival and germination (Ungar 1987). Salinity reduces germination and stimulates dormancy in halophyte seeds (Pujol et al. 2000). Some plant species survive in unfavorable environmental conditions in saline environments by building on persistent seed banks (> 1 year). Other plant species have transient seed banks (< 1 year) as the result of high and rapid germination and/or because their seeds quickly die in the soil (Walck et al. 2007; Parsons 2012). The reserves of mature viable seeds in the form of seed banks can be repositories of genetic information located on the plants (aerial seed bank) and in the soil (soil seed bank) (Roberts 1981). Germination from halophyte seed banks occurs mainly during germination windows, when soil salinity decreases after rainfalls (Gul and Weber 2001; Noe and Zedler 2001). So, seed banks are essential for resilience, secondary succession and regeneration in salty ecosystems (Honnay et al. 2007). Seed banks may be especially important for the long-term preservation of annual halophytes that may be absent in the aboveground vegetation and present in the seed bank waiting for favorable conditions (Egan and Ungar 2000).
Amaranthaceae (formerly known as Chenopodiaceae) is one of the most represented families of halophytes, with many species that are well known as plants of pharmacological and nutraceutical interest (Lefevre and Riviere 2020). Specifically, the species of the genus Salicornia L. have a broad geographical distribution and are among the most frequent halophytes with pharmacological and culinary uses. An array of functional nutrients has been detected in Salicornia species, justifying its usage as a 'sea vegetable' (Patel 2016). Salicornia ramosissima J. Woods is an annual extremophile halophyte that presents apparently leafless, succulent and articulated stems. Salicornia ramosissima colonizes European and North African salt marshes occurring in a whole range of salt marsh habitats, such as saltpans, where it germinates during winter–spring and dies off during autumn–winter (Davy et al. 2001). Salicornia ramosissima plants are fit for human consumption (Lima et al. 2020) and are useful for the ecological restoration of saline areas (Santos et al. 2017). The germination of this annual halophyte decreases at salinities higher than 0.2 M NaCl and is rapidly activated after salinity release (Rubio-Casal et al. 2003; Muñoz-Rodríguez et al. 2017). Although seed banks play a significant role in the optimum development of S. ramosissima populations (Egan and Ungar 1999; Silva et al. 2007; van Regteren et al. 2019), no study has analyzed in detail the aerial and soil seed banks of any Salicornia species and their seasonal dynamics.
Our main goal was to analyze S. ramosissima seed bank size and dynamics to help to design sustainable exploitation strategies of wild populations. We hypothesized that annual seed production of S. ramosissima would be very high, contributing to the establishment of large and permanent soil seed banks in each population. To test this hypothesis, we sampled annual seed production, aerial and soil seed banks and seed dynamics for a year in different populations of S. ramosissima colonizing saltpans.
Based on our results, we propose sustainable management practices for the use of wild populations of Salicornia as biomass sources.
Study area
The present study was carried out in tidal marshes in the Odiel Marshes Natural Park (37°12′32.3″N, 6°58′01.5″W, Gulf of Cádiz, Southwest Iberian Peninsula) (Online Resource 1, Fig. S1). The mean tidal range is 2.10 m, and the mean spring tidal range is 2.97 m in the Odiel Marshes. The semidiurnal tides can lead to hypersalinity in some mature marshes (Castellanos et al. 1994). The Odiel Marshes are subjected to a Mediterranean climate with Atlantic influence. Mean temperatures are approximately 17–24 °C, and annual precipitation is 250–850 mm with 75–85 days of rain during the autumn and winter months, and a 4–5-month dry period from approximately June–September (AEMET 2018), when potential evapotranspiration exceeds precipitation.
Sedimentary environment
We collected three sediment samples in each of the three zones of the four study saltpans in October 2017. Sediment samples were collected randomly using stainless steel cores of 50 mm diameter and 50 mm height. Samples were placed in hermetically closed polyethylene bags and stored at -5°C until analysis in the laboratory. Sediment electrical conductivity was used as a measure of soil salinity (Richards 1974). From each sample, a mix of 10 ml of sediment and distilled water (1:1, v:v) was homogenized, and the conductivity measured in the unfiltered supernatant with a conductivity meter (Crison Instruments 5064, Hach Lange, Barcelona, Spain). Sediment pH was measured in the same unfiltered supernatant used for conductivity, using a pH meter (Crison 25, Hach Lange, Barcelona, Spain) (Nieva et al. 2001). Sediment water content was gravimetrically determined using samples of 30 g of sediment (Contreras-Cruzado et al. 2017). Sediment organic matter content was determined by the loss-on-ignition method. Organic matter content was calculated as the proportion of weight lost as compared to the weight of the dry sample before incineration (Gavlak et al. 2005).
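The loss-on-ignition calculation described above reduces to simple arithmetic: organic matter content is the proportion of dry mass lost during incineration. The sketch below illustrates this; the function name and the 30 g sample values are illustrative assumptions, not data from this study.

```python
def organic_matter_percent(dry_mass_g: float, ashed_mass_g: float) -> float:
    """Loss-on-ignition: organic matter as % of the pre-ignition dry mass."""
    if ashed_mass_g > dry_mass_g:
        raise ValueError("ashed mass cannot exceed dry mass")
    return (dry_mass_g - ashed_mass_g) / dry_mass_g * 100.0

# Hypothetical 30 g dry sediment sample losing 2.4 g on ignition
print(round(organic_matter_percent(30.0, 27.6), 1))  # → 8.0
```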
Annual seed production
Live plant density at the end of the flowering period of S. ramosissima, when the vast majority of the plants had ripened fruits, was recorded by counting the total number of live plants in 10 randomly chosen plots (20 × 20 cm) inside the S. ramosissima zone in each population in October 2017 for P1, P2 and P3, and in November 2017 for the delayed P4; withered plants from the previous flowering period were not counted. Seed production per plant was recorded for 30 randomly collected plants in each population. The production of seeds per plant was calculated in the laboratory using two methods. For small plants (with fewer than 50 seeds), we counted all their seeds under an optical microscope. For large plants (with more than 50 seeds), we separated all the branches from the principal axis and weighed them. Then, three randomly chosen branches were weighed individually and their total number of seeds counted under an optical microscope. The seed production per plant was calculated as the product between the quotient of seeds per weighed unit and the total weight of the branches for each plant. Annual seed production per plot was calculated as the product between plant density and seed production per plant. Finally, we calculated mean annual seed production per square meter in each population.
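The two-step extrapolation described above (seeds per weighed branch subsample scaled to whole-plant branch mass, then multiplied by plant density and rescaled from the 20 × 20 cm plots to square meters) can be sketched as follows; all names and numeric values are hypothetical illustrations, not data from the study.

```python
PLOT_AREA_M2 = 0.20 * 0.20  # 20 cm × 20 cm sampling plots

def seeds_per_plant(subsample_seeds: int, subsample_mass_g: float,
                    total_branch_mass_g: float) -> float:
    """Extrapolate a large plant's seed count from one weighed branch subsample."""
    return subsample_seeds / subsample_mass_g * total_branch_mass_g

def annual_production_per_m2(plants_per_plot: int,
                             mean_seeds_per_plant: float) -> float:
    """Plot-level production (density × seeds per plant) rescaled to one square meter."""
    return plants_per_plot * mean_seeds_per_plant / PLOT_AREA_M2

sp = seeds_per_plant(120, 0.5, 2.0)             # 480 seeds for this hypothetical plant
print(round(annual_production_per_m2(10, sp)))  # → 120000
```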
Aerial seed bank
The aerial seed bank, or storage of seeds on the plants after seed dispersal, was studied at two moments. The initial aerial seed bank included seeds retained by the current year plants, and it was recorded just after the current seed dispersal in November 2017 for P1, P2 and P3, and in December 2017 for P4. The remnant aerial seed bank included seeds retained by plants just before the seed dispersal of the next flowering period, and it was recorded in October 2018 for each population (Fig. 1). In both cases, we calculated the number of seeds retained per plant following the same methodology reported previously for annual seed production. The initial aerial seed bank was calculated using the density of live plants, and the remnant aerial seed bank was obtained using the density of withered plants from the previous flowering period that still remained in the population (Fig. 1).
Soil seed bank
The soil seed bank was studied at two moments: the initial soil seed bank, just after the current seed dispersal (recorded in October 2017 for P1, P2 and P3, and in November 2017 for P4), and the remnant soil seed bank, just before the seed dispersal of the next flowering period (recorded in October 2018 for each population) (Fig. 1). In each sampling, we randomly took 10 sediment samples per zone (unvegetated, S. ramosissima and scrub zones) at the four study saltpans using stainless steel cores of 50 mm diameter and 50 mm height. Sediment samples were placed in polyethylene bags, hermetically sealed and transported to the laboratory for analysis. In the laboratory, the sediment samples were frozen until analyzed. Dry sediment samples were sieved through a 0.4 mm-mesh sieve to separate the seeds from sediments, and the material retained on the sieve was examined under a magnifying glass (Polo-Ávila et al. 2019).
Seed dynamics
We calculated the percentage of the annual seed production incorporated into initial aerial and soil seed banks at the S. ramosissima zone, into the soil seed bank in adjacent vegetation zones and dispersed out of the study saltpans. The annual loss of aerial and soil seed banks was calculated as a percentage of seeds in the initial seed banks not present in the remnant seed banks (Fig. 1).
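The seed dynamics bookkeeping described above is a pair of percentage calculations: each bank's share of the annual production, and the annual loss as the fraction of the initial bank missing from the remnant bank. A minimal sketch with hypothetical seed densities (not the study's data):

```python
def share_of_production(bank_seeds_m2: float, annual_production_m2: float) -> float:
    """Seed bank size as a percentage of annual seed production."""
    return bank_seeds_m2 / annual_production_m2 * 100.0

def annual_bank_loss(initial_m2: float, remnant_m2: float) -> float:
    """Percentage of seeds in the initial bank absent from the remnant bank."""
    return (initial_m2 - remnant_m2) / initial_m2 * 100.0

# Hypothetical population: 100,000 seeds m−2 produced, 20,000 in the initial
# soil seed bank, and 1,500 still present one year later
print(round(share_of_production(20_000, 100_000), 1))  # → 20.0
print(round(annual_bank_loss(20_000, 1_500), 1))       # → 92.5
```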
Data analysis
The data were analyzed using Statistica 8.0 (StatSoft Inc., USA). Deviations from the arithmetic means were calculated as standard error (SE). Significant differences were considered when p < 0.05. Data series or their transformations (using log(x + 1), 1/(x + 1) or √x functions) were tested for homogeneity of variance and normality with the Levene test and the Kolmogorov-Smirnov test, respectively. The data series were compared between populations or vegetation zones using a one-way ANOVA and Tukey's test as post hoc analysis. When transformed data series did not show a normal distribution or homogeneity of variance, they were analyzed using the Kruskal-Wallis (H) and Mann-Whitney U tests with population or vegetation zones as grouping factors. The nonparametric Spearman's correlation coefficient (ρ) was used to analyze the relationships between initial and remnant aerial seed banks, annual seed production and density of plants.

Fig. 1 Schematic representation of the dynamics of seed production in Salicornia ramosissima and of aerial and soil seed banks from the moment before seed dispersal to the same moment of the next flowering period
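The original analyses were run in Statistica; as an illustration of the last step mentioned above, Spearman's ρ is simply Pearson's correlation computed on ranks. The pure-Python sketch below uses made-up numbers (not the study's data) for plant density versus remnant aerial seed bank across four populations.

```python
def rankdata(xs):
    """Assign 1-based ranks, averaging ranks across tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's ρ: Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical values: plant density vs. remnant aerial seed bank (4 populations)
density = [308, 233, 415, 3610]
remnant = [19, 150, 900, 15302]
print(round(spearman_rho(density, remnant), 2))  # → 0.8
```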
Soil seed bank
The initial soil seed bank in S. ramosissima zones was similar for each population (H 3,120 = 1.98, p = 0.577), whereas the remnant soil seed bank tended to be higher at P1 and P2 (c. 350 seeds m−2) than at P3 and P4, where no seed was recorded (H 3,120 = 7.53, p = 0.057) (Fig. 4). In the four study saltpans, the initial soil seed bank was the highest in the zone colonized by S. ramosissima, but while it was significantly different to that reached in the two other zones for P2 and P4, it was similar to that in the adjacent soils colonized by halophilous scrubs for P1 and P3 (P1, H 2 (Fig. 4).
In contrast, the remnant soil seed bank was similar in each vegetation zone at every study saltpan, always lower than 500 seeds m−2 (Kruskal-Wallis test, p > 0.05) (Fig. 4).
Seed dynamics
The percentage of the annual seed production retained in the initial aerial seed bank ranged from 2.91% for P2 to 75.93% for P3. The seeds accumulated in the initial soil seed bank varied from 0.36% for P4 to 17.10% for P2. Less than 1.00% of the annual seed production was dispersed from S. ramosissima zones to other zones in the study saltpans for each population. Predation and dispersal out of the saltpans were between 18.60% (P3) and 85.93% (P4) of the annual seed production (Fig. 5). The percentage of the initial aerial seed bank predated and dispersed from the plants throughout the year ranged from c. 71.84% for P2 and P4 to c. 99.87% for P1 and P3 (H 3,40 = 19.8, p < 0.0001; U test, p < 0.05). Thus, the percentage of seeds retained in the remnant aerial seed bank was always lower than 0.10% of the annual seed production, ranging from 19 to 15,302 seeds m−2 (Figs. 3 and 5). The percentage of seeds depleted from the initial soil seed bank during the year ranged from c. 92.55% for P1 and P2 to 100% for P3 and P4 (H 3,40 = 8.2, p = 0.043; U test, p < 0.05). These percentages corresponded to less than 0.35% of the annual seed production for each population, varying from 0 to 407 seeds m−2 (Figs. 4 and 5).

Plant density (a), seed production and number of seeds per plant (b), and annual seed production and aerial seed banks just after seed dispersal and just before the next seed dispersal for four populations of Salicornia ramosissima (P1-4) colonizing saltpans in the Odiel Marshes (Southwest Iberian Peninsula). Different letters indicate significant differences between populations for the same trait (Mann-Whitney test, p < 0.05). The data are mean ± SE (n = 10)
Discussion
Our results show that the seed production of the annual halophyte S. ramosissima depends mainly on plant density rather than on the number of seeds produced by each individual plant. In three of the four study populations, most of the annual seed production was exported out of the saltpans (> 79%), and only between 14 and 20% was accumulated in the initial aerial and soil seed banks. These initial seed banks were highly depleted during the year until the next fruiting period, when they accumulated less than 1% of the annual seed production (from 19 to 15,302 seeds m−2). In fact, S. ramosissima established a persistent soil seed bank in only two of the four populations. In this context, annual seed production would be key for the preservation of those S. ramosissima populations that do not establish persistent soil seed banks. We recorded high annual seed production (> 48,000 seeds m−2) for S. ramosissima growing in the harsh environmental conditions of saltpans marked by high sediment salinities (16–63 ppt). High salinity induces S. ramosissima seed dormancy without affecting seed viability (Rubio-Casal et al. 2003; Muñoz-Rodríguez et al. 2017). According to Davy et al. (2001), plant density varies greatly among Salicornia populations, regulated by a combination of density-dependent seed production and density-independent seedling mortality due to high levels of morphological phenotypic plasticity. In our study, annual seed production increased with the density of individual plants regardless of the seed production per individual plant, which was similar for each population. In fact, the highest seed production per individual plant (604 seeds plant−1) was obtained for the population with the highest plant densities (3610 mature plants m−2), yielding 2,179,383 ± 614,577 seeds m−2. This was probably due to the low plant densities recorded in our populations in relation to other studies that have reported close to 30,000 mature plants m−2 (Davy et al. 2001).
The seed dynamics changed markedly between Salicornia populations. For example, between 14.1 and 80.5% of the annual seed production (19,603–306,539 seeds m−2) was stored in aerial and soil seed banks. The initial aerial seed bank was larger than the initial soil seed bank in three of the four study populations, accumulating more than 2800 seeds m−2 in each population. Aerial seed banks help seed dispersal over time, and may protect seeds from being predated in the soil (Santini and Martorell 2013) and from unfavorable soil conditions such as high salinities (El-Keblawy and Bhatt 2015). Between 71.8 and 99.8% of the initial aerial seed bank may have been dispersed or predated during the first year, whereas the initial soil seed bank was totally depleted in two of the four study populations. The remnant aerial seed bank also increased together with the density of plants, storing between 19 and 15,302 seeds m−2 in different populations. Thus, between 19.5 and 85.9% of annual seed production was predated and exported out of Salicornia populations just after seed dispersal. Salicornia ramosissima shows a short-distance dispersal strategy since its seeds have hooked hairs that help them to anchor to sediments and vegetation (Polo-Ávila et al. 2019). Salicornia seeds disperse mostly on the soil surface since they float for less than one day (Huiskes et al. 1995). Genetic analyses have shown a strong tendency to inbreeding as a result of a lack of seed immigration from outside Salicornia populations (Davy et al. 2001). These previous observations together with our results, which show that less than 0.9% of the annual seed production was dispersed from Salicornia populations to adjacent zones in each study saltpan, suggest that predation would be more important than seed exportation in the study populations.
Polo-Ávila et al. (2019) stated that S. ramosissima establishes persistent seed banks, but we recorded that the soil seed bank was drastically reduced, even totally depleted, during the first year after seed dispersal. This result is in line with Jefferies et al. (1981), who recorded the depletion of the seed bank of Salicornia europaea L. in the middle of the first summer following dispersal. The diminution of S. ramosissima seeds from its soil seed bank may be due to its high and fast germination during favorable conditions (Parsons 2012), and to seed predation recorded in different Salicornia species (Davy et al. 2001). Tessier et al. (2000) recorded the absence of a persistent seed bank for the annual species Suaeda maritima (L.) Dumort. due to very high germination during low salinity periods. The transitory condition of the soil seed bank for some S. ramosissima populations recorded in our study is in accordance with previous studies on different Salicornia species (Philipupillai and Ungar 1984; Thompson et al. 1997; Wolters and Bakker 2002; Rubio-Casal et al. 2003).
Conclusions
In view of our results, each wild population of S. ramosissima should be studied independently to design population-specific management plans for sustainable exploitation. For example, the establishment of a large persistent soil seed bank in some populations enables the collection of high percentages of adult plants (c. 70-95% of the total population; 233-415 plants m−2), ensuring the formation of an initial soil seed bank with double the number of seeds compared to the recorded number of mature plants. In contrast, plant collection should be limited to 9% of the mature plants (c. 308 plants m−2) in other populations to achieve the same goal. In any case, more than 230 plants m−2 could be extracted from each study population, ensuring the formation of an initial soil seed bank with double the number of seeds compared to the recorded number of mature plants (Online Resource 3, Table S1). In addition, reintroduction experiments by sowing S. ramosissima seeds should be carried out in parallel with studies on the sustainable harvesting of S. ramosissima in order to create new populations and reinforce currently existing populations, as reported for S. europaea (Nae-Kyu and Lee 2012).

Fig. 5 Percentages of annual seed production accumulated in the initial aerial seed bank (black), remnant aerial seed bank (RASB), initial subterranean seed bank (ISSB; gray), remnant soil seed bank (RSSB), total exported (white), exported within saltpans (In) and predated and exported out of saltpans (Out) from four populations of Salicornia ramosissima (P1-4) in the Odiel Marshes (Southwest Iberian Peninsula). The data are mean (n = 10)
| 2022-12-26T15:07:20.308Z | 2021-04-03T00:00:00.000 | {
"year": 2021,
"sha1": "4c268f534f3c492a55833cada6a3a79bdfbb5c43",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-176135/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "4c268f534f3c492a55833cada6a3a79bdfbb5c43",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
The Effectiveness of Rajyoga Meditation as an Adjuvant for Panic Anxiety Syndrome
Objective: Anxiety disorders are among the most prevalent psychiatric conditions, impairing a person's quality of life, ability to function, and productivity, and consequently causing a loss of national income. Rajyoga meditation (RM) is a form of meditation that is performed without rituals or mantras and can be practiced anywhere at any time. In this study, we attempted to evaluate the modulation of psycho-physiological parameters in panic disorder patients by a short-term RM technique. Methods: In this prospective randomized controlled study, 110 patients with panic disorder were randomized into two groups, Group A (standard treatment + RM) and Group B (standard treatment). Participants in both groups completed the sleep quality score, Patient Health Questionnaire-9 (PHQ-9), Panic Disorder Severity Scale (PDSS), and Hamilton Anxiety Rating Scale (HAM-A) questionnaires before starting the study (baseline) and at the end of the 8th week. The study groups were compared at baseline and at the end of 8 weeks. Results: The PDSS/HAM-A scores were not statistically different between the study groups at baseline (P > 0.05); however, there was a statistically significant difference in the mean z-scores of the PDSS and post-HAM-A scores between the study groups at 8 weeks (P < 0.001). A composite score was created by adding the z-scores of the pre- and post-PDSS and HAM-A. We found a statistically significant difference in post-composite scores between the study groups (P < 0.001). Analysis of covariance for PDSS and HAM-A between the study groups was statistically significant (P < 0.001). Conclusion: When used in conjunction with pharmaceutical treatments for panic disorder, RM is an effective adjuvant therapy. The key factors are adherence and motivation while being supervised by a licensed therapist.
Introduction
Anxiety disorders (ADs) are among the most prevalent psychiatric conditions, impairing a person's quality of life, ability to function, and productivity, and consequently causing a loss of national income. According to the World Mental Health Survey, the lifetime prevalence of ADs varies between 3% and 19% among nations. [1] According to the Global Burden of Disease 2015 report, ADs contributed the sixth-highest number of years lived with disability. [2] Despite the enormous disease burden, ADs are generally underdiagnosed and undertreated, which results in significant health damage and financial loss. Panic disorder (with or without agoraphobia), agoraphobia without panic, social phobia (social AD), specific phobia, generalized AD (GAD), acute stress disorder, posttraumatic stress disorder, obsessive-compulsive disorder, and AD not otherwise specified are the major categories of ADs listed in the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association). [3] Panic attacks happen frequently and without warning in people with panic disorder. Even when there is no obvious threat or trigger, these attacks are distinguished by a rapid wave of panic, discomfort, or a sense of losing control. Not everybody who has a panic episode goes on to have a panic disorder.
One of the most prevalent ADs is panic disorder, with lifetime prevalence rates in the general population estimated to range between 2.1% and 4.7%. [4,5] Since panic disorder is frequently accompanied by a chronic course that causes financial hardship and a loss of quality of life, it is crucial to properly prevent and treat it.
Psychological and pharmaceutical therapies are two of the main strategies for treating panic disorder. Cognitive-behavioral therapy is one such psychological intervention. In patients with panic disorder and concurrent comorbid medical ailments, elements of these therapy regimens may additionally improve the comorbid conditions.
Meditation is usually defined as a form of mental training that aims to improve an individual's psychological capacities. Meditation encompasses a family of complex practices that include yoga meditation, mindfulness meditation, mantra meditation, and tai chi. [6] Rajyoga meditation (RM) is a form of meditation that is performed without rituals or mantras and can be practiced anywhere at any time. The word Rajyoga is derived from Raja, meaning king, and yoga, meaning union between the soul (spiritual energy) and the supreme soul (ocean of spiritual energy). [7] The objective of this study was to evaluate the modulation of psycho-physiological parameters in panic disorder patients by a short-term RM technique.
Methods
In this prospective randomized controlled study, 110 patients with panic disorder, as defined by the DSM-5 criteria, [8] were selected by consecutive sampling from the Psychiatry Department at the All India Institute of Medical Sciences, Patna. A controlled parallel-group design was applied.
All participants completed a preapproved demographic questionnaire and the Patient Health Questionnaire-9 (PHQ-9), which were used for screening purposes.
Patients of either gender in the age group of 18-60 years, diagnosed according to the DSM-5 criteria for panic AD with comorbid depression and a PHQ-9 score <20, were included in the study. Patients with a history of comorbid psychiatric syndromes, subjects with a history of neurological trauma, vascular disease, or organic brain disorder, and patients with alcohol addiction were excluded from the study.
The study protocol was explained to all subjects in their language of communication, and they were asked to sign the informed consent before the beginning of the study. Prior ethical approval was taken from the Institute Ethics Committee vide letter number AIIMS/Pat/IEC/2019/411 dated November 30, 2019. The study was also registered on the trial registry portal of the Government of India vide Registration Number CTRI/2020/12/029462 before starting the study.
The study participants were randomized into two groups using computer-generated block randomization tables. The two groups were defined as follows:
• Group A: experimental (standard treatment + RM) group, n = 55
• Group B: control (standard treatment) group, n = 55
Standard treatment
Selective serotonin re-uptake inhibitors (SSRIs) and cognitive behaviour therapy (CBT), as per psychiatric advice and under clinical psychologist supervision, were given as standard treatment.
Rajyoga meditation
The RM technique from the Brahma Kumaris school of thought, which emphasizes the cultivation of positive thoughts for oneself and others through guided technique, was used in the RM practice in the current study. With a focus on the idea of the supreme soul, a hypothesized primary source of energy for each and every person, the practice particularly emphasizes the intentional transfer of the cognitive process from a body-conscious to a soul-conscious state.
The intervention arm of the study population was subjected to supervised RM sessions, tailored specifically for anxiety spectrum disorders, by a team of long-term (>10 years) Rajyoga meditators. The RM session protocol (six domains) was practiced sequentially by all intervention-group participants for 25-30 min each morning and evening, on 5 or more days every week, for 8 weeks.
Meditation quality was assessed by subjective and objective feedback.
Subjective feedback
Each fortnight, the six components of the meditation practice were assessed for self-reported effectiveness by the participants on a 0-10 scale, with a score of 0 if the participant could not attain any focus for the meditation practice most of the time during the fortnight and a score of 10 if they could attain the desired focus most of the time during the fortnight. The maximum possible score was 60 for a fortnight, and the maximum possible total score for the complete duration was 240.
Objective feedback
Each fortnight, the six domains were scored between 0 and 4 for the participant's level of understanding of the concept of meditation. A score of 0 was given if a participant could not understand the domain and 4 if the participant demonstrated a very good understanding of it. The maximum score was 28 for a fortnight and 140 for the total period [Table 1].
Participants in both groups completed the sleep quality score, PHQ-9, PDSS, and HAM-A questionnaires before starting the study (baseline) and at the end of the 8th week.
Data analysis
The data were compiled using Microsoft Excel (Microsoft Corporation, 2010) and further analyzed using SPSS version 27.0 (IBM Corp., 2020; IBM SPSS Statistics for Windows). All normally distributed continuous variables were compared by computing the mean and standard deviation with a 95% confidence interval. Nonparametric tests were used for nonnormally distributed variables; the outcomes are presented as proportions. Pearson's correlation was used for assessing associations. A value of P < 0.05 was taken as the level of significance.
Sociodemographic characteristics
The sociodemographic data of the participants of the two groups are presented in Table 2. The two study groups were comparable in terms of sociodemographic variables (P > 0.05).
Table 3 shows the comparison of comorbidities among the study groups. The two study groups were comparable in terms of mean sleep hours, diagnosis, mean duration of untreated period, family history, and comorbidities (P > 0.05). Figure 1 shows that panic disorder and panic attacks with anxiety were distributed equally among the study groups (P > 0.05).
We calculated individual z-scores for the pre- and post-PDSS/HAM-A.
We found that the pre-PDSS/HAM-A scores were not statistically different between the study groups (P > 0.05); however, there was a statistically significant difference in the mean z-scores of the post-PDSS and post-HAM-A scores between the study groups (P < 0.001).
The composite score was created by adding the z-scores of the pre- and post-PDSS and HAM-A. We found a statistically significant difference in post-composite scores between the study groups (P < 0.001) [Table 4].
We used the preintervention PDSS/HAM-A as a covariate when comparing the postintervention PDSS/HAM-A between Group A and Group B. Therefore, we ran a one-way ANCOVA with: (a) the postintervention PDSS/HAM-A as the dependent variable; (b) the control and intervention groups as levels of the independent variable; and (c) the preintervention PDSS/HAM-A as the covariate.
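As a rough illustration of this analysis pipeline (not the authors' SPSS code), the sketch below simulates pre/post scores for two groups of 55 patients, builds the composite z-score described above, and runs the one-way ANCOVA as a nested-model F-test. All numbers, including the hypothesized group effect, are made up for the sketch.

```python
# Sketch only: simulated data, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 55  # patients per group, as in the study
pre = rng.normal(15, 3, 2 * n)                 # simulated baseline PDSS-like scores
is_rm = np.repeat([1.0, 0.0], n)               # 1 = Group A (standard + RM)
post = pre - 6 * is_rm - 2 * (1 - is_rm) + rng.normal(0, 2, 2 * n)

# Composite score as described in the paper: sum of standardized scores.
def z(x):
    return (x - x.mean()) / x.std(ddof=1)

composite = z(pre) + z(post)

# One-way ANCOVA via nested OLS fits: does group membership explain
# post-intervention scores once pre-intervention scores are adjusted for?
def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

ones = np.ones(2 * n)
rss_full = rss(np.column_stack([ones, pre, is_rm]), post)      # covariate + group
rss_reduced = rss(np.column_stack([ones, pre]), post)          # covariate only
f_group = (rss_reduced - rss_full) / (rss_full / (2 * n - 3))  # F(1, 107)
```

A large F for the group term after adjusting for the baseline covariate corresponds to the significant group difference the paper reports.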
Tables 5 and 6 show whether there was an overall statistically significant difference in postintervention PDSS/HAM-A between the study groups once their means had been adjusted for preintervention PDSS/HAM-A.
The data have further been evaluated for the possible correlation between the difference in clinical scores of the
Discussion
To improve a patient's total health and well-being, complementary treatments or alternative medicines are utilized in conjunction with traditional medical management. [9] Yoga, massage therapy, progressive muscle relaxation, acupuncture, acupressure, reflexology, aromatherapy, music therapy, guided imagery, and meditation are a few examples of these treatments. Normally, medications are used to treat anxiety before and after surgery, but mounting research supports the value of complementary therapies in the healing process. These treatments are used to lessen stress, headaches, the length of hospital stays, and the need for sedatives, and to improve patients' relaxation, sleep, satisfaction, and overall health. [10,11] Our study found that the z-scores of PDSS and HAM-A were not statistically different between the study groups (P > 0.05) during the preintervention period; however, there was a statistically significant difference in the mean z-scores of the PDSS and post-HAM-A scores after the intervention (P < 0.001). Our study also found a statistically significant difference in post-composite scores between the study groups (P < 0.001). We also found an overall statistically significant difference in postintervention PDSS/HAM-A between the study groups once their means had been adjusted for preintervention PDSS/HAM-A.
According to neuroimaging studies, meditation causes the prefrontal cortex, thalamus, and inhibitory thalamic reticular nucleus to become more active, as well as functional differentiation of the parietal lobe. [12] Anxiolytic effects may result from the neurochemical changes brought on by meditation: enhanced parasympathetic activity, decreased locus ceruleus firing with decreased noradrenaline, an enhanced GABAergic drive, increased serotonin, and lower levels of the stress hormone cortisol all reduce anxiety during meditation. The anti-anxiety effects of meditation are also influenced by elevated levels of endorphins and arginine-vasopressin. [12] According to a study by Parmentier et al., [13] mindfulness meditation lessens anxiety and depressive symptoms by regulating emotions, reducing worry, and decreasing ruminative thinking.
Another study, a randomized clinical trial by Hoge et al. (2013) on a subset of participants with GAD, found that although both the MBSR group and the stress management education group showed a significant reduction in the HAM-A score after 8 weeks, the group that received the MBSR intervention had a significantly lower stress score and better tolerability. [14] The prefrontal cortex and amygdala are crucial for controlling emotions and the stress response. The prefrontal cortex is known to focus more on the positive handling of similar inputs, whereas the amygdala is known to attribute meaning to unpleasant emotional stimuli. The cortical brain region's inverse relationship to a person's behavior in a stressful circumstance appears to be modulated by meditation. According to studies, during meditation, prefrontal activity surpasses the amygdala reaction. [15,16] Rajyoga is a term that is frequently used in relation to meditation and has references in numerous traditional Indian literary works, such as the Bhagwadgeeta and Patanjali's Yoga Sutras. According to some recent studies, RM boosts happiness through neuroplasticity. In one study, RM practice was linked to a significant increase in grey matter volume in brain regions related to emotion regulation, happiness, and reward centers, the right insular cortex, and the left inferior orbitofrontal cortex. [17] It is widely established that the etiology of panic disorder is directly related to both the orbitofrontal cortex and the insular cortex. [18] The neural plasticity brought on by the guided meditation practice of the intervention-group members may have contributed to the short-term meditation intervention's success in reducing anxiety symptoms and the severity of the condition in the current study.
According to a study by Kiran et al., [19] patients who practiced Rajyoga also had elevated blood cortisol levels on the 2nd postoperative day, but the increase was markedly less than in the control group. On the 5th postoperative day, the Rajyoga group's cortisol levels generally returned to the normal range as the stress resolved. In addition, Creswell et al. discovered that practicing mindfulness meditation for a brief amount of time changes how the brain and body react to stress. [10] Similarly, Turakitwanakan et al. have found that practicing mindfulness meditation lowers blood cortisol levels, which suggests that it can reduce stress. [20] Another study found that, in diabetic patients, meditation decreased glycated hemoglobin and cortisol levels. [21]
Conclusion
When used in conjunction with pharmaceutical treatments for panic disorder, RM is an effective adjuvant therapy. The key factors are adherence and motivation while being supervised by a licensed therapist. The results of the current study indicate that RM techniques can improve sleep quality and lower clinical panic AD scores. We conclude that, irrespective of the quality of meditation, mere long-term meditation practice would help alleviate anxiety and panic disorder symptoms.
Figure 1: Bar graph showing the diagnoses of the study groups
Table 2: Sociodemographic variables among study groups
# Chi-square test. SD: Standard deviation, SES: Socioeconomic status
"year": 2023,
"sha1": "c5fb47f43342f5a0a14bf3083127cc6bf97c292b",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fe846655f949f6d3bab6ae394569bd53347f905c",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Identification and validation of sRNAs in Edwardsiella tarda S08
Bacterial small non-coding RNAs (sRNAs) are known as novel regulators involved in virulence, stress response, and other processes. Recently, many new studies have highlighted the critical roles of sRNAs in the fine-tuning of gene regulation in both prokaryotes and eukaryotes. Edwardsiella tarda (E. tarda) is a gram-negative, intracellular pathogen that causes edwardsiellosis in fish. Thus far, no sRNA has been reported in E. tarda. The present study represents the first attempt to identify sRNAs in E. tarda S08. Ten sRNAs were validated by RNA sequencing and quantitative PCR (qPCR). ET_sRNA_1 and ET_sRNA_2 were homologous to tmRNA and GcvB, respectively. However, the other candidate sRNAs have not been reported until now. The cellular abundance of the 10 validated sRNAs was measured by qPCR at different growth phases to monitor their biosynthesis. Nine candidate sRNAs were expressed in the late stage of exponential growth and the stationary stage of growth (36~60 h), and their expression was growth phase-dependent. In contrast, ET_sRNA_10 was expressed at almost all times and reached its highest peak at 48 h. Targets were predicted by TargetRNA2, and the targets of each sRNA include genes that directly or indirectly relate to virulence. These results preliminarily show that sRNAs probably play a regulatory role in the virulence of E. tarda.
But the fundamental pathogenic mechanism of E. tarda still remains to be discovered. In recent years, significant experimental and theoretical evidence has suggested that small noncoding RNAs (sRNAs) can coordinate virulence gene regulation and pathogen survival during infection of the host [14-17]. At the same time, sRNAs are crucial players in regulatory cascades, coordinating the expression of virulence genes in response to environmental or other changes [16,17]. They are able to adapt the expression of virulence genes to stress and metabolic requirements [17]. These sRNAs act either directly on virulence genes and/or on regulators of virulence genes [16].
While sRNAs have been well known for some time and some examples have been confirmed in Escherichia coli and other pathogenic bacteria [18-22], our knowledge of the networks involving sRNAs and controlling pathogenesis in E. tarda is still in its infancy. Here, we systematically identify sRNAs in the E. tarda genome by RNA sequencing and bioinformatic prediction for the first time. Then, the cellular abundance of the validated sRNAs was measured by quantitative PCR (qPCR) at different growth phases to monitor their biosynthesis. In addition, the potential targets of the sRNAs were predicted by bioinformatic analysis. Our results will provide insight into the virulence regulation of E. tarda and pave the way for eradicating edwardsiellosis.
Materials and methods
Ethics statement

E. tarda S08 (Accession no. KX279865) was isolated from diseased turbot. Disease outbreaks occurred on some marine turbot farms in Qingdao, China. The farm owners asked us to determine the causative agents of these outbreaks and assess potential therapies for the treatment of these infections, so they provided a large number of diseased turbot for the study. The experiment described here was carried out in strict accordance with the approval of the Animal Care and Use Committee of the Institute of Oceanology, Chinese Academy of Sciences.
Bacterial strains and growth conditions

E. tarda S08, isolated from diseased turbot, was used for most experiments. The strain was routinely cultured in Tryptic Soy Broth (TSB, Difco) or on TSA medium supplemented with an additional 1% NaCl at 28˚C, 180 rpm. Colistin was added at a final concentration of 12.5 μg/mL when necessary. Growth in TSB was determined by spectrophotometric readings (OD540 nm) at 2 h intervals. The growth curve was then plotted as optical density against time (2 h, 4 h, ..., 72 h), while cultures from a series of time points at 6 h intervals were collected for the subsequent experiments. All samples were run in triplicate.
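As an aside, the specific growth rate implied by such OD540 readings can be estimated from the slope of ln(OD) over time during the exponential phase. The sketch below uses illustrative OD values (a culture doubling every 2 h), not the measured data from this study.

```python
# Minimal sketch (not the study's analysis): estimating the specific growth
# rate mu from OD540 readings at 2 h intervals via the least-squares slope
# of ln(OD) vs. time. OD values below are illustrative only.
import math

times = [2, 4, 6, 8, 10, 12]                   # h, within exponential phase
od540 = [0.05, 0.10, 0.20, 0.40, 0.80, 1.60]   # spectrophotometric readings

n = len(times)
t_mean = sum(times) / n
log_od = [math.log(v) for v in od540]
y_mean = sum(log_od) / n

# Slope of ln(OD) vs. time = specific growth rate mu (per hour).
mu = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, log_od))
      / sum((t - t_mean) ** 2 for t in times))
doubling_time = math.log(2) / mu               # h
```

For the toy readings above, which double every 2 h, the fitted doubling time is 2 h, as expected.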
In silico prediction of sRNAs
The genome sequences of E. tarda S08 (data unpublished) and E. tarda EIB202 (Accession no. CP002154.1) were chosen for in silico prediction. The computational methods applied for the prediction of sRNAs included sRNAscanner and sRNAPredict3. sRNAPredict3 identifies sRNAs based on intergenic conservation and Rho-independent terminators in closely related bacterial genomes. sRNAscanner computes the locations of intergenic signals using the Positional Weight Matrix (PWM) strategy to search for intergenic sRNAs. All parameters were set to the default analytical criteria for both methods.
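The PWM idea behind sRNAscanner can be sketched as follows: score every window of a sequence against a position-specific probability matrix (as a log-odds ratio versus background base frequencies) and report the best-scoring position. The matrix below is a toy -10-box-like motif, not one of sRNAscanner's trained matrices.

```python
# Toy PWM scoring sketch; probabilities and background are illustrative.
import math

BACKGROUND = 0.25  # uniform base frequencies assumed for the sketch

# Position-specific base probabilities for a 6-nt "TATAAT"-like motif.
pwm = [
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},  # T
    {"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05},  # A
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},  # T
    {"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05},  # A
    {"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05},  # A
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},  # T
]

def score(window):
    """Log-odds score of a 6-nt window against the PWM."""
    return sum(math.log2(col[b] / BACKGROUND) for col, b in zip(pwm, window))

def best_hit(seq):
    """Highest-scoring window and its 0-based position in a sequence."""
    hits = [(score(seq[i:i + 6]), i) for i in range(len(seq) - 5)]
    return max(hits)

s, pos = best_hit("GGCGTATAATGCGC")  # perfect TATAAT match starts at index 4
```

sRNAscanner applies the same scheme with trained matrices for promoter, transcription-factor, and terminator signals, then combines the hits to call candidate sRNA loci.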
sRNA extraction and RNA sequencing

E. tarda S08 was grown in TSB medium at 28˚C and harvested by centrifugation (6,000×g for 5 min) at the series of time points. All samples from the different time points were then mixed together in equal volumes. sRNAs were isolated from the cell pellets with a bacterial small RNA isolation kit (OMEGA, USA). All RNA was treated with RNase-free DNase I, and a library was constructed for the Illumina HiSeq 2000 platform with a library construction kit following the manufacturer's protocol.
Promoter prediction and in silico validation of predicted sRNAs
The program BPROM was used to predict the promoters of the bacterial sRNAs. Promoter prediction was conducted by searching 200 bp upstream of each sRNA start site. The RNAfold program was used to carry out secondary structure prediction based on the lowest folding energy. The sRNAs were blasted against the Rfam database to assess their novelty.
Quantitative PCR assays
Total RNA was extracted using Trizol reagent (Life Technologies, USA) and then reverse transcribed using oligo(dT) and random mixed primers (ToYoBo, Japan) according to the manufacturer's protocol. Quantitative PCR was performed to validate the reliability of the predicted sRNAs and to check the expression abundance of the validated sRNAs at the different growth phases. The qPCR primer pairs for the 10 candidate sRNAs were designed using Primer Premier 6.0. The 16S ribosomal RNA gene was used as the internal control for normalization of gene expression. Quantitative PCR was run on a Bio-Rad CFX (USA) with an initial denaturation of 3 min at 95˚C and a subsequent run of 40 cycles, each comprising 10 s at 95˚C and 10 s at 62˚C, and a melt curve analysis was performed to assess primer specificity. The samples were run in triplicate. The 2^(-ΔΔCq) method (relative quantification) was used, in which the Cq value (threshold cycle) was normalized to the endogenous reference gene 16S (ΔCq = Cq_target - Cq_reference) [23]. Using Student's t test, data were considered statistically significant when p < 0.05.
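The 2^(-ΔΔCq) calculation of [23] can be written out in a few lines; the Cq values in the example are made up, not measurements from this study.

```python
# Sketch of the 2^(-ΔΔCq) relative-quantification calculation [23].
def relative_expression(cq_target_sample, cq_ref_sample,
                        cq_target_calibrator, cq_ref_calibrator):
    """Fold change of a target gene, normalized to a reference gene
    (here 16S rRNA) and expressed relative to a calibrator condition."""
    dcq_sample = cq_target_sample - cq_ref_sample              # ΔCq, sample
    dcq_calibrator = cq_target_calibrator - cq_ref_calibrator  # ΔCq, calibrator
    ddcq = dcq_sample - dcq_calibrator                         # ΔΔCq
    return 2.0 ** (-ddcq)

# Example: the target crosses threshold 2 cycles earlier (relative to 16S)
# in the sample than in the calibrator, i.e. a 4-fold up-regulation.
fold = relative_expression(20.0, 15.0, 22.0, 15.0)
```

This assumes roughly 100% amplification efficiency for both target and reference, which is the standard premise of the ΔΔCq method.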
Target prediction of validated sRNAs
The web-based program TargetRNA2 was used to predict the target genes for each validated sRNA. TargetRNA2 considers each mRNA in the replicon as a possible target of the sRNA; 80 bp before the start codon and 20 bp after the start codon were searched. After searching all mRNAs in the specified replicon for interactions with the sRNA, TargetRNA2 outputs a list of likely regulatory targets ranked by p-value.
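The search window used here (80 nt before to 20 nt after the start codon) is easy to reproduce when inspecting candidate interactions; the sketch below uses a toy sequence and forward-strand coordinates, not the E. tarda genome.

```python
# Sketch: extracting the sequence window that TargetRNA2 searches for
# sRNA-mRNA interactions (80 nt upstream of the start codon to 20 nt
# downstream). Genome string and coordinates below are illustrative only.
def target_window(genome, start_codon_pos, upstream=80, downstream=20):
    """Return the candidate interaction region around a start codon
    (0-based position of the 'A' of ATG on the forward strand)."""
    begin = max(0, start_codon_pos - upstream)
    end = min(len(genome), start_codon_pos + downstream)
    return genome[begin:end]

genome = "A" * 100 + "ATG" + "C" * 40  # toy sequence with ATG at index 100
window = target_window(genome, 100)
```

A real application would also handle reverse-strand genes by reverse-complementing the extracted window.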
Results
Bacterial growth condition of E. tarda S08

E. tarda S08 was cultured in TSB medium at 28˚C, 180 rpm. The OD 540nm value was monitored at 2 h intervals and the growth curve was plotted (Fig 1). The strain was shown to enter the post-exponential phase after 24 h and the stationary phase after 40 h; it entered the decline phase after 60 h.
Bioinformatic prediction of sRNAs and RNA sequencing
Two computational methods were used to predict the sRNAs, and the comparative results are provided in Table 1. After aligning the results, a total of 10 sRNA candidates (>100 bp in length) were predicted. The genomic locations and orientations of the sRNAs were also analyzed. Table 2 provides a detailed description of the candidate sRNAs.
Promoter and secondary structure analysis of candidate sRNAs
The web-based program BPROM was used to perform the promoter analysis. By searching 200 bp upstream of each candidate sRNA start site, -10 and -35 promoter boxes and corresponding TF binding sites were successfully found for all 10 candidate sRNAs. The average distances of the -10 box and -35 box were 53 and 76 bp upstream of the candidate sRNAs, respectively. Secondary structure analyses were carried out using the RNAfold program and are depicted in Fig 2. Next, the 10 candidate sRNAs were blasted against the Rfam database to assess their novelty. Two of the 10 candidate sRNAs, named ET_sRNA_1 and ET_sRNA_2 (homologous to tmRNA and GcvB, respectively), showed homology in Rfam, while the other candidate sRNAs were identified here for the first time. The sequences of the 10 sRNA genes were analyzed for terminator prediction; Rho-independent terminators were predicted at the 3' end using ARNold (S1 File).
Experimental validation by qPCR assays under different growth phases
Further experimental validation was performed for the 10 candidate sRNAs. The qPCR primer sequences used for the sRNA genes are listed in Table 3. Total RNA was extracted at the different time points and reverse transcribed, and the cDNAs were used as templates for qPCR to assess the expression of the candidate sRNAs. ET_sRNA_10 was expressed at almost all times and reached its highest peak at 48 h (Fig 3). However, the other nine sRNAs were expressed in the late stage of exponential growth and the stationary stage of growth (36~60 h), and their transcript levels reached the highest point at the final phase of stationary growth (60 h) (S1-S9 Figs). This showed that the expression of these nine sRNAs was growth phase-dependent.
Target prediction of validated sRNAs
Accurate prediction of sRNA targets plays an important role in studying sRNA function. The targets of the 10 sRNAs were predicted by TargetRNA2 (S2 File), which outputs a list of likely regulatory targets ranked by p-value (p ≤ 0.05). A total of 385 potential targets were identified. We parsed the predicted mRNA targets based on their respective protein functions (Table 4) [24]. Our results demonstrated that the majority of known targets of the sRNAs were involved in metabolism (114), virulence (59), and transport (35). However, a large number of target genes were categorized as 'other' (49) or 'hypothetical proteins' (115) (Table 4). The targets of each sRNA include a number of genes that directly or indirectly relate to virulence. This result preliminarily shows that sRNAs probably play regulatory roles in virulence. This is currently being verified by experiments.
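The tally above can be reproduced from the reported per-category counts; note that the five categories named in the text account for 372 of the 385 targets, so the "remaining categories" entry below is an assumed remainder covering the other Table 4 categories, not a number stated in the paper.

```python
# Sketch of the Table 4 tally: counting predicted targets per functional
# category and computing percentages. The first five counts are those
# reported in the text; "remaining categories" is an assumed remainder.
counts = {
    "metabolism": 114,
    "virulence": 59,
    "transport": 35,
    "other": 49,
    "hypothetical protein": 115,
    "remaining categories": 13,  # assumption so totals sum to 385
}
total = sum(counts.values())
pct_hypothetical = 100.0 * counts["hypothetical protein"] / total
```

This recovers the 29.87% 'hypothetical protein' fraction quoted in the Discussion (115/385).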
Discussion
E. tarda is associated with edwardsiellosis in cultured fish, resulting in heavy losses in aquaculture. The pathogenesis of E. tarda has been studied for a long time and some virulence factors have been identified. However, the fundamental pathogenic mechanism of E. tarda still remains to be discovered. More and more evidence shows that the use of sRNAs is among the strategies developed by bacteria to fine-tune gene expression. They are involved in many biological processes, regulating iron homeostasis [25-27], the expression of outer membrane proteins [28,29], quorum sensing [30,31], and bacterial virulence [16,17] through binding to their target mRNAs or proteins. This research is the first to report the existence of small RNAs within the genome of E. tarda. In principle, four major computational approaches are applied for the prediction of sRNA locations from bacterial genome sequences: (1) secondary structure and thermodynamic stability, (2) comparative genomics, (3) 'orphan' transcriptional signals, and (4) ab initio methods regardless of sequence or structure similarity [32]. Transcriptional signal-based sRNA prediction tools include sRNAPredict [33], sRNAscanner [34], and sRNAfinder [35]. sRNAPredict depends on promoter signals, transcription factor binding sites, Rho-independent terminator signals predicted by TRANSTERMHP [36], and BLAST [37] outputs as predictive features of sRNAs. sRNAPredict3 is the recent version of the sRNAPredict suite and is used for the efficient prediction of sRNAs with a high level of specificity. Some researchers found that sRNAPredict provided the best performance by comprehensively considering multiple factors [38]. The main advantage of sRNAscanner is that it uses its own algorithm and a training PWM dataset to calculate the genomic locations of promoter, transcription factor, and terminator signals.
Moreover, the sensitivity and specificity profile of sRNAscanner was first evaluated through Receiver Operating Characteristic (ROC) curves, which confirmed its satisfactory performance [32]. In this research, we chose transcriptional signal-based sRNA prediction tools (sRNAPredict3 and sRNAscanner) for in silico prediction.
Most of these tools are used to locate putative genomic sRNA locations, followed by experimental validation of those transcripts. Ten sRNAs were then validated by RNA sequencing and qPCR, of which 8 were novel. The other two sRNAs, ET_sRNA_1 and ET_sRNA_2, were homologous to tmRNA and GcvB, respectively. tmRNA (also known as 10Sa RNA or SsrA RNA) is a unique bi-functional RNA that acts as both a tRNA and an mRNA to enter stalled ribosomes and direct the addition of a peptide tag to the C terminus of nascent polypeptides. tmRNA is widely distributed among eubacteria and has also been found in some chloroplasts [39]. The sRNA GcvB was first described in E. coli as being transcribed from a promoter that is divergent from that of gcvA, which encodes a transcriptional regulator of the glycine-cleavage-system operon [40-43].
What's more, the cellular abundance of the 10 validated sRNAs was measured by qPCR at different growth phases to monitor their biosynthesis. ET_sRNA_10 was expressed at nearly all time points and peaked at 48 h, which suggests that ET_sRNA_10 is probably a house-keeping sRNA. The expression of the other nine sRNAs, however, was growth phase-dependent: they were expressed in the late exponential and stationary phases of growth. It has been reported that the expression of some sRNAs in gram-positive and gram-negative pathogens is growth phase-dependent. The expression of 11 candidate sRNAs was characterized in Staphylococcus aureus strains under different experimental conditions, many of which accumulated in the late-exponential phase of growth [44]. The characteristics of 11 sRNAs were studied in Enterococcus faecalis V583; six were specifically expressed at exponential phase, two were observed at stationary phase, and three were detected during both phases [45]. The expression of twenty-four sRNAs was also phase- and media-dependent in Streptococcus pyogenes M49 [46]. In Clostridium difficile, the expression of six sRNAs was growth phase-dependent: three (RCd4, RCd5 and SQ1002) were induced at the onset of stationary phase, whereas the other three (RCd2, RCd6 and SQ1498) were highly expressed during exponential phase and decreased at the onset of stationary phase [47]. Among the twelve non-coding RNAs found in Listeria monocytogenes, two were expressed in a growth-dependent manner [48]. In Brucella melitensis, three validated sRNAs were significantly induced in the stationary phase [49]. In this research, nine sRNAs showed a growth phase-dependent expression profile. In addition, it has been reported that the expression of some virulence determinants and associated factors in E. tarda is also growth phase-dependent [50][51][52]. We therefore speculate that some of the growth phase-regulated E. tarda sRNAs may be involved in this control, as previously observed in some gram-positive and gram-negative bacteria [53][54][55].
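The growth-phase expression profiles above rest on qPCR measurements; relative transcript abundance is conventionally computed with the 2^-ΔΔCt (Livak) method. A minimal illustrative sketch follows; the Ct values below are invented for illustration, not the study's data:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-DDCt relative expression (Livak method).

    ct_target / ct_ref: Ct of the sRNA and a reference gene in the sample;
    *_cal: the same Cts in the calibrator condition
    (e.g. early-exponential phase). All values here are hypothetical.
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_cal = ct_target_cal - ct_ref_cal
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Illustrative values only: an sRNA induced at stationary phase.
fold = relative_expression(ct_target=22.0, ct_ref=18.0,
                           ct_target_cal=26.0, ct_ref_cal=18.0)
# DDCt = (22-18) - (26-18) = -4  ->  2^4 = 16-fold induction
```

A fold change above 1 indicates induction relative to the calibrator phase; the method assumes roughly equal amplification efficiency of target and reference genes.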
Despite the abundance of sRNAs in all bacterial lineages, little is known about their function and mechanism of action within bacterial genomes, and only a few sRNAs have been assigned functions to date [56]. Using TargetRNA2, we predicted the target mRNAs of the 10 sRNAs.
Functional categorization of the target genes regulated by the sRNAs identified genes involved in key pathways of cell division, cell wall biogenesis, transport, virulence, the type III secretion system, the type VI secretion system, ribosomal proteins, and metabolism. A majority of these pathways are critical for the growth and survival of E. tarda in the host cytoplasm. A significant fraction (29.87%) of predicted target genes were categorized as 'hypothetical protein', which is not surprising considering that nearly 30.89% of E. tarda EIB202 genes are still annotated as hypothetical proteins.
This work is currently being verified experimentally. The mutant strains E. tarda S08ΔSsrA, E. tarda S08ΔGcv and E. tarda S08ΔET_sRNA_10 have been constructed; the next step is to verify the in vivo regulatory functions of these sRNAs. Once their role in virulence regulation is further confirmed, the unique nature of sRNAs may be exploited for the development of novel diagnostic tools and therapeutic interventions [57].
Conclusion
This report presents the first study of small non-coding RNAs in E. tarda. Ten sRNAs were validated by RNA sequencing and qPCR. ET_sRNA_1 and ET_sRNA_2 were homologous to tmRNA and GcvB, respectively, whereas the other candidate sRNAs have not been reported before. ET_sRNA_10 was expressed at nearly all time points and peaked at 48 h. The other nine sRNAs were expressed in the late exponential and stationary phases of growth (36~60 h), showing that their expression is growth phase-dependent; they probably play regulatory roles during these biological processes. The targets of the 10 sRNAs were also predicted by TargetRNA2, and each sRNA's targets contain genes that relate directly or indirectly to virulence. These results preliminarily suggest that sRNAs play a regulatory role in the virulence of E. tarda. This work is being verified by experiments.
Supporting information
S1 File. Sequence analysis of novel sRNAs. The regions in yellow and green show the start (5') and stop (3') codons, respectively. The 5' start and 3' end sites are as predicted by SIPHT/sRNAPredict3. The region in red shows Rho-independent terminators. The qPCR primer sites are shown in blue.
"year": 2017,
"sha1": "c1c7bbfc79983cafe8aeab0d02eb073f47f6c969",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0172783&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1c7bbfc79983cafe8aeab0d02eb073f47f6c969",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
THE MANAGEMENT MODEL OF BUDDHIST ORGANIZATIONS IN THAI SOCIETY.
The purposes of this research were 1) to study the management of Buddhist organizations in Thai society, and 2) to present the management model of Buddhist organizations in Thai society. The researcher applied quantitative research methods in conducting this research. The research results show that most respondents agreed with the management of Buddhist organizations in Thai society at the highest level. In particular, International Organizational Management had the highest mean score, followed by Public Welfare, Non-Profit Organization, and Good Governance, respectively. The hypothesis testing results show that the components of management of Buddhist organizations in Thai society were correlated with statistical significance at the .01 level. With regard to the statistics used to evaluate the concordance of the model after the modification indices, the results show that the Chi-square probability level was .057, CMIN/DF was 1.111, GFI was .948, and RMSEA was .015. This means the management model of Buddhist organizations in Thai society was concordant with the empirical data.
Introduction:-
The condition of Thai society has shifted toward consumerism. This change drives competition in a materialistic society. Business organizations seize opportunities to generate profits for their survival without concern for stakeholders. This influence also affects the management of non-profit organizations, such as Buddhist organizations.
The Buddhist organization is one of the social institutions encountering these thinking dynamics, with questions about its status, role, and appropriate education management, including challenges to thought on discipline and power relations in the culture of Buddhist organizations (PhraBrahmapundit (PrayoonDhammacitto), 2015). In the Buddha's era, the management of Buddhist organizations relied mainly on the Dhamma discipline. In Thailand, the management of Buddhist organizations has changed along with the forms of governance. The benefits of the present management of Buddhist organizations are unity and closeness with government, whereas the disadvantages are disorder contrary to the Dhamma discipline and the loss of people's faith (Preechapermprasit, 2015). Besides, people do not go to the temple; one of the reasons is the problems in the management of Buddhist organizations presented by many kinds of media.
Nowadays, the features and role of the temple have changed dramatically from the past. First, the temple has become a residence for laymen instead of monks and novices. Second, the temple, which used to be a place of ordination for local people, has become a place of ordination for people from other places, whose background even the preceptor does not know. Third, the temple, which used to be a place of charity for local villagers according to the Theravada tradition, has changed to resemble a Mahayana Buddhist monk's house. Fourth, the temple, which used to be tidy, clean, cool, and pleasant, has become disorderly and dirty, so people lose faith and do not want to go to the temple. Fifth, the temple, which used to be the center of desirable social activities, has become a source of amulets; some monks do not behave well or even violate the Buddha's teaching and destroy the people's faith. Some temples have applied a business approach, attracting people to donate money in exchange for material returns; some have also built more sanctuary buildings than necessary (PhraAkanitSiripanyo (Artwichai), 2011).
Moreover, the management structure of temples and the administrative organization of Thai Buddhist monks are centralized. There is no audit organization, and the work is not continuous. This leads to a lack of effectiveness in managing the activities of Buddhist monks.
In addition, public welfare lacks a management relationship that creates sufficient benefit for every party. The present style of temple management has not developed; temple management strategies are not concrete; and temple management problems are not solved appropriately in the Buddhist way (PhrakruVisuddhaNundhakun (SurasukVisuddhajaro), 2014).
From the above, these problems are not being solved correctly according to proper temple management principles. This might become a major cause of damage to, or the decay of, Buddhism in Thailand in the future (PhrakruVisuddhaNundhakun (SurasukVisuddhajaro), 2014). Therefore, the researcher realized the importance of these problems and conducted research on the management model of Buddhist organizations in Thai society that perform public welfare missions without seeking profit and apply Buddhist good governance with systematic, standardized international organizational management, in order to increase faith in Buddhist organizations and the happiness of people in Thai society.
Literature Review:-
Concept about public welfare:-
Public welfare means organizing activities for the public benefit by agencies, groups of individuals, or individuals that operate for the public interest; a place that is the public property of the general people; or the public welfare that monks or temples operate nowadays. The objective is the public interest (Department of Religious Affairs, 1985; PhrathepPariyatsuthi (WorawitKhongkhapanyo), 2002). The methods of operating public assistance and public welfare in accordance with the regulations of The Sangha Supreme Council of Thailand can be categorized by style as follows (Department of Religious Affairs, 1985): 1) operating a business to help and support, 2) supporting the activities or business of others which are for the public benefit, 3) supporting places which are public property, 4) supporting the general people, and 5) housing assistance for the temple for further benefits.
The importance of public welfare is as follows: 1) improving the quality of life and well-being of people, 2) developing and promoting morality in life, 3) providing quality mental development, 4) developing the intelligence to solve various problems and obstacles, 5) coordinating activities that lead to a better life, 6) enabling people to solve problems and obstacles that occur, 7) enabling society to live together happily, 8) being considered a source of merit, and 9) creating good deeds. Assistance is extended to all people; the Buddha had His own teaching styles, or forms of public welfare, which can be divided into 4 forms (PhrakhruPariyatThammawong, 2015): 1) discussion form, 2) lecturer form, 3) answerer form, and 4) regulatory form. The teaching techniques, or public welfare methods, of the Buddha are (PhrakhruPariyatThammawong, 2015): 1) teaching from things that are easy to understand to things that are difficult to understand, 2) teaching content by gradually increasing its complexity, 3) teaching through real situations or direct experience, 4) teaching directly on the subject matter, 5) teaching with reasons so that learners can think and realize by themselves, and 6) teaching only the necessary things sufficient for the learner to understand the topic.
Concept about International Organizational Management:-
Management means the process of working step by step, or the group of activities consisting of planning, organizing, leading/directing, and controlling. These relate directly to organizational resources in order to generate benefit and to achieve the organization's goals effectively and efficiently. Activities of effective management include decision making, strategic management, human resource management, group management, and management in an international environment (Schermerhorn, 1999; Serirat et al., 2002; Kaewchamnong, 2011; Drucker, 2006). In addition, Bartol and Martin (1997) suggested that the process of management consists of 4 important processes (planning, organizing, leading, and controlling) and that these processes change continually with the organizational environment; therefore, the executive must always be ready to handle change. Fayol (1949) presented 14 management principles and emphasized that they are flexible and can always be adapted to demand. These 14 principles consist of division of work, authority and responsibility, discipline, unity of command, unity of direction, subordination of individual interests to general interests, remuneration, centralization, scalar chain, order, equity, stability of tenure of personnel, initiative, and esprit de corps. Moreover, Fayol also presented 5 elements of management, the so-called 'POCCC' (Sheldrake, 1996): 1) planning, 2) organizing, 3) command, 4) co-ordination, and 5) control. Gulick and Urwick (1937) presented a 7-step management process, 'POSDCoRB', which consists of planning, organizing, staffing, directing, coordinating, reporting, and budgeting.
Concept about non-profit organization:-
A non-profit organization is an organization that can be founded by anyone, or any group of people, who share the same attitude. Its objective must be to provide service to the community without expecting a return. This kind of organization is a hybrid of the business organization and the government organization, in that its work is flexible under the management of a committee of volunteers or professional executives. The non-profit organization is an independent 'third sector', meaning it is not under the governance of the government (Lohmann, 2007; McNamara, n.d.; Luckert, n.d.; Sornmanee, n.d.). According to Supajakwattana (2011), besides not seeking profit, the non-profit organization should rely on the basics of organizational management, organizational design, and performance assessment, including the qualifications of executives, who must work for others without expecting any return or profit; above all, the most important qualities are honesty and responsibility at the highest level. Moreover, the operation of a non-profit organization must be as transparent as the governance of a government organization. Regarding its roles, the non-profit organization should take part in supporting projects for the disadvantaged who are not treated equally by the public or private sector, in developing social safety nets, and in creating a system of social insurance for poor households. Chansom (2012) said that the financial management of non-profit and for-profit organizations is not completely different; the slight differences concern the form of investment assets, sources of funds, and sources of income and expenses, especially the setting of targets in financial management. Therefore, setting financial management targets in accordance with the organization's objective is the main issue of management.
Normally, the non-profit organization must prepare a financial report; its financial target is to achieve the objective of the organization's foundation. Regarding the temple, as one kind of non-profit organization, its purpose is to be a religious place, the spiritual center of Buddhists, a place for religious activities, and a residence for monks. Although the temple does not work for profit, it must have an appropriate direction for financial management, following the financial management principles of non-profit organizations, so that its operation is transparent and can be verified.
Concept about good governance:-
Good governance means the good way to govern the country and society, which is the important direction for organizing the state, the private sector, and the public sector to live together peacefully. International good governance emphasizes norms that set the structure, processes, and relationships among the public sector, private sector, and people's sector in managing the economy, politics, and society of the country. This must be based on accuracy and righteousness, and must prioritize the participation of the people. This will lead to sustainable development. In the society of monks that the Buddha established, He set up a system of rules and regulations for the coexistence of human beings, based on the principles of truth in nature, as a basis for all human beings to access the system of dhamma, which is called 'discipline'. Social development toward social governance, which is one level of today's good governance, consists of 2 main parts: ideal dhamma, which is the principle or goal, and discipline, which is systematization.
Research Methodology:-
The quantitative research method was applied in conducting this research. Secondary data were retrieved by reviewing concepts and theories from documents and related research. Primary data were collected using a questionnaire as the research tool, divided into 3 parts. Part 1 concerns the demographic profile of the respondent. Part 2 consists of questions about the management style of Buddhist organizations in Thai society. Part 3 collects recommendations on the management style of Buddhist organizations in Thai society. The research population is the 5,682,415 people registered in Bangkok (National Statistical Office, 2017). The sample size was determined using Taro Yamane's table, giving 400 samples (Prutipinyo, 2010). The sampling method was non-probability, accidental sampling of people willing to answer the questionnaire; 350 questionnaires were returned. The statistics used in analyzing the data were frequency, percentage, mean, standard deviation, Pearson correlation, and path analysis.
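The sample size above follows Taro Yamane's formula, n = N / (1 + N e²), which underlies the table the paper cites. A quick check with the stated population; the 5% margin of error is an assumption, since the paper references only the table:

```python
import math

def yamane_sample_size(population: int, error: float = 0.05) -> int:
    """Taro Yamane's sample-size formula: n = N / (1 + N * e^2)."""
    return math.ceil(population / (1 + population * error ** 2))

# Registered population of Bangkok as stated in the study.
n = yamane_sample_size(5_682_415, error=0.05)
print(n)  # 400, matching the sample size reported in the paper
```

For very large populations the formula converges to 1/e², which is why 400 is the familiar sample size at a 5% margin of error.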
Results:-
The analysis of demographic profiles of respondents:-
From 350 respondents, most were female, followed by male. Most respondents were aged 31-40 years, followed by those aged under 30 years, 41-50 years, and 51-60 years, respectively. All respondents regularly participated in the activities of Buddhist organizations. Most worked as employees/personnel of private organizations, followed by business owners and officers of government or state enterprises, respectively. The highest correlation was between Non-Profit Organization (NPO) and International Organizational Management (IOM), at .84; the lowest was between Good Governance (GOG) and Public Welfare (PUW), at .39 (see table 2).
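The component correlations reported here, such as the .84 between NPO and IOM, are Pearson product-moment coefficients. A self-contained sketch on made-up score vectors; the data are illustrative, not the survey responses:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (invented) component scores for a handful of respondents.
npo = [4.2, 4.5, 3.9, 4.8, 4.1]
iom = [4.0, 4.6, 3.8, 4.7, 4.2]
r = pearson_r(npo, iom)  # close to 1: the two components co-vary strongly
```

In the study itself, such coefficients would be computed over all 350 respondents' component scores and then tested for significance at the .01 level.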
The analysis of components of the management style of Buddhist organizations in Thai society
With regard to the statistics used to evaluate the concordance of the model after the modification indices, the results show that the Chi-square probability level was .057, CMIN/DF was 1.111, GFI was .948, and RMSEA was .015. All 4 statistics passed the evaluation (see table 3).
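These fit indices can be checked against conventional cut-offs (p > .05, CMIN/DF < 2, GFI > .90, RMSEA < .05). The thresholds below are widely used rules of thumb, an assumption on our part rather than criteria quoted from the study:

```python
def model_fit_ok(p_value, cmin_df, gfi, rmsea):
    """Evaluate SEM goodness-of-fit against conventional cut-offs.

    Thresholds are common rules of thumb (an assumption, not
    criteria stated in the study itself).
    """
    return (p_value > 0.05 and cmin_df < 2.0
            and gfi > 0.90 and rmsea < 0.05)

# Values reported for the modified model in the study.
print(model_fit_ok(p_value=0.057, cmin_df=1.111, gfi=0.948, rmsea=0.015))
# True: all four indices pass
```

Stricter or looser cut-offs appear in the SEM literature; the point is only that the reported values clear the usual bars.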
Discussion:-
The research results show that, for the management model of Buddhist organizations in Thai society in the aspect of Public Welfare (PUW), the component with the highest mean score was providing assistance to sufferers as much as one can. This is relevant to the study of Inpech (2010), on the roles played by Buddhist monks in social development (a case study of PhrathepSakornmuni (KaewSuvanajoto)): public welfare is the operation of an enterprise for the public benefit, or of places that are the public property of the general people. Where villagers are not ready to operate by themselves, the monk, as community leader, should take the role of giving initial suggestions about what should be done in the community for the common benefit; appropriateness and circumstance must be recognized in operating public welfare. It is also relevant to the teaching methods, or public welfare operation, of the Buddha: according to PhrakhruPariyatThammawong (2015), the Buddha had His methods to declare, tell, and explain so that learners understand and realize by themselves and can help themselves survive physical and mental suffering; one of His methods is to teach only the necessary things sufficient for the learner to understand the topic. In the aspect of International Organizational Management (IOM), the component with the highest mean score was being an organization at the center of faith that can motivate people to practice goodness, followed by defining power, duty, and responsibility along the command line explicitly. This is relevant to the process of organizing, which comprises leading, motivating, and persuading organization members to achieve the mission (Bartol and Martin, 1997; Naveekan, 2001).
This is also relevant to the management principle of centralization (Fayol, 1949), and to the study of PhrakhruPariyatKitjapiwat (2015) on leadership and performance management of public welfare: leadership and efficient management of public work in the temple require a system and a good organizational style, consisting of appropriate personnel management, a working system, and other supportive factors. The organizing system should be efficient, from the planning system, the work structure system, personnel management, and assignment, to the capability to control the work. The researcher also suggests that the results in this aspect answer past findings that the temple was disorderly and had become a source of amulets, that some monks did not behave well, and that people therefore lost faith and did not want to go to the temple. In the aspect of Non-Profit Organization (NPO), the component with the highest mean score was operating public welfare without seeking profit, followed by managing the budget in a transparent and verifiable manner. This is relevant to the concept of Chansom (2012) that the non-profit organization does not focus on generating profit; its financial goal is to achieve the objective of the organization's foundation. Regarding the temple, as one kind of non-profit organization, its purpose is to be a religious place, the spiritual center of Buddhists, a place for religious activities, and a residence for monks.
Although the temple does not work for profit, it must have appropriate directions for financial management, following the financial management principles of non-profit organizations, so that its operation is transparent and can be verified. According to Supajakwattana (2011), besides not seeking profit, the non-profit organization should rely on the basics of organizational management, organizational design, and performance assessment, including the qualifications of executives, who must work for others without expecting any return or profit; above all, the most important qualities are honesty and responsibility at the highest level. Moreover, the operation of a non-profit organization must be as transparent as the governance of a government organization. People must always be able to verify the provision of assistance to the disadvantaged and the budget management, in order to reduce problems of money laundering and corruption. The researcher thought that this reveals an important point of conflict in the organization regarding non-profit operation: in contrast, nowadays most temples do not keep accounting records and financial reports correctly.
In the aspect of Good Governance (GOG), the component with the highest mean score was treating people with a gentle manner, followed by performing the mission in good faith, sincerity, and loyalty. This is relevant to the concept of Uwanno (1999) that one important principle of good governance is that the performer must rely on accuracy and a sense of duty; must be honest, sincere, diligent, and patient; and must have discipline and respect others' rights. According to Uwanno (2008), good governance in Buddhism is Dasavidha-rājadhamma ("tenfold virtue of the ruler"), especially in the aspects of Maddava, meaning kindness, gentleness, humbleness, politeness, and gracefulness, so as to be devoted to and respected; and Ājjava, meaning honesty and artlessness, operating the mission with honesty and sincerity. This is also relevant to the study of PhramahaRittichaiYanitiko (Namsombat) (2016) on temple management under good governance: management according to good governance principles, in the aspect of responsiveness, consists of the capability to provide quality services and to finish operations within the specified period, creating reliability and trust. It is likewise relevant to the principle of disclosure and transparency: operations must be transparent and verifiable, and useful news and information must be disclosed so as to be acknowledged thoroughly. From this discussion, Good Governance (GOG) in the Buddhist organization can lead to a Non-Profit Organization (NPO) characterized by honesty, sincerity, and loyalty, which in turn leads to International Organizational Management (IOM) that creates trust and reliability in society.
The hypothesis results also show that public welfare, international organizational management, non-profit organization, and good governance were the important components of the management of Buddhist organizations in Thai society.
Recommendations:-
The researcher presented the recommendations in each component, as follows; Regarding the public welfare, it should focus on teaching people to know, to understand, and to realize. The teaching methods should be appropriate to the learners in order to receive real results so that people can be self-reliant at last.
Regarding the international organizational management, it should focus on always verifying the performance of every department in the organization, and making a report of the work and publicize it to the public in order to upgrade the standard of Buddhist organization to be reliable and to be the role model to other social organization.
Regarding the non-profit organization, it should focus on managing the organization with the middle way, and performing the mission with volunteerism in order to create good new image to the Buddhist organization in Thai society.
Regarding the good governance, it should focus on sacrificing happiness for the benefit and peace of the country, including behaving well and comprehending body, words, and mind so as to be respected, in order to create reliability of and faith in the organization in a society of materialism.
"year": 2019,
"sha1": "b1309c360161d3a43817162a212dfc85169f355b",
"oa_license": "CCBY",
"oa_url": "http://www.journalijar.com/uploads/47_IJAR-26188.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "aeebc7bac8b5273a48b73a811481e4d87113ede1",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Sociology"
]
} |
J-GEM observations of an electromagnetic counterpart to the neutron star merger GW170817
The first gravitational wave detected from a neutron star merger was GW170817. In this study, we present J-GEM follow-up observations of SSS17a, an electromagnetic counterpart of GW170817. SSS17a shows a 2.5-mag decline in the $z$-band from 1.7 days to 7.7 days after the merger. Such a rapid decline is not comparable with supernova light curves at any epoch. The color of SSS17a also evolves rapidly and becomes redder at later epochs; the $z-H$ color changed by approximately 2.5 mag in the period from 0.7 days to 7.7 days. The rapid evolution of both the optical brightness and the color is consistent with the expected properties of a kilonova that is powered by the radioactive decay of newly synthesized $r$-process nuclei. Kilonova models with Lanthanide elements reproduce the observed properties well, which suggests that $r$-process nucleosynthesis beyond the second peak takes place in SSS17a. However, the absolute magnitude of SSS17a is brighter than the expected brightness of the kilonova models with an ejecta mass of 0.01 $M_\odot$, which suggests a more intense mass ejection ($\sim 0.03\,M_\odot$) or possibly an additional energy source.
Introduction
After the first detections of gravitational wave (GW) events from binary black hole (BBH) coalescence (Abbott et al. 2016a, 2016b, 2017), the detection of GWs from a compact binary coalescence including at least one neutron star (NS) has been eagerly awaited. This is because a compact binary coalescence including an NS is expected to be accompanied by a variety of electromagnetic (EM) emissions. An optical and near-infrared (NIR) emission driven by the radioactive decays of r-process nuclei, a "kilonova" or "macronova" (Li & Paczyński 1998; Kulkarni 2005; Metzger et al. 2010), is one of the most promising EM counterparts. Optical and NIR observations of these events enable us to understand the origin of r-process elements.

Fig. 1. Three-color composite images of SSS17a using z-, H-, and Ks-band images. The size of the image is 56×56 arcsec². From left to right, the combined images created from the images taken between t = 1.17 and 1.70 days and between t = 7.17 and 7.70 days are shown.
On August 17, 2017, 12:41:04 GMT, the LIGO (Laser Interferometer Gravitational-Wave Observatory) Hanford observatory (LHO) identified a GW candidate from an NS merger (The LIGO Scientific Collaboration and the Virgo Collaboration 2017a). The subsequent analysis with the three available GW interferometers, including the LIGO Livingston Observatory (LLO) and Virgo, shrank the localization to 33.6 deg2 for a 90% credible region (The LIGO Scientific Collaboration and the Virgo Collaboration 2017b) and confirmed the detection (GW170817; The LIGO Scientific Collaboration and the Virgo Collaboration in prep.). A Fermi-GBM trigger, approximately 2 s after the coalescence, coincided with this GW event and provided additional initial information regarding the localization, with an error radius of 17.45 deg (The LIGO Scientific Collaboration and the Virgo Collaboration 2017c), which covers the area localized by the GW detectors. Coulter et al. (2017, in prep.) reported a possible optical counterpart, SSS17a, within the localization area, near NGC 4993. The source, located at (α, δ) = (13:09:48.07, -23:22:53.3), lies 10 arcsec away from NGC 4993 (Figure 1), an S0 galaxy at a distance of ∼40 Mpc (Freedman et al. 2001).
We conducted coordinated observations in the framework of Japanese collaboration for Gravitational-wave Electro-Magnetic follow-up (J-GEM) (Morokuma et al. 2016;Yoshida et al. 2017;Utsumi et al. in press) immediately after the discovery of the strong candidate SSS17a and investigated the characteristics of the optical and NIR emission. In this paper, we present the results of the J-GEM follow-up observations of SSS17a. All magnitudes are given using the unit of AB.
J-GEM Observations
A broad geometrical distribution of observatories was required to observe SSS17a because it was visible for a limited amount of time after sunset in the northern hemisphere. J-GEM facilities were suitable for observing this target because they are distributed all over the Earth in terms of the longitude, which included the southern hemisphere where the visibility was better. We used the following facilities to perform follow-up optical observations of GW170817: 8.2 m Subaru / HSC (Miyazaki et al. 2012) and MOIRCS (Suzuki et al. 2008) at Mauna Kea in the United States; 2.0 m Nayuta / NIC (near-infrared imager) at the Nishi-Harima Astronomical Observatory in Japan; 1.8 m MOA-II / MOA-cam3 (Sako et al. 2008;Sumi et al. 2016) and the 61 cm Boller & Chivens telescope (B&C) / Tripol5 at the Mt. John Observatory in New Zealand; 1.5 m Kanata / HONIR (Akitaya et al. 2014) at the Higashi-Hiroshima Astronomical Observatory in Japan; 1.4 m IRSF / SIRIUS (Nagayama et al. 2003) at the South African Astronomical Observatory; and 50 cm MITSuME (Kotani et al. 2005) at the Akeno Observatory in Japan.
We reduced all the raw images obtained using the aforementioned instruments in a standard manner. After eliminating the instrumental signatures, we made astrometric and photometric calibrations. The astrometric calibrations were performed with astrometry.net (Lang et al. 2010) using the default reference catalog USNO-B1.0 (Monet et al. 2003), while the PanSTARRS catalog (Chambers et al. 2016) was used for the HSC calibration because it is the standard catalog for the HSC reduction. The number density of stars in the B&C / Tripol5 images was not sufficient for deriving an astrometric solution with astrometry.net; we therefore used Scamp (Bertin 2006) for the B&C / Tripol5 astrometric calibration instead. The photometric calibrations were performed using the PanSTARRS catalog for the optical data and the 2MASS catalog (Cutri et al. 2003) for the NIR data. We did not apply a system transformation to adjust for small differences between the band systems, because it would require assuming a spectrum for the source; the exception was the MOA-cam3 photometry. The R-band used by MOA-cam3 on MOA-II is largely different from the standard Johnson system, so we determined an empirical relation for the differences between the catalog magnitudes and the instrumental magnitudes as a function of a color constructed from the instrumental magnitudes (e.g., Koshimoto et al. 2017). Using Lupton's equation, the catalog magnitudes were converted from the PanSTARRS system to the Johnson system.1 We converted Vega magnitudes to AB magnitudes using the method specified in Blanton & Roweis (2007).
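The empirical color-term calibration described above can be sketched as a simple least-squares fit. The relation, function names, and synthetic star values below are illustrative assumptions rather than the paper's actual pipeline.

```python
import numpy as np

# Assumed empirical relation:  m_cat - m_inst = zp + c * color_inst,
# fitted on reference stars and then applied to target photometry.
def fit_color_term(m_inst, m_cat, color_inst):
    """Return zero point zp and color coefficient c from reference stars."""
    A = np.column_stack([np.ones_like(color_inst), color_inst])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(m_cat) - np.asarray(m_inst), rcond=None)
    return coeffs[0], coeffs[1]

def calibrate(m_inst, color_inst, zp, c):
    """Apply the fitted relation to instrumental magnitudes."""
    return np.asarray(m_inst) + zp + c * np.asarray(color_inst)
```

A linear zero point plus color term is the usual first-order correction when the instrumental bandpass differs from the catalog system, as for the MOA-cam3 R-band here.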
A large contamination from NGC 4993 was a problem in performing accurate measurement of the flux of SSS17a (Figure 1). In order to minimize the systematic uncertainties in the background-subtraction and photometry techniques used for obtaining our measurements, we applied the same procedure to all the data, as follows. First, we subtracted the host galaxy component from the reduced image using GALFIT (Peng et al. 2002). The model employed was a Sersic profile with free parameters describing the position, integrated magnitude, effective radius, Sersic index, axis ratio, and position angle. A PSF model constructed using PSFEx (Bertin 2011) was used in the fitting procedure. Once the fitting converged, GALFIT generated residual images. We obtained photometry from the images in which the target was clearly visible after the subtraction. We then ran SExtractor 2.19.5 (Bertin & Arnouts 1996) on the residual images, enabling local sky subtraction with a grid size of 16 pixels (larger than the seeing size for all measurements) and PSF-model fitting for photometry. The residuals of the host galaxy subtraction could be reduced owing to this local sky subtraction. We adopted MAG_POINTSOURCE, the integrated magnitude of the fitted PSF, as the measure of magnitude and MAGERR_POINTSOURCE as the error of the measurements. We confirmed that the z-band measurements obtained from SExtractor were consistent with those of hscPipe, the standard pipeline for HSC reduction (Bosch et al. 2017). We also confirmed that the brightness of a reference star was constant in all the measurements for each individual instrument. The measurements are presented in Table 1.
Results
The top panel of Figure 2 shows the light curves of SSS17a in various bands based on our photometry. The magnitudes have been corrected for Galactic extinction assuming E(B − V) = 0.10 mag (Schlafly & Finkbeiner 2011). We do not consider the measurements obtained using Kanata / HONIR, Nayuta / NIC, and MITSuME, because they are unreliable owing to strong contamination from twilight or bad weather.
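The extinction correction applied here is a simple per-band subtraction; a minimal sketch follows, where the per-band coefficient value used in the test is an illustrative assumption rather than one quoted in the paper.

```python
# Correct an observed magnitude for Galactic extinction: m0 = m - R * E(B-V),
# with E(B-V) = 0.10 mag as assumed in the text. R is a band-dependent
# coefficient (illustrative values only, not taken from the paper).
def deredden(mag, r_coeff, ebv=0.10):
    """Return the extinction-corrected magnitude."""
    return mag - r_coeff * ebv
```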
A remarkable feature of SSS17a is the rapid decline in the z-band brightness by 2.5 mag in 6 days. In contrast, the fluxes in the NIR bands decline more slowly than those in the optical band. The densely sampled observations by IRSF / SIRIUS exhibit a slight brightening at the earliest epochs in the H- and Ks-bands and demonstrate that the light curves in the redder bands start to decline later and fade more slowly. The declines in the 6 days after the peak are 1.47 mag, 1.33 mag, and 0.96 mag in the J-, H-, and Ks-bands, respectively.
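The quoted declines can be turned into average per-day fade rates; this is a small arithmetic sketch using only the numbers given in the text.

```python
# Post-peak declines over the 6-day interval quoted above, converted to mag/day.
declines_mag = {"z": 2.5, "J": 1.47, "H": 1.33, "Ks": 0.96}
interval_days = 6.0
rates = {band: dm / interval_days for band, dm in declines_mag.items()}
# The redder the band, the slower the fade.
```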
These features are also depicted in the evolution of colors between the z-, J-, H-, and Ks-bands (bottom panel of Figure 2). The colors between the z-band and the NIR bands rapidly become redder: the reddening in 6 days is 2.43 mag in the z − Ks color and 1.00 mag in the z − J color. In contrast, the reddening within the NIR bands is as slow as 0.34 mag in the J − H color and 0.83 mag in the H − Ks color. As a result, the optical-NIR color of SSS17a progressively becomes redder with time (Figure 1). Figure 3 shows the z-band light curves for SSS17a, a Type Ia supernova (SN Ia, Nugent et al. 2002), a Type II plateau supernova (SN IIP, Sanders et al. 2015), and three kilonova models with an ejecta mass of Mej = 0.01 M⊙ from Tanaka et al. (2017). The kilonova models are a Lanthanide-rich dynamical ejecta model and post-merger wind models with a medium Ye of 0.25 and a high Ye of 0.30. The model with Ye = 0.25 contains a small fraction of Lanthanide elements, while that with Ye = 0.30 is Lanthanide-free. The rapid decline of SSS17a is unlike the behavior of known supernovae, and the z-band magnitude of SSS17a at t = 7.7 days is > 3 mag fainter than supernovae Ia and IIP. However, the rapid decline of SSS17a is consistent with the expected properties of kilonovae, although SSS17a is 1-3 mag brighter than all three kilonova models.
Origin of SSS17a
The rapid evolution of SSS17a is characterized by the magnitude difference in the z-band (∆z) between t = 1.7 days and 7.7 days (a 6-day interval). The red point in Figure 4 shows ∆z and the i − z color at t = 1.7 days (see Utsumi et al. in press). For comparison, we show ∆z over a 6-day interval and the (i − z)1st color at the 1st epoch for supernovae, using the spectral template of Nugent et al. (2002). The points show ∆z and (i − z)1st with a 1-day step from the day of the merger, and their time evolutions are connected by lines. The fastest decline observed for supernovae is approximately 0.5 mag in 6 days; therefore, supernova models cannot explain the rapid decline of SSS17a. The wind model with medium Ye (Ye = 0.25) at t = 1 day or 2 days provides the best agreement with the observation. The color evolution of SSS17a is also consistent with that of the kilonova models. Figure 5 shows the z − H and J − H color curves of SSS17a compared with those of the three kilonova models. The J − H colors and the absence of strong evolution are broadly consistent with the models. The z − H color and its temporal reddening are similar to those of the models containing Lanthanide elements. In contrast, the z − H color of SSS17a is not consistent with the Lanthanide-free model (blue curve in Figure 5); i.e., the high opacities of Lanthanide elements provide a better description of SSS17a.
Fig. 3. Absolute magnitude of the z-band observations (dots) compared with models of supernovae (gray curves) and kilonovae (colored curves). The kilonova models are calculated assuming that the mass of the ejecta from a neutron star merger, Mej, is 0.01 M⊙. The absolute magnitudes of the kilonova models decline quickly compared with supernovae. The z-band light curve of SSS17a follows the decline of the kilonova models, although the observed magnitudes are 1-3 magnitudes brighter than the model predictions. The arrows indicate the behaviors of the brightness decline corresponding to various ∆z, the difference in magnitude between two epochs separated by ∆t = 6 days.
The properties of SSS17a, i.e., the rapid evolution, red color, and rapid reddening, are consistent with the standard model of a kilonova. The color evolution suggests that the ejecta contain a small amount of Lanthanide elements. This means that r-process nucleosynthesis beyond the 2nd peak takes place in the NS merger event GW170817/SSS17a. However, the absolute magnitude of SSS17a is brighter than that of the kilonova models with Mej = 0.01 M⊙. This discrepancy can be explained by adopting a larger ejecta mass, e.g., Mej = 0.03 M⊙, which gives a higher radioactive luminosity. Since a higher ejecta mass makes the timescale of the light curve longer, a higher ejecta velocity may also be required to keep the good agreement in timescale shown in this paper (Tanaka et al. in press). Alternatively, a higher luminosity could be accounted for by an additional energy source, such as cocoon emission (Gottlieb et al. 2017).
Fig. 4. The ∆z versus (i − z)1st plane with kilonova and supernova models. For SSS17a (red symbol), ∆z is the magnitude difference between the two epochs, t = 1.7 and 7.7 days (∆t = 6 days) after the detection of GW170817, and (i − z)1st is the color at the first epoch (t = 1.7 days). The models for kilonovae and supernovae are shown by colored dots and gray dots, respectively. Each dot corresponds to a different starting epoch of ∆t with an increment of 1 day. The larger dots in the kilonova model loci show the values for the case in which the starting epoch of ∆t is the 2nd day from the merger. The kilonova models are located far from the crowd of supernova models at 40 Mpc, especially in terms of ∆z. The data point of SSS17a is consistent with the medium-Ye wind model.
Summary
We present J-GEM observations of SSS17a, a promising EM counterpart to GW170817. Intensive observations were performed with the Subaru (z- and Ks-band), IRSF (J-, H-, and Ks-band), B&C (g-, r-, and i-band), MOA-II (V- and R-band), Nayuta (J-, H-, and Ks-band), Kanata (H-band), and MITSuME (g-, r-, and i-band) telescopes. SSS17a exhibits an extremely rapid decline in the z-band, which is not explained by any type of supernova at any epoch. In addition, the evolution of the color is quite rapid; the z − H color changed by approximately 2.5 mag in 7 days. We show that the observational properties, i.e., the rapid evolution of the light curves, the red color, and its rapid evolution, are consistent with models of kilonovae containing Lanthanide elements. This indicates that r-process nucleosynthesis beyond the second peak takes place in the NS merger event GW170817. However, the absolute magnitude of SSS17a is brighter than that of kilonova models with Mej = 0.01 M⊙. This suggests that the mass ejection is more vigorous (∼ 0.03 M⊙) or that there is an additional energy source.
Short Communication: Efficacy of Two Commercial Disinfectants on Paenibacillus larvae Spores
Paenibacillus larvae is a spore-forming bacterium causing American foulbrood (AFB) in honey bee larvae. The remains of a diseased larva contain billions of extremely resilient P. larvae spores that stay viable for decades. Burning clinically symptomatic colonies is widely considered the only workable strategy to prevent further spread of the disease, and the management practices used for decontamination require high concentrations of chemicals or special equipment. The aim of this study was to test and compare the biocidal effect of two commercially available disinfectants, "Disinfection for beekeeping" and Virkon S, on P. larvae. The two products were applied to P. larvae spores in suspension as well as inoculated on two common beehive materials, wood and Styrofoam. "Disinfection for beekeeping" had a 100% biocidal effect on P. larvae spores in suspension compared to 87.0–88.6% for Virkon S, which, however, had a significantly better effect on P. larvae on Styrofoam. The two disinfectants had similar effects on infected wood material.
INTRODUCTION
Paenibacillus larvae is a spore-forming, Gram-positive bacterium causing the severe disease American foulbrood (AFB) in honey bee larvae. Honey bee larvae become infected by ingesting food contaminated with P. larvae spores, which germinate in the midgut and eventually kill the larvae. The remains of the larvae contain billions of spores and serve as sources for new infections. P. larvae spores are resilient and can remain viable in the environment for decades (1)(2)(3). A common way to control AFB is by burning the contaminated hives and bees, although the latter can sometimes be saved as an artificial swarm, housed on new or disinfected material (4). Hive material can be decontaminated using chemical disinfectants or heat. Chemical disinfectants have been shown to have a high efficacy on spores in suspension, but are less effective on wood-based equipment (3,5). There are several methods using heat for decontamination of hive material, for example dipping in hot paraffin, scorching, dry heat, and autoclaving (3). These methods are effective (3,6), but require access to advanced equipment.
Our aim was to test and compare the biocidal effect of 2 disinfectants, "Disinfection for beekeeping" (DFB) (Swienty, Denmark) and Virkon S (Lanxess, Germany), on P. larvae spores. DFB was developed for disinfection of hive material, gloves, and tools and, according to the manufacturer (www.swienty.com, viewed October 9, 2019), has a 99.99% biocidal effect on all viruses, bacteria, spores, and fungi. Virkon S is a common disinfectant that has been on the market for over
MATERIALS AND METHODS
A spore suspension was prepared from P. larvae cultures on agar plates (14 days to obtain sporulation) in sterile 0.9% saline solution. The spore suspension was stored at 4 °C, heat shocked at 85 °C for 10 min, and diluted to the desired concentrations before the start of each experiment.
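Dilution-and-plating experiments like this one rely on standard plate-count arithmetic to recover the spore concentration of the original suspension. The sketch below is generic microbiology bookkeeping; the function name and example values are illustrative assumptions, and the formula is not stated in the paper.

```python
# Back-calculate the concentration of the undiluted suspension from a single
# plate count:  CFU/mL = colonies * dilution_factor / plated_volume_mL.
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """CFU/mL of the undiluted suspension from one plate count."""
    return colonies * dilution_factor / plated_volume_ml
```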
The experiments were performed as described in Figure 1 and repeated at least 3 times. P. larvae were cultured according to standard cultivation methods (8).
The biocidal effect of the disinfectants was calculated by comparing the number of CFUs from the treated samples and the untreated spore suspension or the mock treated wood and Styrofoam pieces.
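A minimal sketch of that comparison, assuming the biocidal effect is the percent reduction in CFUs relative to the untreated control (the paper states the comparison but not the exact formula):

```python
# Assumed formula: percent of spores killed relative to the untreated control.
def biocidal_effect_pct(cfu_treated, cfu_control):
    """Percent reduction in colony-forming units versus the control."""
    return 100.0 * (1.0 - cfu_treated / cfu_control)
```

With this definition, zero surviving colonies gives the 100% effect reported for DFB in suspension.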
Student's t test (unpaired, 2-tailed) was used to identify statistically significant differences, with a P ≤ 0.05 considered significant.
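The unpaired Student's t test is built on the pooled-variance t statistic; here is a stdlib-only sketch of that statistic (obtaining the two-tailed p-value would additionally require the t distribution's CDF, e.g. from scipy.stats).

```python
import math
from statistics import mean, variance

# Pooled-variance (equal-variance) t statistic for two independent samples,
# as used in an unpaired Student's t test.
def t_statistic(a, b):
    """Return the t statistic; degrees of freedom are len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
```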
RESULTS
DFB had the highest biocidal effect (100% already after 2 min) on spores in suspension and was significantly more efficient than the 5 and 15 min Virkon S treatments (all P = 0.01, Figure 2A).
On wood, no significant differences could be seen between DFB and Virkon S, or between the different treatment times (Figure 2B).
On Styrofoam, a significantly higher biocidal effect was observed after 30 min of treatment with Virkon S compared to 2 and 10 min of treatment with DFB (both P = 0.02, Figure 2C). The 30 min treatment with Virkon S also had a significantly higher biocidal effect than the 5 min treatment (P = 0.01, Figure 2C).
DISCUSSION
This study compares the biocidal effect of 2 disinfectants on P. larvae spores. Both disinfectants had an effect on the bacterial spores in suspension and on wood and Styrofoam. DFB had the best effect on the bacterial spores in suspension, where all P. larvae spores were killed. These results are in line with the information from the manufacturer stating that DFB kills all viruses, bacteria, fungi, and spores within 45 s. However, the effect of DFB on spores on wood and Styrofoam was lower than in suspension (Figure 1). Virkon S was slightly less effective than DFB on spores in suspension, but the differences were not significant. Thirty minutes of treatment (the duration recommended by the manufacturer) with Virkon S on contaminated Styrofoam was significantly more effective than the treatment with DFB (Figure 2C). Virkon S has in a previous study been shown to kill 80% of P. larvae spores (9). In this study, however, the biocidal effect ranged from 88.6 to 96.8% after 30 min of treatment (Figure 1). The effect of both disinfectants on wood varied more than the effect on Styrofoam and in suspension, most likely due to difficulties recovering P. larvae from wood. This is probably because wood is more porous and absorbs the liquid with the spores. P. larvae spores can "hide" in wood, making it more difficult for the disinfectant to reach the bacterium. The wood and Styrofoam pieces used in this study were clean, i.e., they were not covered in wax or propolis. Any disinfectant will probably be less effective on used, non-cleaned hive material, where large amounts of bacterial spores may be inaccessible to the disinfectants. It is therefore important that infected materials are thoroughly cleaned before being treated with disinfectants.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
EF and AN developed the research concept. JK, EF, and AN designed and performed the experiments. JK, JMut, JMug, and AN co-wrote the manuscript. EF provided the resources, supervision, and funding assistance. All authors contributed to the article and approved the submitted version.
CONGENITAL SENSORY NEUROPATHY (HSAN II)
A 5-year-old girl with hereditary sensory neuropathy type II, manifesting as congenital absence of pain sensation and trophic changes in the skin, is reported. This child presented with multiple ulcers over the hands and feet since 2 years of age. The ulcers were of a non-healing type with serosanguineous discharge. There is abnormal gait and weakness in the upper and lower limbs. On examination there are deep ulcers measuring 5x7x2 cm over the left foot. Fingers of both hands and feet were mutilated with loss of phalanges; sensations to fine touch, pain, and temperature are decreased bilaterally below the mid-arm and feet; vibration sensations were normal; proprioception could not be tested due to deformities. Sensory and motor nerve conduction studies showed evidence of sensorimotor axonal neuropathy.

INTRODUCTION: The hereditary sensory and autonomic neuropathies (HSAN) include a number of inherited disorders that are associated with sensory dysfunction (altered pain and temperature, depressed reflexes) and differing degrees of autonomic dysfunction (gastroesophageal reflux, postural hypotension, excessive sweating). Dyck and Ohta proposed a numerical classification of four distinct forms of HSAN.1 Clinical features and the degree of both sensory and autonomic dysfunction help in the diagnosis of these disorders, with biochemical evaluations and pathologic examinations serving to further confirm differences. Treatments for all these disorders are supportive. The prevalence of congenital sensory neuropathy (HSAN II) is very low worldwide.

CLINICAL PRESENTATION: HSAN II is non-progressive and presents in infancy or early childhood. It is characterized by profound sensory loss and pronounced hypotonia. HSAN II occurs sporadically or with autosomal recessive inheritance. There is no particular ethnic preponderance or sex preference. To date, no increased incidence of consanguinity has been reported.
All peripheral sensations are affected, but the distribution of somatic involvement may vary. Pain, temperature, and position senses are involved. Trophic changes are present, especially in the upper and lower extremities, as in our case. Morvan was the physician who first described this disease, and hence HSAN II has been termed "Morvan's" disease (the occurrence of 'painless whitlows', or acrodystrophic neuropathy).2,3 HSAN II is associated with repeated occurrence of unrecognized injuries and fractures of the hands, feet, and limbs, as well as Charcot joints.4,5 Deep tendon reflexes are decreased and hypotonia is common. Hypotonia delays attainment of developmental milestones. In spite of marked sensory abnormalities, other aspects of the neurological examination may be normal, including mental function, cranial nerves, and cerebellar and motor functions. There is no muscle atrophy or muscle weakness even though the tendon reflexes are decreased or absent.
DOI: 10.14260/jemds/2015/1665
PATHOLOGY: Sural nerve biopsy shows a marked reduction in nerve size and depletion of large and small myelinated fibers but only a slightly decreased number of unmyelinated fibers as in our case.
No cutaneous sensory receptors or nerve fibers are seen, but catecholaminergic sympathetic fibers can be demonstrated by aldehyde-induced fluorescence.
DIAGNOSIS:
The diagnosis is based on documenting profound peripheral sensory involvement of both peripheral and cranial nerves (this can be demonstrated by an absent axon flare after intradermal histamine). Clinical identification rests on the finding of a mutilating acropathy with a severe, distally pronounced impairment of all sensory qualities (light touch sensation, position sense, and vibratory perception, as well as pain and temperature perception). Supportive evidence consists of self-mutilation, hypotonia with delayed milestones, and normal somatic growth.
Neurophysiological evaluation with quantitative sensory testing shows abnormal vibratory thresholds. It may also reveal elevated thermal thresholds at the hands and feet.6 Typically, nerve conduction studies confirm marked impairment of sensory nerve conduction velocities and absent sensory nerve action potentials, but motor nerve conduction velocities are at or slightly below the normal limit.
MANAGEMENT: Management is essentially symptomatic and preventative. If feeding problems compromise nutrition and gastroesophageal reflux is also present, surgical options such as fundoplication with gastrostomy are recommended. Sleep pneumograms are helpful if there is central apnea and respiratory support is needed. Parent and patient education is essential so that they learn how to avoid injury and how to be alert for signs of unrecognized trauma.
Early diagnosis is crucial for prevention of injury, self-mutilation and growth retardation. This case Report highlights the importance of assessment of pain sensation be a part of routine examination of newborn.
CASE REPORT:
A 5-year-old female child born of a non-consanguineous marriage presented with multiple ulcers over the hands and feet since 2 years of age. There was a history of loss of pain and temperature sensations over the hands and feet, followed by gradual ulceration and loss of fingers. The ulcers were of a non-healing type with serosanguineous discharge. There is abnormal gait and weakness in the upper and lower limbs.
There is no history of fever or recurrent respiratory or gastrointestinal infections. There were no similar complaints in the family. On examination there are deep ulcers measuring 5x7x2 cm over the left foot; the base is fixed to the underlying muscle, the floor is covered with pale granulation tissue, the edges are sloping, and the discharge from the ulcer is serosanguineous. The surrounding skin of the foot is oedematous and hyperpigmented.
There is fanning of the toes. The digits of both hands and feet were mutilated with loss of phalanges; sensations to fine touch, pain, and temperature are decreased bilaterally below the mid-arm and feet; vibration sensations were normal; proprioception could not be tested due to deformities.
Peripheral nerves are not thickened. Sympathetic skin response is absent bilaterally below the mid-forearm and feet. On motor examination, power is normal. Deep tendon reflexes are decreased in the upper and lower limbs. Other system examination is normal. There was no other abnormality of intelligence, and cranial nerve examination is normal. Routine blood investigations are normal. Sensory and motor nerve conduction studies showed evidence of sensorimotor axonal neuropathy. A slit skin smear from the patient is normal.
The case is diagnosed as hereditary sensory axonal neuropathy type 2, managed conservatively with regular dressings, avoidance of physical trauma, and hand and foot care.
Analyses of virus/viroid communities in nectarine trees by next-generation sequencing and insight into viral synergisms implication in host disease symptoms
We analyzed virus and viroid communities in five individual trees of two nectarine cultivars with different disease phenotypes using next-generation sequencing technology. Different viral communities were found in different cultivars and individual trees. A total of eight viruses and one viroid in five families were identified in a single tree. To our knowledge, this is the first report showing that the most-frequently identified viral and viroid species co-infect a single individual peach tree, and is also the first report of peach virus D infecting Prunus in China. Combining analyses of genetic variation and sRNA data for co-infecting viruses/viroid in individual trees revealed for the first time that viral synergisms involving a few virus genera in the Betaflexiviridae, Closteroviridae, and Luteoviridae families play a role in determining disease symptoms. Evolutionary analysis of one of the most dominant peach pathogens, peach latent mosaic viroid (PLMVd), shows that the PLMVd sequences recovered from symptomatic and asymptomatic nectarine leaves did not all cluster together, and intra-isolate divergent sequence variants co-infected individual trees. Our study provides insight into the role that mixed viral/viroid communities infecting nectarine play in host symptom development, and will be important in further studies of epidemiological features of host-pathogen interactions.
In addition, complex mixed infections have been found among fruit-tree-infecting viruses 2,5,12,13. Thus, the potential contribution of each single virus infection to the observed symptoms cannot easily be associated with a disease in infected Prunus trees. In fact, many horticultural plants that are routinely clonally propagated are reservoirs of a large variety of viruses and viroids. The importance of the virome in mammalian biology, and the emerging concept of virome-host interactions and their relationship to host genetics, was first described by Virgin (2014) 14. Virome and microbiome interactions with the host, especially in mammalian biology, have recently become a hot research topic that relies on bioinformatic tools and NGS technology [14][15][16][17][18]. However, only a limited number of studies have revealed viral communities or viromes in peach 17. In this study, we used NGS technology to study the viral communities in nectarine trees with different disease phenotypes. We identified both known and novel viruses and viroids, and performed comparative analyses of the potential contribution of the pathogens to disease symptoms. Our results will extend the range and kinds of virus and viroid species known to infect peach trees, and provide insight into the viral synergisms and the agents that might be associated with disease symptoms in nectarine.
Virus and viroid accumulation and pathogen communities within individual nectarine trees.
In the five nectarine tree samples, T01 and T02 were collected in greenhouse #1 from the same nectarine cultivar 'Youtao 1233' (10 year-old trees), while T03, T04, and T05 were collected in greenhouse #2 from nectarine cultivar 'Zhongyou 4' (5 year-old trees). The five samples came from trees that showed different leaf and fruit symptoms ( Table 1, Fig. 1).
To perform comparative analyses of the different symptoms observed in the nectarine trees, we used NGS of the sRNAs extracted from the five samples to obtain a complete survey of the virus and viroid communities infecting each tree. The Illumina reads obtained from sequencing the five cDNA libraries, which were prepared using RNA extracted from the scion parts of the grafted trees, yielded between 22,434,184 and 30,438,485 raw sRNA reads per library. From these, we obtained between 20,906,239 and 28,435,618 clean reads for samples T01 to T05 (Table S1). A virus and viroid library was constructed from virus and viroid genomes available from NCBI and was then used for mapping of the sRNA reads using the short-sequence alignment program Bowtie. The majority of the reads were 18 to 25 nt in length, with most being either 21 nt or 22 nt. De novo assembly of the sRNAs and blastn and blastx searches resulted in assemblies of 15 to 744 contigs, with lengths ranging from 33-474 nt, that were associated with known viruses/viroids. The virus/viroid-associated reads per sample ranged from 1.11% to 9.60% of the clean sRNA reads for the five samples. We found that samples T01 and T02 from greenhouse #1 had the highest number of virus/viroid-associated reads, with 5.53% and 9.60% of the clean sRNA reads, respectively, while the virus/viroid-associated reads in samples T03, T04, and T05 ranged from 1.11% to 2.71% (Table S1).
To compare the different viral communities and the relative abundances of individual viruses and viroids in each sample, we examined individual contig numbers for previously-identified viruses and viroids, and calculated the percentage of reads associated with each individual virus or viroid by dividing the number of virus- or viroid-associated reads by the total number of clean sRNA reads (× 100).
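As a simple illustration of this calculation, the per-pathogen read percentage can be computed as below. The clean-read total is the low end reported for samples T01-T05; the virus-associated count is a hypothetical placeholder, not a value from the study.

```python
# Sketch of the read-percentage calculation described above. The clean-read
# total matches the reported minimum for the five libraries; the PLMVd read
# count is an illustrative placeholder only.
def pct_reads(pathogen_reads: int, total_clean_reads: int) -> float:
    """Percentage of clean sRNA reads attributable to one virus/viroid."""
    return 100.0 * pathogen_reads / total_clean_reads

total_clean = 20_906_239          # clean sRNA reads in one library
plmvd_reads = 583_000             # hypothetical PLMVd-associated reads
print(f"{pct_reads(plmvd_reads, total_clean):.2f}%")
```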
The virus and viroid communities and the numbers of individual pathogens differed between samples T01 and T02, which came from trees with different fruit symptoms. Sample T02 had the most identified viral species, and also the highest number of contigs, for one viroid and nine virus genera in five families; these included PLMVd in the genus Pelamoviroid (family Avsunviroidae); two unassigned viruses, NSPaV and PaLV (family Luteoviridae); PBNSPaV in the genus Ampelovirus (family Closteroviridae); ACLSV in the genus Trichovirus and APVs in the genus Foveavirus (most contigs corresponded to genome segments shared by several APVs, but a few contigs were identified as APV1 or APV3 based on high sequence similarities to specific segments); CGRMV and cherry necrotic rusty mottle virus (CNRMV) in the genus Robigovirus (family Betaflexiviridae); PeVD in the genus Marafivirus (family Tymoviridae); and another 11 contigs, showing low sequence similarities to segments of marafi-, tymo-, and maculaviruses, that may represent a novel unknown virus in the family Tymoviridae ( Fig. 2A). Excluding CGRMV and CNRMV, the other seven viruses/viroids (PLMVd, NSPaV, PaLV, PBNSPaV, ACLSV, APVs, PeVD) were also identified in sample T01 (Fig. 2A). This is also the first report of PeVD in Prunus trees in China.
PBNSPaV and PLMVd (34.32% and 50.45% of the total virus- and viroid-associated sequence reads, respectively) were the dominant pathogens identified in sample T01; these same two pathogens were also dominant in sample T02, accounting for 63.75% (PBNSPaV) and 19.74% (PLMVd) of the total virus- and viroid-associated reads. In addition, CGRMV (5.77% of the total virus-associated reads) and CNRMV (5.77% of the total virus-associated reads) were detected only in sample T02 (Fig. 2B,C). The numbers of contigs in T01 and T02 that mapped to the PBNSPaV genomes in GenBank differed, indicating the presence of divergent sequence variants in the two samples ( Table 2). We also found that the numbers of contigs that mapped to several PBNSPaV isolates (Phm-WH-3, WH-1, PR258-2) collected from plum trees with disease symptoms 19,20 were significantly increased in sample T02 (Table 2). By comparing the numbers of assembled viral contigs and the percentages of virus-associated reads between samples T01 and T02, we found that sample T02, collected from the tree with fruit pitting, had higher read levels of the viruses PBNSPaV, CGRMV, and CNRMV, and a higher number of contigs mapping to PBNSPaV isolates from symptomatic samples.
The viral communities and the numbers of individual viruses varied significantly among the three samples with diverse disease phenotypes (T03, T04, and T05). In the asymptomatic sample T03, we identified contigs associated with only two known viruses/viroids (PLMVd and NSPaV), while in sample T04, which had dimpled fruits, we identified contigs from two additional viruses (ACLSV and APV2) besides PLMVd and NSPaV (Fig. 3A). Figure 3(B,C) shows that PLMVd and NSPaV accounted for the highest percentages of the individual pathogen-associated reads and were the dominant pathogens in samples T03 and T04. From these results, we infer that co-infection with ACLSV and APV2, or their interactions with PLMVd or NSPaV, may be associated with the fruit dimpling symptoms seen in sample T04. Sample T05, with chlorotic mottle leaf symptoms, contained mainly PLMVd sequences (14 assembled contigs, 2.27% of the total clean reads) and one contig with identity to a segment (68/84 nt) of GRGV in the genus Maculavirus (family Tymoviridae); however, the read number was very low (0.0012% of the total clean reads), and this will need to be confirmed by subsequent RT-PCR to determine whether the low level of reads resulted from sample contamination (Fig. 3).
Confirmation of identified viruses and viroids by RT-PCR.
In order to determine whether the viruses and viroids identified by NGS were actually present in the five nectarine tree samples, we conducted RT-PCR using primer pairs specific for the individual viruses/viroids (Table S2). We found that it was very difficult to distinguish between very closely related viruses such as APV1, APV2, and APV3, or CGRMV and CNRMV, and to confirm the presence of viruses represented by very few, short contigs with low read numbers in the NGS data. The RT-PCR results, followed by Sanger sequencing, indicated that, excluding APV3 and grapevine red globe virus (GRGV), the samples were positive for all other viruses identified by NGS, suggesting that these viruses were actually present in the RNA extracted from the nectarine tree samples (Fig. 4). In order to further confirm the results of the NGS screen, we also tested for individual viruses/viroids in 36 samples collected from different greenhouses and cultivars using RT-PCR assays; these results are summarized in Table S4. RT-PCR detection indicated that the other test samples (T01-type: N8, N8-2; T02-type: N9, N9-2; T05-type: P12, P13, P14; T03-type: P25, P26, P27; T04-type: P20, P21, P23, P24) with disease symptoms similar to those of the five individual trees (T01, T02, T03, T04, and T05, respectively) had uniform virus and viroid communities that
Phylogenetic analysis of the identified viruses/viroids. We identified PLMVd infection in all five
tested nectarine trees, and 17 complete genome sequences of PLMVd isolates were obtained from the five samples by RT-PCR and cloning. A combined phylogenetic analysis of the genomic sequences of the PLMVd isolates from this study with PLMVd sequences from GenBank gave three major phylogroups, and we found that the PLMVd sequences associated with symptomatic and asymptomatic trees did not all cluster together, except for two previously-reported peach calico (PC) isolates 21 (which clustered alone in Group II; Fig. 5). Further pairwise comparisons of the 17 PLMVd genome sequences showed that the PLMVd isolates from the nectarine samples are quite divergent, sharing between 80.0 and 99.7% nucleotide identity. Of the five sampled trees, three sequences from the T03 isolate (asymptomatic sample) that clustered in subgroup IA shared a high degree of nucleotide identity (98.5-98.8%), whereas sequences from the other four sampled trees were highly variable and clustered into different subgroups, IA to IF (Fig. 5). These results suggest that genetically distinct, intra-isolate sequence variants were present in four of the tree samples (T01, T02, T04, and T05), based on the distribution of sequences in different phylogroups. We conclude that co-infection of individual trees by divergent sequence variants is common.
In the same way, we constructed two phylogenetic trees based on one nearly complete genome sequence (excluding the 5' and 3' terminal sequences) and 11 RNA-dependent RNA polymerase (RdRp) gene sequences from NSPaV isolates T01, T02, T03, and T04 (Fig. 6A). Three NSPaV RdRp gene sequences from T02 grouped with isolate SK from a nectarine in South Korea 22 . Other sequences, including the RdRp and nearly complete genome sequences of the T01, T03, and T04 NSPaV isolates, were closely related to isolate NSPaV/12P42 derived from a nectarine in the USA 5 , but the three NSPaV RdRp gene sequences from T04 were not consistent, and the T04-1 sequence was detected as a recombinant. Six coat protein (CP) gene sequences from the PBNSPaV isolates from T01 and T02 are closely related to the known isolates Plm-WH-3 and WH-1 19 from peach in China. One PBNSPaV CP gene sequence (T01-1) is closely related to the known isolate GS 19 , also from peach in China (Fig. 6B). Four heat shock protein 70 homolog (HSP70h) gene sequences from the PBNSPaV isolates from samples T01 and T02 are also closely related to the known isolates Plm-WH-3 and WH-1, and three T01 HSP70h gene sequences are closely related to the known isolate GS (Fig. 6B). Seven CP sequences from the ACLSV isolates from samples T01, T02, and T04 in our study all grouped together with the known isolates Z1 and Z3 23 from peach trees in China (Fig. 6C). Six CP sequences from the APV1 isolates from T01 and T02 are closely related to each other and to the known isolates D2363 and D2367 (Fig. 6D). (Figure legend note: reads from APV1 and APV3 were combined ("APVs") due to their high degree of sequence similarity; reads aligning to the 11 contigs with low sequence similarity to marafi-, tymo-, and maculaviruses, which may represent a novel unknown virus in the family Tymoviridae, were combined as "other".)
The three CP sequences from the APV2 isolates from sample T04 are closely related to the APV2 isolate Bonsai 12 from Japan (Fig. 7E). The three CGRMV CP sequences from isolate T02 are closely related to isolate F9 24 from China (Fig. 7F). The three CP sequences from the CNRMV isolates from sample T02 are closely related to isolates Pe-WH-18 25 and 103-13 from China (Fig. 7G). Because only a few sequences are available in GenBank for PaLV and the newly reported PeVD, we analyzed their phylogenetic relationships using an outgroup. As expected, six RdRp and three CP sequences from PaLV and five PeVD replicase polyprotein gene sequences from isolates T01 and T02 were found to be closely related to the isolates in GenBank (Fig. 7H,I).
Sequence comparisons using the T01-T05 datasets showed that the NSPaV and PBNSPaV sequences in this study are highly variable, with the nucleotide sequence identities among the NSPaV RdRp gene sequences ranging from 89.4 to 99.6%, with comparable values of 93.2 to 99.5% for the PBNSPaV CP gene sequences. This contrasts with other gene sequences from ACLSV, APV1, APV2, CGRMV, CNRMV, PaLV, and PeVD that are highly homologous (96.7 to 100%) in identified isolates.
Discussion
NGS technologies provide a powerful way to detect and identify viral pathogens with no prior knowledge of virus genome sequences 3,26 . This technology is finding increased applications in revealing the viromes that contribute to host phenotype and also the recent evolutionary history of RNA viruses [14][15][16][17][18]27,28 .
In this work, we studied co-infecting virus and viroid communities using NGS technology in five individual trees of two nectarine cultivars associated with different disease phenotypes, and then confirmed the identified viruses/viroids using RT-PCR in 36 samples from four cultivars. Identifying comparable plant materials with the same genetic background and cultivation history associated with the different disease symptoms can be difficult. In this study, some valuable samples were collected from trees in two small greenhouses that were grown under the same environmental conditions, with the same cultivation methods, pesticides, and fertilizers, which allowed for a direct comparison of the effects of co-infecting virus and viroid communities in trees with different disease phenotypes. The results indicated that the viral species present in viral communities isolated from the four different cultivars were diverse. A single tree, T02 of cultivar 'Youtao 1233', which showed symptoms of fruit pitting, harbored the most identified pathogens: nine different viruses/viroids from five families, including PLMVd, NSPaV, PaLV, PBNSPaV, ACLSV, APV1, CGRMV, CNRMV, and PeVD. Tree T01, which did not show fruit pitting, harbored seven of the above viruses/viroids, lacking only CGRMV and CNRMV. To our knowledge, this is the first report of so many viral species co-infecting an individual peach tree, and also the first report of PeVD in peach in China. These results confirmed and extended the utilization of NGS to detect fruit tree viruses and provide further insight into the complex multiple infections in individual trees and between different cultivars. Combined genetic variation and sRNA data analyses of the co-infecting viruses/viroids in this study implied that viral synergisms and divergent sequence variants play an important role in determining disease symptoms.
This includes synergisms among a few genera in the viral families Betaflexiviridae, Closteroviridae, and Luteoviridae, which may facilitate divergent virus sequence variants and increase the titers of pathogenic viral genotypes, thereby contributing to symptom expression in nectarine.
Two functional classes of viral genes, a transcription-related RdRp gene and a structural CP gene, are often used for phylogenetic classification of plant viruses 19,23,29,30 . The HSP70h genes are used for phylogenetic classification of plant viruses in the family Closteroviridae 31 . A study by Qu et al. also showed that the evolutionary relationships of global PBNSPaV isolates can be reliably inferred using HSP70h sequences 19 .
Phylogenetic analyses of the identified 10 viruses and one viroid in the five individual nectarine trees using whole genome sequences, nearly complete genome sequences, and CP, RdRp, and HSP70h gene sequences showed that PLMVd, NSPaV, and PBNSPaV gene sequences were more divergent, and presented different sequence variants or recombinants. However, gene sequences from ACLSV, APV1, APV2, CGRMV, CNRMV, PaLV, and PeVD from the five individual trees showed much less variability. It should be noted that full genome sequences of the identified viruses could not be obtained in this study; however, the phylogenetic relationships were analyzed using two functional classes of viral genes that might be involved in viral synergisms.
Stem pitting has been diagnosed in peaches infected by tomato ringspot virus (ToRSV) 32 and PBNSPaV 19 , and in nectarine infected by NSPaV 5 . The PBNSPaV peach isolate WH-1 causes trunk gummosis, cracking, necrosis, and stem pitting symptoms 19 . Based on the results of our phylogenetic analysis, CGRMV and CNRMV are most similar to apple stem pitting virus 24 , but the effects on fruit pitting in Prunus have not been reported.
Compared with sample T01 (no fruit pitting), CGRMV and CNRMV were detected only in the fruit pitting sample T02, and their CP sequences clustered into a single group with known peach isolates from China. Also, the number of PBNSPaV-specific reads in T02 was higher than in T01, and the number of contigs in T02 that mapped to the known PBNSPaV isolate WH-1 was significantly higher. The NSPaV isolates in T02 and T01 clustered into different groups, confirming that they represent divergent sequence variants. Taken together, these results indicate that the fruit pitting observed in the T02 nectarine tree could be the result of interactions between PBNSPaV in the family Closteroviridae, CGRMV and CNRMV in the family Betaflexiviridae, and NSPaV in the family Luteoviridae. Therefore, we speculate that CGRMV and CNRMV might serve as "helper" viruses enabling the involvement of specific variants of NSPaV and/or PBNSPaV in the expression of fruit pitting symptoms. Synergisms have been reported between members of the families Closteroviridae and Betaflexiviridae: the p10 silencing suppressor from grapevine virus A, in the family Betaflexiviridae, enhances the infectivity of the closterovirus beet yellows virus 33 . To our knowledge, however, the synergism among several viral genera reported in this study is the first such report.
Again, samples T03, T04, and T05 showed different disease phenotypes; we detected PLMVd and NSPaV in the asymptomatic tree T03; PLMVd, NSPaV, APV2, and ACLSV in tree T04 with dimpled fruits; and only PLMVd in tree T05 with mottled leaves. In addition, of the five sampled nectarine trees, APV2 was found only in sample T04. This result was also confirmed in an additional five sampled trees with dimpled fruits using an RT-PCR screen. These results suggest that fruit dimpling may be related, at least in part, to increases in APV2 and ACLSV titers. Previous studies have shown that, based on the amino acid sequences of the CP, ACLSV isolates have been classified into the types Z1/Z3 and Ta Tao5 from peach samples 23 , and P205 and B6 from apple samples 30 . In our study, seven ACLSV CP sequences from T01, T02, T03, and T04 were all closely related to isolate Z1. This result would seem to exclude a contribution of ACLSV to the dimpled fruit symptoms in T04. According to a previous report, APV2 infection could contribute to leaf symptoms in the GF305 peach indicator 12 , but documented fruit dimpling symptoms have not been reported. We have noted that the NSPaV sequences in this study are more divergent, and that the T04-1 sequence, one of the three NSPaV RdRp gene sequences, was found to be a recombinant. Taken together, we infer that the dimpled fruit symptoms may also result from the synergistic effects of co-infection with NSPaV and APV2. As previously reported, some luteoviruses, such as groundnut rosette assistor virus 34 , serve as "helper" viruses for the transmission of other viruses that cause disease.
PLMVd is the only viroid shared by all five nectarine trees that displayed three leaf disease phenotypes (asymptomatic, bleached, and chlorotic mottle) in this study. Some studies have shown that PLMVd infection can be associated with albinism (peach calico, PC) and green mosaic symptoms, and revealed a close association between the albino phenotype and variants containing an insertion of 12-14 nt, folding into a hairpin capped by a U-rich loop in the proposed PLMVd branched secondary structure 21,35 . It is worth noting that we did not find this 12-14 nt insertion in this study. However, evolutionary analysis of PLMVd shows that the PLMVd sequences associated with symptomatic and asymptomatic trees do not all cluster together, and that there are intra-isolate divergent sequence variants present in every individual symptomatic tree. It has been suggested that divergent sequence variants of PLMVd co-infecting a single tree could contribute to host phenotype.
Biological data directly associating disease symptoms with the viruses identified in this study in fruit trees are scant in the literature. The main reasons for this are that alternative herbaceous hosts are difficult to identify, and viral particles of fruit tree viruses are very difficult to obtain in purified form. Mixed viral communities that co-infect individual trees could increase viral genotypic complexity, with consequences for host pathology. However, increased application of NGS technology has resulted in the recent discovery of a large number of co-infecting viral communities in plants. Typically, this technology is used to identify candidate pathogens that may be associated with disease symptoms in plants 36 . In the present study, several valuable field samples with disease phenotype differences, combined with NGS data and genetic analyses, supported the association between multiple co-infecting viruses and the disease symptoms observed. The data reported in this study will be important for further study of the biological and epidemiological features of virus/viroid interactions in plant hosts. (Scientific Reports (2019) 9:12261; https://doi.org/10.1038/s41598-019-48714-z)

Methods

Plant sources. In the spring of 2014, several 10-year-old trees of nectarine cultivars 'Youtao 1233' and 'Youtao 126' growing in greenhouse #1 in Daliang, Liaoning Province were found to have bleached leaves and rusty stem spots or fruit-pitting symptoms. In the spring of 2017, some 5-year-old trees of the nectarine cultivar 'Zhongyou 4' from greenhouse #2 (also in Daliang) were found to have fruit dimpling symptoms, others had no visible leaf or fruit symptoms, and another group of trees had leaves showing chlorotic mottling. Asymptomatic samples from 2-year-old trees of nectarine cultivar 'Chaoyue 1' from neighboring greenhouse #3 (also in Daliang) were also collected.
The area of each of the three greenhouses is approximately 0.5-1.0 Chinese mu (1 mu = 666.7 square meters). Thirty-six symptomless and symptomatic tissue samples from greenhouses #1, #2, and #3 were collected and stored at −80 °C prior to use in the experiments. Among the 36 samples, several different classes of symptoms were observed simultaneously on trees in greenhouses #1 and #2, which share the same environmental conditions as well as cultivation methods, pesticides, and fertilizers; this allowed us to further study the etiological agent(s) associated with the different disease symptoms. Thus, five tissue samples from five different trees (two 'Youtao 1233' from greenhouse #1 and three 'Zhongyou 4' from greenhouse #2) with or without disease symptoms on leaves and fruits were screened for viruses by sequencing the small RNAs using NGS, as shown in Table 1 and Fig. 1.

Viral communities identified by NGS in the small RNA libraries. Total RNA was extracted from each sample, and the small-RNA libraries were constructed using the NEB Multiplex Small RNA Library Prep kit (NEB, USA) following the manufacturer's recommendations. Unique index codes were added to attribute the individual sequence reads to each sample library. The libraries were size-selected in 6% polyacrylamide gels prior to sequencing on an Illumina HiSeq 2500 SE50 instrument (Biomarker Technologies Co., Ltd), and single-end reads were generated.
The raw read data in fastq format were initially processed using in-house perl scripts. In this step, clean reads were obtained by removing reads containing adapters, reads containing multiple Ns (unknown bases), and low quality reads from the raw data. The reads were trimmed and cleaned by removing sequences smaller than 18 nt or longer than 35 nt. The phred quality scores (Q20 and Q30), GC-content, and sequence duplication level were calculated for the clean data. All downstream analyses were conducted with high quality clean data.
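A minimal sketch of these filtering rules (not the authors' in-house Perl scripts) might look like the following, keeping only reads of 18-35 nt that contain no ambiguous bases; adapter trimming and quality filtering are omitted for brevity.

```python
# Toy read cleaner implementing the length and ambiguity filters described
# above; real pipelines would also trim adapters and drop low-quality reads.
def clean_reads(reads):
    """Yield reads with no Ns and lengths between 18 and 35 nt inclusive."""
    for seq in reads:
        if "N" in seq.upper():
            continue                      # drop reads with unknown bases
        if not 18 <= len(seq) <= 35:
            continue                      # drop too-short or too-long reads
        yield seq

raw = ["ACGT" * 6,                        # 24 nt, kept
       "ACGTN" + "A" * 20,               # contains an N, dropped
       "ACGT"]                            # 4 nt, too short, dropped
print(list(clean_reads(raw)))             # only the 24-nt read survives
```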
The sequence reads were assembled de novo into contigs using the Velvet software with k-mer = 17 37,38 . The contigs obtained were subsequently annotated by BlastN and BlastX searches against the GenBank virus and viroid Reference Sequence Database. sRNA reads that mapped to individual viral genomes were also tabulated to identify candidate viruses present in the analyzed nectarine samples.
Virus and viroid detection with RT-PCR.
Total nucleic acids were extracted from each sample using the RNAprep Pure Plant Kit (Tiangen Biotech (Beijing) Co., Ltd). Seventeen specific primer pairs (Table S2) were designed to amplify genomic regions corresponding to the CP, RdRp, or HSP70h genes from APV1, APV2, APV3, ACLSV, CGRMV, CNRMV, PBNSPaV, PeVD, NSPaV, PaLV, and GRGV, and also the complete genome sequence of PLMVd. The specific primer pairs used to amplify nearly the complete genome of NSPaV are listed in Table S3. Reverse transcription (RT) was performed at 42 °C for 1 h using 1 μL of total RNA and 1 μL of oligo (dT) primer and 6-mer random primers in a 10 μL reaction volume containing Maloney murine leukemia virus (M-MLV) reverse transcriptase (Promega, Madison, WI, USA), according to the manufacturer's protocol. Following RT, PCR assays were performed in 25 μL reaction volumes containing 1.5 μL of the RT reaction, 12.5 μL of 2X Taq Mix [Tiangen Biotech (Beijing) Co., Ltd.], 9.0 μL distilled water, and 1.0 μL (10 pmol) of the forward and reverse primers. The thermocycling conditions were as follows: an initial denaturation step of 5 min at 94 °C, followed by 35 cycles of 30 s at 94 °C, 30 s at 52 °C-55 °C, and 90 s at 72 °C, with a final extension step of 10 min at 72 °C.
RT-PCR products were purified using a PCR purification kit (AXygen), and the resulting DNA fragments were then cloned into the pMD18-T vector (Takara) for sequencing by the Sanger sequencing method. At least three clones of each amplified fragment were sequenced. Sequence reads were assembled using DNAMAN 6.0 (Lynnon Biosoft, Quebec, Canada).
Phylogenetic analyses of the identified viruses/viroids. We amplified complete genome sequences
for PLMVd, nearly complete genome sequences for NSPaV, and full-length or partial sequences of the CP, RdRp, and HSP70h genes for the identified viruses from the five tree samples T01, T02, T03, T04, and T05 using RT-PCR with specific primers (Tables S2 and S3). The PCR products were cloned and sequenced, with at least three clones sequenced from each sample tree. All CP, RdRp, and HSP70h gene sequences from the identified viruses were aligned, and the flanking sequences in the amplified fragments were removed to obtain the full- or partial-length gene sequences for each virus. In total, 17 PLMVd genomes (336 to 338 bp), one nearly complete NSPaV genome (4,578 bp), 11 NSPaV RdRp genes (987 bp), seven ACLSV CP genes (582 bp), six PBNSPaV partial CP genes (963 bp) and six partial HSP70h genes (587 bp), six APV1 CP genes (1,206 bp), three APV2 partial CP genes (1,182 bp), six PaLV partial RdRp genes (981 bp) and three CP genes (647 bp), five PeVD genes (695 bp), three CGRMV CP genes (807 bp), and three CNRMV CP genes (804 bp) were used in the phylogenetic analyses. We downloaded other known complete genome, CP, and RdRp gene sequences from GenBank (www.ncbi.nlm.nih.gov) to determine the phylogenetic relationships with known viruses/viroids. Because many sequences have been deposited in GenBank for most of these viruses/viroids (with the exception of the recently-identified PeVD, PaLV, and NSPaV), after filtering out partial sequences we retrieved only the complete genome sequences homologous to each virus, plus a few representative sequences (from samples with different disease symptoms), from the NCBI nucleotide database for use in phylogenetic tree construction. We aligned the genome or gene sequences using the ClustalW multiple alignment program and calculated a sequence identity matrix using BioEdit 39 with the default parameters. The aligned sequences were checked for potential recombination events using RDP 13,40 .
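The pairwise identity values reported in this study come from BioEdit's sequence identity matrix; conceptually, each entry is the fraction of matching positions between two aligned sequences, ignoring gap columns, as in this hedged sketch (illustrative code, not BioEdit's exact algorithm).

```python
# Illustrative percent-identity computation over two sequences taken from the
# same alignment (equal length, '-' for gaps).
def pct_identity(a: str, b: str) -> float:
    """Percent identical positions, skipping columns where either has a gap."""
    assert len(a) == len(b), "sequences must come from one alignment"
    pairs = [(x, y) for x, y in zip(a.upper(), b.upper())
             if x != "-" and y != "-"]
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

print(pct_identity("ACGTACGT", "ACGTTCGT"))   # 7 of 8 positions match
```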
After sequence alignment, a phylogenetic tree was
Rapid detection of West Nile and Dengue viruses from mosquito saliva by loop-mediated isothermal amplification and displaced probes
Arthropod-borne viruses are major causes of human and animal disease, especially in endemic low- and middle-income countries. Mosquito-borne pathogen surveillance is essential for risk assessment and vector control responses. Sentinel chicken serosurveillance (antibody testing) and mosquito pool screening (by RT-qPCR or virus isolation) are currently used to monitor arbovirus transmission; however, substantial time lags for seroconversion and/or laborious mosquito identification and RNA extraction steps reduce their early-warning value. As a consequence, timely vector control responses are compromised. Here, we report the development of a rapid arbovirus detection system in which adding sucrose to the reagents of loop-mediated isothermal amplification with displaced probes (DP-LAMP) elicits infectious mosquitoes to feed directly upon the reagent mix and expectorate viruses into the reagents during feeding. We demonstrate that RNA from pathogenic arboviruses (West Nile and Dengue viruses) transmitted in infectious mosquito saliva was detectable rapidly (within 45 minutes) without RNA extraction. Sucrose stabilized viral RNA at field temperatures for at least 48 hours, important for transition of this system to practical use. After thermal treatment, the DP-LAMP could be reliably visualized by a simple optical image sensor to distinguish between positive and negative samples based on fluorescence intensity. Field application of this technology could fundamentally change conventional arbovirus surveillance methods by eliminating laborious RNA extraction steps, permitting arbovirus monitoring from additional sites, and substantially reducing the time needed to detect circulating pathogens.
Introduction
Mosquitoes are responsible for transmitting numerous and diverse arthropod-borne pathogens (arboviruses), which have major negative impacts on the health of humans and other animals.

Reverse transcription loop-mediated isothermal amplification (RT-LAMP) is considered a cost-effective and sensitive alternative to RT-qPCR [14]. Advantages include amplification without RNA extraction from the sample [15], a single reaction temperature [16,17], sensitivity (<10 genetic copies of target nucleic acids), and rapidity of results (within 30 min). Results can be visually observed and identified by measuring fluorescence intensity [18], turbidity [19], or color changes [20] through the addition of molecular tags to the reaction mixture. This specificity is advantageous for testing relatively impure biological samples, making the assay suitable for on-site testing. It can be applied to a variety of pathogens, including viruses [10,21], protozoan parasites (e.g., Plasmodium) [22], and bacteria (e.g., Salmonella, Staphylococcus, Vibrio) [23]. It is also useful for multiplexing and real-time monitoring, targeting different viruses in a single reaction [21]. Recently, our group successfully detected viral RNA by RT-LAMP with displaced probes from quaternary ammonium-functionalized paper after honey-induced mosquito salivation, demonstrating the potential feasibility of arbovirus stabilization and detection [24].
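For the fluorescence-intensity readout mentioned above, the endpoint call reduces to comparing a well's mean pixel intensity against a calibrated cutoff. The sketch below is illustrative only; the threshold and intensity values are assumptions, not measurements from this study.

```python
from statistics import mean

# Classify a LAMP reaction endpoint from image-sensor pixel intensities
# (assumed 0-255 scale). In practice, the threshold would be calibrated
# against known positive and negative control reactions.
def call_sample(pixel_intensities, threshold=120.0):
    """Return 'positive' if mean fluorescence exceeds the threshold."""
    return "positive" if mean(pixel_intensities) > threshold else "negative"

print(call_sample([180, 175, 190, 185]))   # bright well -> positive
print(call_sample([40, 35, 50, 45]))       # dim well -> negative
```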
The sensitivity, simplicity, and rapidity of RT-LAMP are attractive for arbovirus surveillance systems. If arboviruses in mosquito saliva can be collected directly into RT-LAMP reagents rather than using the honey card technique, an easy-to-use screening platform could be established to monitor pathogen presence in an area, facilitating timely vector control responses. Here, we combined the advantages of the honey card and RT-LAMP with displaced probes (DP-LAMP) to assess the feasibility of rapid arbovirus detection. In iterative laboratory bioassays, our first objective was to determine the optimum sugar formulation for mosquito acceptance of sweetened DP-LAMP reagents. Our second objective was to validate the sensitivity and stability of arbovirus detection in sweetened DP-LAMP reagents over time. The third objective was to screen arboviruses directly from infectious mosquito saliva by DP-LAMP. Additionally, we aimed to visualize the endpoint of DP-LAMP products using an optical image sensor. This work was performed with WNV and DENV-I, as well as their respective vectors, Culex quinquefasciatus Say and Aedes aegypti L., two of the most important mosquito-borne pathogen systems. This work integrates entomology, molecular biology, and engineering, with the future goal of developing a rapid arbovirus detection platform with low cost and adequate spatiotemporal coverage. This platform will streamline the capacity of mosquito control districts to perform timely arbovirus surveillance, resulting in an improved ability to protect humans and animals against deadly pathogens.
Maintenance of mosquitoes
Laboratory colonies of both Cx. quinquefasciatus and Ae. aegypti were maintained under controlled environmental conditions (27.0 ± 0.5 °C, 80.0 ± 5.0% RH, and a 14:10 (L:D) h photoperiod) at the Florida Medical Entomology Laboratory (FMEL). Mosquito larvae were reared in enamel pans (24.8 cm x 19.7 cm x 3.8 cm) containing 1.0 L of distilled water at low density (fewer than 500 larvae per pan) to improve homogeneity of individual size. Larvae were fed an equal mixture of brewer's yeast and lactalbumin or a diet of fish food, TetraMin™ (Tetra, VA, USA), on a standardized mosquito-rearing schedule [25]. Pupae were collected daily and transferred to 30 mL plastic cups containing distilled water. These cups were placed inside 24.0 cm x 24.0 cm mesh screen cages (BioQuip Products Inc., CA, USA) for adult eclosion. Adults were provided ad libitum access to a 10% sucrose solution via cotton balls in a 30 mL plastic cup placed inside each cage.
Optimum sugar formulation for maximum mosquito feeding
We examined various sweeteners (carbohydrates) commonly found in natural nectar or commercial products to identify which sweetener elicited the highest feeding response in mosquitoes. These sweeteners included corn syrup (Lorann Oils, MI, USA), fructose (Modernist Pantry, ME, USA), glucose (Modernist Pantry, ME, USA), honey (Red Bunny Farms, PA, USA), sucralose (BulkSupplements, NV, USA), and sucrose (Publix, FL, USA). In no-choice assays, we placed 15 female mosquitoes (5-7 days old, laboratory colony) of either Cx. quinquefasciatus or Ae. aegypti, which had been starved for 24 hours, into 470 mL paperboard cups (Webstaurant Store, Lititz, PA, USA) covered with mesh netting. A cotton round (6 cm in diameter) was saturated with 5 mL of a 10% aqueous solution of each sweetener, dyed with 0.1 mL of blue food coloring (McCormick & Company, Inc., MD, USA) as a visual cue for sugar feeding. The saturated cotton round was placed on an acrylic Petri dish inside each paperboard cup. Female mosquitoes were provided ad libitum access to the sweetener solutions over 24 hours in an environmental chamber. As a negative control, we included starved mosquitoes with no access to water or sugar. Five assay cups were tested for each sweetener and species. Subsequently, the mosquitoes were cold-anesthetized at -80˚C, and their individual mass was measured using a microbalance (model Orion Cahn C-33; Thermo Electron Corporation, MA, USA) with 1 μg precision. The degree of sugar engorgement for each female was determined under a dissection microscope. Feeding extent was quantified as the mean mass after feeding and percent engorgement using a qualitative scale (0-4).
We assessed mosquito engorgement by testing various concentrations of sucrose and honey, under the assumption that increased feeding correlates with greater expectoration of saliva. Employing the same protocols as described earlier, we used starved mosquitoes (with no access to water or sugar) as a negative control and provided cohorts of female mosquitoes with concentrations of 5%, 10%, 20%, and 40% honey or sucrose, reflecting sugar concentrations commonly found in nectar [26]. Five assay cups (N = 15 / cup) of mosquitoes were tested with each sugar concentration and species.
To determine mosquito feeding acceptability of sweetened DP-LAMP buffer, 410 μL of either honey or sucrose solution (conc. 50.1%) was added to DP-LAMP reaction mixtures containing 50 μL of 10X isothermal amplification buffer (New England Biolabs, MA, USA), 30 μL of 100 mM MgSO4 (New England Biolabs, MA, USA), and 10 μL of blue food coloring, for a 40% final concentration. The cotton round was saturated with 500 μL of sweetened DP-LAMP buffer and placed on an acrylic Petri dish inside the paperboard cup. Five assay cups (N = 15 / cup) were tested with sucrose or honey solution with or without DP-LAMP reagents. Starved females (no water or sugar) served as a negative control, and a 40% solution of honey or sucrose served as a positive control.
Sensitivity and stability of sweetened DP-LAMP over time
West Nile virus (Indian River County, FL, GenBank: DQ983578.1) or DENV-I (Key West, FL, GenBank: JQ675358.1) was propagated in tissue culture flasks (175 cm²) with confluent monolayers of Vero cells (Vero E6, ATCC CRL-1586). After 1 h of incubation at 37˚C in a 5% CO2 atmosphere, 25 mL of media (199 media with 10% fetal bovine serum, 0.2% amphotericin B (Fungizone), and 2% penicillin-streptomycin) were added to the tissue culture flasks, following previous methods [27]. Tissue culture flasks with virus were harvested after 3 (WNV) or 7 (DENV-I) days of incubation, and the virus was diluted in a ten-fold serial dilution. Viral RNA from the diluted virus was extracted following the procedures of the QIAamp viral RNA mini kit (Qiagen, CA, USA) and was amplified by RT-qPCR (Bio-Rad Laboratories, CA, USA) in 20 μL reaction mixtures that contained the following components: 10.8 μL of PCR Master Mix reagents (Invitrogen, CA, USA), 2.2 μL of DEPC-treated water (Fisher BioReagents, PA, USA), 1 μL of 10 μM of each primer, and 5 μL of RNA template, using methods described elsewhere [28,29]. All viral RNA samples were stored at −80˚C until required for experiments. Premixed DP-LAMP buffer consisted of 12.5 μL of WarmStart 2X MM (NEB, MA, USA), 2.5 μL of WNV or DENV-I 10X DP-LAMP primers and probes (1.6 μM of FIP and BIP; 0.4 μM of LB for WNV or 0.5 μM of LB for DENV-I; 0.5 μM of LF for WNV or 0.4 μM of LF for DENV-I; 0.2 μM of F3 and B3; 0.1 μM of fluorescent FAM-3′-labeled probe and 0.15 μM of Iowa Black FQ-5′-labeled LB/LF probe), 0.5 μL of RNase Inhibitor (NEB, MA, USA), 0.5 μL of Antarctic UDG (NEB, MA, USA), 0.25 μL of ET SSB (NEB, MA, USA), and 0.2 μL of dUTP (Promega, WI, USA) [21,24,30]. An aliquot of 7.55 μL of aqueous sucrose was added to the premixed DP-LAMP buffer to achieve specified concentrations of 0%, 5%, 10%, 20%, or 40%. The mixture was thoroughly homogenized and centrifuged for 3 s. An aliquot of 1.0 μL of viral RNA, containing 4.0 log10 PFU/mL of either WNV or DENV-I, was added as a template to the premixed DP-LAMP buffer. A positive control template containing 4.0 log10 PFU/mL of either WNV or DENV-I and a negative control (NTC) containing nuclease-free water were included in each DP-LAMP buffer without sucrose. The total reaction volume was 25 μL. Premixed DP-LAMP was performed in a thermocycler (Bio-Rad Laboratories, CA, USA) at 65˚C for 30 s (60 cycles), 95˚C for 300 s (1 cycle), and 37˚C for 300 s (1 cycle). Fluorescence intensity from the DP-LAMP assays after thermal treatment was measured in relative fluorescence units (RFU). Primer and probe sequences are listed in S1 Table. To test the duration and stability of the DP-LAMP assay with the addition of sucrose under simulated field conditions, the premixed DP-LAMP buffers, including 10X DP-LAMP primers and probes, were added to 40% aqueous sucrose. One set of premixed DP-LAMP buffers with 40% aqueous sucrose received either a positive control template containing 4.0 log10 PFU/mL of WNV or a non-template control containing nuclease-free water. Other sets of premixed DP-LAMP buffer with nuclease-free water (without sucrose) received the same positive control template or non-template control. The premixed DP-LAMP buffer was stored in an environmental chamber (27.0 ± 0.5˚C, 80.0 ± 5.0% RH) for up to 72 h, and the DP-LAMP assays were performed in a thermocycler at 0, 8, 24, 48, and 72 h. Fluorescence intensity from the DP-LAMP amplicons was measured in RFU.
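As a concreteness check on the recipe above, the per-reaction component volumes sum to the stated 25 μL total, and preparing a master mix for multiple reactions is just a scaled version of the same recipe. The sketch below is illustrative only: the `master_mix` helper and the 10% pipetting overage are our assumptions, not part of the published protocol.

```python
# Per-reaction DP-LAMP recipe from the text above, in microliters.
PER_REACTION = {
    "WarmStart 2X MM": 12.5,
    "10X primers/probes": 2.5,
    "RNase inhibitor": 0.5,
    "Antarctic UDG": 0.5,
    "ET SSB": 0.25,
    "dUTP": 0.2,
    "40% sucrose (aq.)": 7.55,
    "RNA template": 1.0,
}

def master_mix(n_reactions, overage=0.10):
    """Scale each component for n reactions plus a pipetting overage
    (hypothetical helper; the overage fraction is our assumption)."""
    factor = n_reactions * (1.0 + overage)
    return {name: round(vol * factor, 2) for name, vol in PER_REACTION.items()}

total_per_reaction = sum(PER_REACTION.values())  # 25.0 uL, matching the text
mix_for_10 = master_mix(10)
```

The dictionary keys are labels only; in practice the template and sucrose would be added per tube rather than to a shared mix.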
Detection of arboviruses from mosquito saliva by sweetened DP-LAMP
Five- to seven-day-old females of Cx. quinquefasciatus or Ae. aegypti were individually infected by intrathoracic inoculation with 69 nL of Dulbecco's Modified Eagle Medium (DMEM, Gibco, MT, USA) containing 5% EquaFetal (Atlanta Biologicals, GA, USA), 1% PenStrep (Gibco, MT, USA), and 1% L-Glutamine (Gibco, MT, USA), with WNV or DENV-I at a concentration of 4.0 log10 PFU/mL, using a programmable microinjector (Nanoject III, Drummond Scientific, PA, USA) after cold anesthesia for 3 min. The WNV and DENV-I titers utilized in this study fall within the concentration range observed in naturally or artificially infected mosquitoes [31][32][33]. This method permitted us to obtain consistent numbers of infectious females with low variability in virus titer [34]. Control (sham) mosquitoes were intrathoracically inoculated with 69 nL of media only (DMEM). All inoculated mosquitoes were transferred into a 470 mL paperboard cup (Webstaurant Store, Lititz, PA, USA) and covered with a double layer of mesh to prevent escape. The paperboard cups were provided with water-soaked cotton balls for moisture and held within incubators at 27.0 ± 0.5˚C, 80.0 ± 5.0% RH, and a 14:10 (L:D) h photoregime for a 9-day incubation period, which is expected to be sufficient time for dissemination and salivary gland infection for both viruses [34].
No-choice assays were performed by placing a single female mosquito, either WNV-infected Cx. quinquefasciatus or DENV-I-infected Ae. aegypti, starved for 24 h, into a 33 mL clear polystyrene plastic vial. A 24 μL aliquot of sweetened premixed DP-LAMP buffer, including primers and probes, was applied into the cap of an assay tube inserted into the bottom of the plastic vial, where mosquitoes were permitted ad libitum access to the sweetened DP-LAMP buffer and salivated virus-infected saliva directly into the assay tube. Once DP-LAMP buffer feeding was confirmed individually by observation (visualization of blue coloring in the mosquito crop), we screened for viruses in the sweetened DP-LAMP buffer using the DP-LAMP assay. Then, virus-infected mosquitoes were anesthetized using ice, and wings and legs were removed for saliva collection. The proboscis of each female was inserted into a microhematocrit capillary tube (Fisherbrand, Houston, TX, USA) containing 100 μL of mineral oil (Cargille, NJ, USA) and was left to salivate for at least 45 min at room temperature. The capillary tube contents containing mosquito saliva and virus were expelled into a polypropylene centrifuge tube with 300 μL of DMEM. Viral RNA extractions were performed using the QIAamp Viral RNA Kit to screen for viruses by RT-qPCR. The two methods were compared for sensitivity and specificity. Samples returning a cycle threshold (CT) value of ≤38 were considered putatively positive.
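The RT-qPCR calling rule above (putatively positive when CT ≤ 38) can be stated compactly in code. This is a minimal illustrative sketch; the sample IDs and CT values are invented, and `call_sample` is a hypothetical helper, not part of the study's pipeline.

```python
# Calling rule from the text: CT <= 38 is scored as putatively positive.
# None denotes no amplification (no CT returned by the instrument).
CT_CUTOFF = 38.0

def call_sample(ct):
    """Return True (putative positive) when a CT exists and is <= cutoff."""
    return ct is not None and ct <= CT_CUTOFF

# Invented CT values for illustration.
samples = {"A": 31.2, "B": 38.0, "C": 39.5, "D": None}
calls = {sample_id: call_sample(ct) for sample_id, ct in samples.items()}
# calls -> {"A": True, "B": True, "C": False, "D": False}
```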
Visualization of DP-LAMP products by an optical image sensor
A blue light transilluminator was designed and manufactured using a 3D printer (Ender-3 S, CREALITY, China) to capture an image of the DP-LAMP amplification for arbovirus detection based on turbidity, bioluminescence, and target-specific probes (S3 Fig). Two 475 nm LEDs (XPE BBL-L1-0000-00301-SB01, New Energy Ltd, NC, USA), one at each end of the enclosure, generated the blue light. The transilluminator was placed in a 220 x 200 x 100 mm cardboard box with a rectangular hole in its lid. The box supported a cellphone camera that viewed the samples through an amber 570 nm filter (2422 amber acrylic sheet, ePlastics, San Diego, CA, USA) covering the hole. After thermal treatment, we captured optical images of DP-LAMP products in polypropylene centrifuge tubes with or without sunlight exposure at various WNV titers (i.e., 2, 3, 4, and 5 log10 PFU) over multiple time points (i.e., 0, 15, 30, and 60 min). OpenCV with Python was used to measure hue (H-value), saturation (S-value), and brightness (B-value) on a range from 0 (black) to 100 (white).
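As an illustration of the colour measurement described above, the sketch below computes mean hue, saturation, and brightness over a list of RGB pixels and rescales each to the 0-100 range. It uses the standard-library `colorsys` module as a stand-in for the authors' OpenCV script (which we have not seen), and the pixel data are synthetic.

```python
import colorsys

# Illustrative stand-in (not the authors' exact OpenCV code) for measuring
# mean hue (H), saturation (S), and brightness (B) over a cropped tube
# region, rescaled to 0-100. Pixels are (r, g, b) tuples in 0-255.
def mean_hsb(pixels):
    n = len(pixels)
    h_sum = s_sum = v_sum = 0.0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        h_sum, s_sum, v_sum = h_sum + h, s_sum + s, v_sum + v
    return (h_sum / n * 100.0, s_sum / n * 100.0, v_sum / n * 100.0)

# Synthetic pure-green patch as a stand-in for a fluorescent-positive ROI.
green_patch = [(0, 255, 0)] * 100
h_val, s_val, b_val = mean_hsb(green_patch)
# Pure green sits one third of the way around the hue circle,
# fully saturated and fully bright.
```

With real photographs, the same averaging would be applied to the cropped tube region after decoding the image; only the colour-space conversion backend would change.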
Statistical analysis
Differences in mean mass change in mosquitoes were determined by analysis of variance (ANOVA) followed by post hoc pairwise multiple comparisons. Kruskal-Wallis non-parametric tests were used to assess statistical significance of percent engorgement between treatment groups. All statistical procedures were conducted in JMP Statistics, Version 15.0 (SAS Institute Inc., Cary, NC, USA). Alpha was set at 0.05 for all statistical tests.
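The study ran its ANOVA in JMP; as a hedged illustration of what the one-way ANOVA comparison computes, the sketch below derives the F statistic by hand for invented mass-change data. It does not reproduce the post hoc pairwise comparisons or the Kruskal-Wallis test.

```python
# One-way ANOVA F statistic computed from scratch (illustration only;
# the study used JMP, and the data below are invented).
def one_way_anova_F(groups):
    k = len(groups)                       # number of treatment groups
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical mass-change data (mg) for three sweetener treatments.
control = [0.1, 0.2, 0.1, 0.2]
sucrose = [0.9, 1.0, 1.1, 1.0]
honey = [1.0, 1.1, 1.2, 1.1]
F = one_way_anova_F([control, sucrose, honey])  # large F -> groups differ
```

The F value would then be compared against the F distribution with (k−1, n−k) degrees of freedom at α = 0.05.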
Results and discussion
We determined an optimal sugar formulation as a critical parameter in the development of the sweetened DP-LAMP assay to induce maximum feeding and associated salivation by measuring mosquito mass and scoring engorgement volume; we found greater engorgement at higher concentrations (20-40%) of honey and sucrose (both p < 0.0001) (Fig 1A and 1B). Both male and female mosquitoes feed on a variety of sugar sources acquired from (extra)floral nectar [35], honeydew [36], fruits [37], plant tissues [38], and possibly vertebrate blood (i.e., glucose) [39]. Mosquitoes can differentiate between sugar diets through sugar-sensitive neurons located in sensilla on their proboscis and legs when given a selection of sugar sources [40]. This distinction is potentially associated with nutritional value and has significant effects on various aspects of their biological fitness, such as flight performance (e.g., swarming and host-seeking) [41], insemination rates [42], longevity [43], immune response [44], and vectorial capacity (resulting in reduced host blood meal size and frequency) [45]. The composition and concentration of floral nectar vary with plant species, age, and environmental conditions (e.g., temperature and humidity) [25], but the common range of sugar concentrations found in nature is 10 to 50% [25,46]. Several studies showed that aroma volatiles increased with sucrose concentration in dynamic headspace assays [47]. Electrophysiological analysis also identified that higher sucrose concentrations induced the maximal response of sugar-sensitive neurons in sensilla [48]. Although we did not determine mosquito sugar preference by choice assays, our no-choice assays indicate that trapped females will successfully feed upon sweetened LAMP reagents. The tendency to feed more on high concentrations of sugars in nature could be explained by evolutionary adaptation driven by two factors, energy and foraging costs [49], because higher sugar concentrations yield more energy and nutritional value for mosquito activities while reducing metabolic and predation costs. Mosquitoes were found to salivate more into highly concentrated nectars because they are too viscous to ingest [45], which is beneficial for an arbovirus detection platform with sweetened DP-LAMP, assuming that more expectorated saliva positively relates to the amount of virus. We also found that mosquitoes fed on sweetened DP-LAMP buffer (honey or sucrose) were significantly higher in mass and engorgement (p < 0.0001) compared to those provided DP-LAMP buffer alone (Fig 2A and 2B). This observation is significant because higher engorgement levels are advantageous: we assume that greater engorgement leads to more saliva being deposited, resulting in a larger amount of virus available for testing. This, in turn, reduces the risk of false-negative results, ensuring an adequate quantity of virus for detection using DP-LAMP.
Through experimental screening for WNV and DENV-I, we determined that the DP-LAMP assay, when supplemented with various concentrations (0%-40%) of sucrose solution, effectively detects viral RNAs with high sensitivity and stability (Fig 3A). This capability enhances the feasibility of using DP-LAMP as a molecular detection tool for arboviruses. Our preliminary study showed that the DP-LAMP assay could detect 10−2 PFU of WNV and 10−3 PFU of DENV-I, suitable lower threshold values for arbovirus detection (S2 Fig). In a recent detailed comparison between DP-LAMP and RT-qPCR, a study by Burkhalter et al. [50] supports our results: DP-LAMP detected the same number of positives as RT-qPCR in laboratory assays and in mosquito pools from the field, and DP-LAMP specificity was equivalent to that of RT-PCR. Most recently, a premixed DP-LAMP assay containing reagent/enzyme mixtures coupled with target-specific fluorescent tags detected multiple arboviruses, implemented with relatively impure biological samples (e.g., unprocessed urine) [21]. Importantly, viral nucleic material can be detected even from intact virus isolated in cell culture media without the need for an RNA isolation step, making the assay advantageous for field use and in laboratory research settings. We also found that the sensitivity and specificity of sweetened DP-LAMP over various storage periods (0-48 h) under simulated field conditions (27.0 ± 0.5˚C, 80.0 ± 5.0% RH for up to 48 h) were significantly higher than without the addition of aqueous sucrose (Fig 3B). This result is supported by Lee et al. [51] and Shukla [52], who indicated that sugars (e.g., sucrose) help preserve enzymatic activity without degradation of efficacy and that DP-LAMP reagents can remain stable for up to 14 days at room temperature. To date, RT-qPCR is the gold standard for sample analysis, but its enzyme-based reagents must be maintained in a cold chain, a limitation for field surveillance and epidemiology of arbovirus detection. We expect that the addition of aqueous sucrose will stabilize DP-LAMP efficacy, maintaining functionality and stability for an extended period during full field deployment of the arbovirus detection platform under hot and humid weather conditions.
We validated the sensitivity and specificity of viral RNA detection on sweetened DP-LAMP reagents by allowing infectious female mosquitoes to feed. Subsequently, we confirmed the results by RT-qPCR on saliva collected from the same mosquitoes used in the DP-LAMP assay. Pairwise alignment of our DP-LAMP results demonstrated that viral RNAs detected directly from mosquito saliva showed 92.0% agreement (positive or negative) for WNV and 85.0% for DENV-I with the RT-qPCR results (Fig 3C and S2 Table). The two false-negative amplifications of WNV and DENV-I by the DP-LAMP assay, which were positive by capillary-tube saliva collection and subsequent RT-qPCR, were likely due to insufficient feeding and expectoration in the sweetened LAMP assay. It was not feasible to determine whether a female that probed the sweetened LAMP buffer was actually feeding or merely probing the fluid. The false-positive (N = 1) DENV-I DP-LAMP sample may be due to off-target amplicons [53] and/or non-specific amplification caused by displaced probes, particularly given the fluorescent-strand and turbidity-based detection artifacts that have been observed in LAMP assays [54]. Our DP-LAMP assay consistently yielded optimal results through adjustment of dNTP, primer, and Mg2+ concentrations, which is crucial for reducing false positives and negatives [55]. This adjustment is essential to avoid potential over- or underestimation of arbovirus frequency in a given area. RT-qPCR on mosquito whole bodies may not accurately reflect the true transmission risk in an area because it measures only whole-body infection rates; for human transmission to occur, the arbovirus must propagate within the mosquito and disseminate to secondary organs, including the salivary glands, before it can be introduced into humans through infected mosquito bites during blood-feeding. Therefore, the DP-LAMP assay presented in this study simplifies surveillance and more accurately reflects transmission events in the field through measurements of infectious vectors. The current approach has the potential to be integrated into an automated arbovirus detection system within a mosquito trap. The process comprises the following steps: 1) capturing vector mosquitoes alive in a collection chamber, 2) offering sweetened DP-LAMP for mosquito feeding, 3) amplifying the viral genome using a heating block, 4) screening fluorescence intensity through a light transilluminator and camera, and 5) transmitting the results to a web interface via a cellular modem.
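The agreement figures reported above (92.0% for WNV, 85.0% for DENV-I against RT-qPCR) come from tabulating paired calls into a 2x2 table. The sketch below shows that computation on invented counts chosen to mimic the WNV case (two false negatives, no false positives); it is illustrative only, not the study's raw data.

```python
# Score paired DP-LAMP vs. RT-qPCR calls into a 2x2 table and derive
# percent agreement, sensitivity, and specificity, treating RT-qPCR as
# the reference method (illustrative data, not the study's).
def confusion(lamp_calls, qpcr_calls):
    tp = sum(l and q for l, q in zip(lamp_calls, qpcr_calls))
    tn = sum((not l) and (not q) for l, q in zip(lamp_calls, qpcr_calls))
    fp = sum(l and (not q) for l, q in zip(lamp_calls, qpcr_calls))
    fn = sum((not l) and q for l, q in zip(lamp_calls, qpcr_calls))
    n = tp + tn + fp + fn
    return {
        "agreement": (tp + tn) / n,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# 25 hypothetical paired samples: 2 false negatives, no false positives.
qpcr = [True] * 15 + [False] * 10
lamp = [True] * 13 + [False] * 2 + [False] * 10
stats = confusion(lamp, qpcr)  # agreement 23/25 = 0.92
```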
We employed a blue light transilluminator to visualize the endpoint of DP-LAMP products under various conditions: (1) sunlight exposure; (2) a gradient of titers using 10-fold serial dilutions of viral RNA (i.e., 2, 3, 4, and 5 log10 PFU); and (3) different time points (i.e., 0, 15, 30, and 60 min). Our results showed that DP-LAMP products with WNV templates (4 log10 PFU/mL), regardless of conditions after a 65˚C thermal treatment, produced 19.3% brighter green fluorescence compared to negatives, making it possible to distinguish between positive and negative samples (S3 and S4 Tables). Non-ionizing (UV) radiation is known as a major factor limiting excited fluorescence through photobleaching [56]. However, we confirmed that fluorescence generated by strand-displacing activity had remarkable photostability and was consistent for up to 60 min. In addition, fluorescence could readily indicate the presence or absence of these arboviruses regardless of viral titer, although it was not feasible to distinguish variations in titer among positive samples. Recently, colorimetric detection of the DP-LAMP reaction has been proposed as a diagnostic tool for arboviruses, an alternative to fluorescence intensity and turbidity-based detection [21,57]. This technique allows positive and negative amplifications to be distinguished immediately based on pH-induced color changes (e.g., violet for a negative sample to sky blue for a positive sample) under natural light. One major drawback is that lower viral loads decrease the overall sensitivity of colorimetric assays [58], making it difficult to distinguish between positive and negative samples. The conventional diagnostic method (e.g., RT-qPCR) for arboviruses consists of three major steps: collection and identification of mosquito specimens, isolation of total RNA, and detection of the amplified viral genome by RT-qPCR [13]. The latter typically takes at least 2 h to confirm the result. This procedure also includes several requirements, including cold chain management. Therefore, the increased testing burden, particularly during peak mosquito prevalence, limits early detection of arbovirus transmission, which is critical for planning and deploying mosquito control actions. The sensitivity, simplicity, and rapidity of our optical image system (S3 Fig) are greatly advantageous for field deployment under hot and humid environments. Also, our low-cost pre-screening method, costing under $10 and requiring no complex optical system (e.g., spectrophotometer), provides an excellent option for arbovirus detection, especially in endemic low- and middle-income countries.
We show here that sweetened DP-LAMP assays (S4 Fig) were capable of inducing mosquito feeding and salivation, collecting arboviruses from infectious mosquito saliva, and detecting viral RNAs (WNV and DENV-I) with high sensitivity and specificity, enabling robust identification of positive samples based on fluorescence intensity. This study suggests that our simple and reliable method has the potential to be easily integrated into existing trapping devices, significantly reducing the time and effort required for disease surveillance, including sample processing, reaction steps, and result interpretation, all without the need for RNA extraction. Future work will involve developing sweetened DP-LAMP assays for additional arboviruses, such as CHIKV, yellow fever virus, and EEEV. It will also include field testing a prototype mosquito trap integrating various components, including sweetened DP-LAMP, a heating block, a light transilluminator, a camera, and a cellular modem. This testing is crucial for assessing the usability and feasibility of an automated arbovirus detection platform. The exploitation of sugar-feeding in an arbovirus detection system using the DP-LAMP assay has the potential to augment or perhaps even replace sentinel chickens for arbovirus surveillance if the techniques are implemented by vector control districts.
Fig 1. Palatability of various sweeteners for vector mosquitoes. (A) Mean mass and engorgement of sweetener solutions for Cx. quinquefasciatus and Ae. aegypti. Engorgement score (right y-axis) represents observable sugar solution (dyed blue) in the abdomen. Mean mass (left y-axis) recorded after 24-h access to treatments. Different letters indicate statistical significance (p < 0.05) by ANOVA with post hoc test. Error bars denote standard deviations. (B) Effect of honey and sucrose concentrations on mass and engorgement by Cx. quinquefasciatus and Ae. aegypti. In no-choice assays, females were provided access to honey or sucrose in 0%, 5%, 10%, 20%, or 40% aqueous solution for 24 h. Error bars denote standard deviations. The pictures of mosquitoes indicate the 0-4 ranking scale measuring sugar solution engorgement. Score 0: no visible sugar solution present; Score 1: sugar solution visible only in the anterior and ventral parts; Score 2: less than 30% engorged with sugar solution; Score 3: approximately half full with sugar solution; Score 4: nearly completely engorged with sugar solution. https://doi.org/10.1371/journal.pone.0298805.g001
Fig 2. Palatability of sweetened DP-LAMP buffer for mosquitoes. Mean mass and engorgement of (A) Cx. quinquefasciatus and (B) Ae. aegypti on DP-LAMP buffer with and without honey (left panels) or sucrose (right panels). Engorgement score (right y-axis, blue bars) represents observable sugar solution (dyed blue) in the abdomen. Mean mass (left y-axis, black bars) recorded after 24-h access to treatments. In no-choice assays, females were provided access to DP-LAMP buffer, sweetened DP-LAMP (40% honey or sucrose), 40% sucrose or honey in aqueous solution, or control for 24 h. Different letters indicate statistical significance (p < 0.05) by ANOVA with post hoc test. Error bars denote standard deviations. https://doi.org/10.1371/journal.pone.0298805.g002
Fig 3. Sensitivity and specificity of sweetened DP-LAMP for arbovirus detection. (A) Effect of sugar concentration on detection of WNV and DENV-I using DP-LAMP. Relative fluorescence units (RFUs) corresponding to cycle number (time) of West Nile virus (WNV) or Dengue-I virus (DENV-I) amplification in the DP-LAMP assay mixed with various concentrations of sucrose solution. (B) Effect of sugar and incubation on stability and detection of WNV using DP-LAMP, incubated at 27.0 ± 0.5˚C for 0, 8, 24, or 48 h. Sweetened DP-LAMP (+sucrose) indicates the reaction buffer mixed with 40% sucrose solution. DP-LAMP (-sucrose) indicates the reaction buffer mixed with nuclease-free water. (C) Detection of WNV and DENV-I directly from infectious mosquito saliva using sweetened DP-LAMP. Mosquitoes were infected with WNV 4.0 log10 PFU (Cx. quinquefasciatus) or DENV-I (Ae. aegypti) via microinjection followed by 9-day incubation. Arrows denote false positive (FP) or false negative (FN) samples; numbers correspond to S2 Table. https://doi.org/10.1371/journal.pone.0298805.g003
T. marneffei infection complications in an HIV-negative patient with pre-existing pulmonary sarcoidosis: a rare case report
Background: Talaromyces marneffei (T. marneffei) is a thermally dimorphic pathogenic fungus that often causes fatal opportunistic infections in human immunodeficiency virus (HIV)-infected patients. Although T. marneffei infections have been increasingly reported among non-HIV-infected patients in recent years, no cases of T. marneffei infection have been reported in pulmonary sarcoidosis patients. Here, we describe a T. marneffei infection in an HIV-negative patient diagnosed with pulmonary sarcoidosis. Case presentation: A 41-year-old Chinese man with pre-existing pulmonary sarcoidosis presented with daily hyperpyrexia and cough. Following a fungal culture from bronchoalveolar lavage (BAL), the patient was diagnosed with T. marneffei infection. A high-resolution computed tomography (HRCT) chest scan revealed bilateral diffuse miliary lung nodules, multiple patchy exudative shadows in the bilateral superior lobes and right inferior lobe, an air bronchogram in the consolidation of the right superior lobe, multiple hilar and mediastinal lymphadenopathies, and local pleural thickening. After 3 months of antifungal therapy, the patient's pulmonary symptoms rapidly disappeared and his physical condition improved markedly. A subsequent CT re-examination demonstrated that the foci had been remarkably absorbed after treatment. The patient is receiving follow-up therapy and assessment for a cure. Conclusion: This case suggests that clinicians should pay more attention to non-HIV-related lung infections in patients with pulmonary sarcoidosis. Early diagnosis and treatment with antifungal therapy can improve the prognosis of T. marneffei infection.
Background
Talaromyces marneffei (Penicillium marneffei), first discovered in 1956, is a thermally dimorphic fungus that can cause severe infections in epidemic regions of Southeast Asia, particularly in immunocompromised patients [1][2][3]. It develops into a mycelium at 25°C and into yeast at 37°C, but only the yeast-like form has pathogenic potential. Patients with HIV/AIDS have been reported to be vulnerable to T. marneffei [4,5]. However, a growing number of T. marneffei-infected patients without HIV have been reported in recent years [6][7][8]. Among non-HIV-infected patients, to our knowledge, those who suffer from long-standing pulmonary sarcoidosis have rarely been reported to be subject to T. marneffei infection. We herein describe the details of the first such case worldwide.
Case presentation
A 41-year-old man, a native of Cangnan County in Zhejiang province, southeast China, was admitted to our hospital in April 2017 because of a 3-week history of daily hyperpyrexia and cough with sputum. Multiple pulmonary nodules and bilateral hilar lymphadenopathy had first been found on chest CT (Fig. 1a) 7 years earlier. The patient was diagnosed with pulmonary sarcoidosis based on the results of a transbronchial needle aspiration (TBNA) and transbronchial lung biopsy (TBLB), which revealed lymphocytes, columnar epithelial cells, and clusters of epithelial-like cells. In the following years, he received follow-up chest CT examinations and corticosteroid treatment irregularly. The patient met the ATS/WASOG diagnostic criteria for sarcoidosis because there had been no progression of the lesions in recent years. With this pre-existing pulmonary sarcoidosis, he had been diagnosed with progression of pulmonary sarcoidosis at a hospital in Shanghai 12 days prior. At that time, he was examined with chest CT and central ultrasound bronchoscopy. The chest CT showed space-occupying lesions of the right superior lobe, probably a malignant tumour; mediastinal and right hilar lymphadenopathy; and plaques and nodules disseminated throughout both lungs, suggestive of pneumoconiosis and metastasis (MT) (Fig. 1b). Compared to the initial chest CT performed in 2015 (Fig. 1a), Fig. 1b shows increased miliary pulmonary nodules and a new pulmonary consolidation. Central ultrasound bronchoscopy revealed a nodular projection on the surface of both superior lobar bronchi and stenosis in the right superior lobar bronchus, especially the right apical segment (Fig. 2a). The patient underwent transbronchial needle aspiration (TBNA) six times where ultrasound revealed a tumour outside the right primary bronchus and lymphadenectasis in stations 11R and 10L.
The pathology exam found fibrous tissue hyperplasia accompanied by apparent infiltration of monocytes and lymphocytes. There was no evidence of non-caseating epithelioid granuloma. Moreover, eosinophils had infiltrated some areas. After 3 days of prednisone and levofloxacin, the fever and cough persisted with no clinical improvement; even worse, skin lesions (Fig. 3a) erupted on his back.
The patient was pale and had intermittent high fever on the day of admission to our hospital. Born and raised in Cangnan, he denied a residential history in any epidemic region. After his admission, imipenem-cilastatin was promptly used against the infection. Immediate HRCT revealed bilateral diffuse miliary lung nodules, multiple patchy exudative shadows in the bilateral superior lobes and right inferior lobe, an air bronchogram in the consolidation of the right superior lobe, multiple hilar and mediastinal lymphadenopathies, and local pleural thickening (Fig. 1c). The image had not changed much compared to the previous one. The results of the subsequent routine laboratory examinations are shown in Table 1. Microbiology analysis showed that repeated sputum smears, sputum cultures, and blood cultures were negative. Combining these clinical manifestations and biochemical analyses, a diagnosis of tuberculosis could not be excluded. However, the T-spot was negative, and acid-fast bacilli were not present. It was also difficult to explain the extremely elevated IgE and eosinophils in the blood, as well as the eosinophil infiltration in the bronchus, by tuberculosis alone. After providing informed consent, the patient underwent bronchoscopy again, which revealed an unevenness of the trachea in the subglottic region as well as protrusions on the tracheal wall, especially in the right superior lobar bronchus (Fig. 2b). A subsequent (1,3)-β-D-glucan (G) assay was positive, and a smear of the BAL was positive for fungus. All of the evidence above pointed to a higher likelihood of fungal infection. Along with the lack of clinical improvement after 4 days of antibacterial therapy, the patient was suspected of having pulmonary aspergillosis, which is endemic in the Cangnan region. He was immediately treated with voriconazole at a dosage of 200 mg every 12 h via intravenous administration, starting on April 14, 2017.
Ultimately, the fungal culture of bronchoalveolar lavage fluid confirmed the diagnosis of T. marneffei infection on April 20 (Fig. 3b). His fever returned to normal, and his respiratory signs, as well as his skin lesions, gradually disappeared after an 8-day treatment. Chest CT re-examination showed that the lung lesions were markedly absorbed after 3 months (Fig. 1d). He continued to receive follow-up antifungal treatment.
Discussion
Talaromyces marneffei, the only known dimorphic fungus of the genus Penicillium, was first isolated in 1956 in Vietnam from the bamboo rat Rhizomys sinensis [9]. A diagnostic characteristic of T. marneffei is mould-to-yeast conversion or phase transition, which is thermally regulated. Since the first natural T. marneffei infection was reported in 1973 [10], it has been increasingly observed both in AIDS patients and in HIV-negative individuals in recent years. Among non-HIV-infected individuals, pulmonary T. marneffei infection has been reported in patients with a history of pulmonary tuberculosis [6] or chronic obstructive pulmonary disease (COPD) [11]. However, to our knowledge, the infection has not been reported in patients with a history of pulmonary sarcoidosis.
In this article, we present the first such case: a confirmed diagnosis of T. marneffei infection in a non-HIV-infected patient with pre-existing pulmonary sarcoidosis. The main route of transmission of T. marneffei is inhalation of the infectious agent; transmission through direct animal contact is rare. The typical clinical manifestations are fever, weight loss, skin lesions, generalized lymphadenopathy, hepatosplenomegaly, and respiratory signs, but the severity of the disease depends on the patient's immune status [12,13]. The patient in this case was young and non-HIV-infected, but his pulmonary immunity was probably impaired by long-standing pulmonary sarcoidosis. As early as 1988, Deng et al. [14] reported that southern China was one of the endemic regions for T. marneffei. Specifically, the clinical features, namely hyperpyrexia, sputum-producing cough, persistently elevated IgE and eosinophils in the blood, eosinophil infiltration in the bronchus, a positive BAL-G assay (G assay of BAL), and T-spot negativity together with the failure to reveal acid-fast bacilli, pointed to a possible pulmonary fungal infection. As the accuracy of the BAL-G assay is marginal rather than absolutely specific for invasive fungal diseases (IFDs), the results should not be interpreted alone but should be used as part of a full assessment together with clinical features, imaging findings and other laboratory results in the diagnosis of IFDs [15]. Finally, the T. marneffei infection was confirmed by bronchoalveolar lavage culture. In addition to BAL, commonly used clinical specimens in the literature include bone marrow aspirate, blood, lymph node biopsies, skin biopsies, skin scrapings, sputum, pleural fluid, liver biopsies, cerebrospinal fluid, pharyngeal ulcer scrapings, palatal papule scrapings, urine, stool samples, and kidney, pericardium, stomach or intestine specimens [16]. Different from previously reported cases of pulmonary T.
marneffei infection in non-HIV-infected patients, the pre-existing pulmonary sarcoidosis masked the clinical features of T. marneffei and could easily have misled us toward a diagnosis of progression of the original disease or of lymphoma.
The T. marneffei presentation upon chest CT is non-specific, as displayed by multiple patchy exudative shadows, pulmonary consolidation, nodular shadows, a ground-glass appearance, miliary lesions, and nodular masses, commonly accompanied by mediastinal and hilar lymphadenopathy and sometimes by cavitary lesions [17]. Compared to the chest CT (Fig. 1a) taken 2 years prior, the chest CT (Fig. 1b and c) showed the progression of pulmonary nodules and new consolidation lesions. In this respect, we were more likely to suspect the progression of pulmonary sarcoidosis and to ignore the possibility of fungal infection. The chest CT (Fig. 1d) re-examined after 3 months of antifungal treatment showed that the lung lesions, as well as some pulmonary nodules, were markedly absorbed. However, the mediastinal lymphadenopathy did not improve in any of the nodal groups, as shown in Fig. 1. There was a strong likelihood that the lymphadenopathy was due to long-standing pulmonary sarcoidosis.
The non-specific presentation of T. marneffei highlights the importance of the rapid diagnosis and treatment of this potentially life-threatening mycosis. T. marneffei is susceptible to itraconazole and amphotericin B in vitro [18,19]. A study from China revealed that voriconazole had the lowest MIC (range, 0.004-0.25 mg/L) in comparison to other antifungal agents, and the results showed that voriconazole and itraconazole are active against T. marneffei isolates in vitro [20]. However, a documented study reported that a single dose of itraconazole for the treatment of T. marneffei infection in HIV-infected patients was ineffective [21]. A retrospective study evaluating the efficacy and safety of voriconazole to treat patients with T. marneffei infection suggested that voriconazole was an effective, well-tolerated therapeutic option for this disease [22]. Taken together, we preferred voriconazole as the antifungal drug for this case. Indeed, the patient recovered rapidly, and the lung lesions were markedly absorbed after treatment.
Conclusions
In summary, our study reports a case of T. marneffei infection in a non-HIV-infected patient with a history of pulmonary sarcoidosis in an endemic fungal area. This study invites clinicians to consider T. marneffei infection in non-HIV-infected patients with underlying diseases because early diagnosis and proper treatment lead to a reduction in the mortality associated with T. marneffei.
"year": 2018,
"sha1": "7f4ef33ae4a08a78f7e41491e4255630c3d7e21d",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-018-3290-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f4ef33ae4a08a78f7e41491e4255630c3d7e21d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Categorical and anti-categorical approaches to US racial/ethnic groupings: revisiting the National 2009 H1N1 Flu Survey (NHFS)
Abstract

Intersectionality theory calls for the understanding of race/ethnicity, sex/gender and class as interlinked. Intersectional analysis can contribute to public health both through furthering understanding of power dynamics causing health disparities, and by pointing to heterogeneities within, and overlap between, social groups. The latter places the usefulness of social categories in public health under scrutiny. Drawing on McCall, we relate the first approach to categorical and the second to anti-categorical intersectionality. Here, we juxtapose the categorical approach with traditional between-group risk calculations (e.g. odds ratios) and the anti-categorical approach with the statistical concept of discriminatory accuracy (DA), which is routinely used to evaluate disease markers in epidemiology. To demonstrate the salience of this distinction, we use the example of racial/ethnic identification and its value for predicting influenza vaccine uptake compared to other conceivable ways of organizing attention to social differentiation. We analyzed data on 56,434 adults who responded to the NHFS. We performed logistic regressions to estimate odds ratios and computed the area under the receiver operating characteristic curve (AU-ROC) to measure DA. Above age, the most informative variables were education and household poverty status, with race/ethnicity providing minor additional information. Our results show that the practical value of standard racial/ethnic categories for making inferences about vaccination status is questionable, because of the high degree of outcome variability within, and overlap between, categories. We argue that, reminiscent of potential tension between categorical and anti-categorical perspectives, between-group risk should be placed and understood in relationship to measures of DA, to avoid the lure of misguided individual-level interventions.
Introduction
Over recent decades, intersectionality theory, which calls for understanding of categories like race/ethnicity, sex/gender and class as interlinked rather than as separate, has been advocated and sometimes integrated into studies of population health (Bauer, 2014). McCall (2005) distinguishes between categorical intersectionality research, which aims to analyze how interlocking systems of oppression drive disparities between existing social groupings, and anti-categorical intersectionality, which critiques categorization per se, as use of social categories may in itself contribute to perpetuation, creation or essentialization of difference between groups. In epidemiology, categorical intersectionality can inform the field's traditional mapping of health disparities, through the use of intersectional social categories, in measurement of between-group average risk (Bauer, 2014). In contrast, anti-categorical intersectionality poses a greater challenge to epidemiology since it urges researchers to make explicit the variability within, and overlap between, socially defined groups; and to consider implications of this heterogeneity for the usefulness of social categories and the design of public health policies. However, the important tensions between average risk and heterogeneity, which can be related to potential friction between categorical and anti-categorical perspectives, are seldom teased out in epidemiology, which may result in ambiguous recommendations to researchers and policy-makers regarding the use and value of social categories. For example, Lofters and O'Campo (2012, p.
105) ask epidemiologists to use quantitative intersectional methodologies to 'highlight the most vulnerable subgroups where action is most urgently needed and ensure the best use of resources for ameliorating inequities' and to consider heterogeneity within socially defined groups to avoid the lure of misguided individual-level interventions, but without discussing the potential conflict between the two recommendations.
This article seeks to further a conceptual and methodological discussion on use of categorical and anti-categorical approaches in studies of population health and US racial/ethnic groupings. We do this by juxtaposing, on the one hand, a categorical approach with traditional between-group risk calculations (e.g. odds ratios, ORs), and, on the other hand, the anti-categorical approach with the statistical concept of discriminatory accuracy (DA), which is routinely used to evaluate the performance of diagnostic, prognostic, or screening markers in epidemiology (Pepe, Janes, Longton, Leisenring, & Newcomb, 2004). The underpinning idea of the concept of DA is that, to be suitable for individual-level inference, most exposure categories, whether social, geographic, or biological, need to be robust in their capacity to discriminate between individuals who do and do not demonstrate the outcome of interest (Merlo, 2014;Merlo & Wagner, 2013). Therefore, measures of DA are highly relevant in public health even if they are still infrequently reported in the literature Mulinari, Bredström, & Merlo, 2015;Wemrell, Mulinari, & Merlo, 2017a). We demonstrate the salience of this approach using the empirical example of US racial/ethnic identification and its value for predicting non-receipt of seasonal influenza vaccine compared to other conceivable ways of organizing attention to social differentiation in public health.
In the US context, a large number of studies have investigated how seasonal influenza vaccine uptake is linked to socioeconomic and demographic factors such as household income, educational level, age, gender, and race/ethnicity (Ding et al., 2011; Linn, Guralnik, & Patel, 2010; Vlahov, Bond, Jones, & Ompad, 2012). In this literature, some studies focus specifically on racial/ethnic disparities (Lu, Singleton, Euler, Williams, & Bridges, 2013; Lu et al., 2014, 2015). Notably, the US Centers for Disease Control and Prevention (CDC) regularly publishes influenza vaccination rates using a four-level race/ethnicity standard: Hispanic (any race); non-Hispanic white only; non-Hispanic black only; and non-Hispanic, all other races or multiple races (CDC, 2011). Over the last two decades, data have consistently revealed higher influenza vaccination coverage among non-Hispanic White adults than among non-Hispanic Black adults or Hispanic adults (Lu et al., 2013, 2014, 2015), believed to translate into differences in flu-associated morbidity and mortality (Dee et al., 2011). The well-established and persistent racial/ethnic disparities found in prior studies, together with the importance of other socioeconomic and demographic factors, provide an appropriate empirical setting for the intersectional approach advanced in this article.
Another reason for selecting seasonal influenza vaccine uptake as an empirical example is the ongoing discussion of appropriate policies to reduce racial/ethnic disparities (Fiscella, 2005; Hutchins, Fiscella, Levine, Ompad, & McDonald, 2009). The majority of the suggested policies are broad, including, e.g. increasing vaccine availability; reducing patient 'out of pocket' costs; making the offering of vaccines in health care and other settings a routine practice; educating about the risks and benefits of vaccines; using patient reminder and recall systems; and standing orders for vaccination (Lu et al., 2014, 2015). A shared feature of such policies is that they do not target individuals based on racial or ethnic identification, and they may be beneficial across racial/ethnic groups while simultaneously reducing differences between racial/ethnic groups. For example, offering free or low-cost vaccination may increase vaccination rates in all groups, in particular among low-income individuals, but may also reduce differences because of disproportionately high poverty rates in some racial/ethnic groups. However, in addition to broad interventions, policies targeting specific racial/ethnic groups have been proposed (Chen, Fox, Cantrell, Stockdale, & Kagawa-Singer, 2007; Phillips, Kumar, Patel, & Arya, 2014; Wooten, Wortley, Singleton, & Euler, 2012). For example, it has been suggested that Black and Hispanic adults should be targeted with a text message campaign prompting them to talk to their doctors about vaccination to help address knowledge gaps and dispel misconceptions (Phillips et al., 2014). Conceptually, racially or ethnically tailored interventions involve the translation of group-level rates to individual-level risk. Yet this translation is questionable at best because of potentially important variability in outcome within groups and overlap between groups (Kaplan, 2014; Merlo, 2014).
Leaving concerns about stigmatization aside (Guttman & Salmon, 2004), suggestions to implement racially or ethnically tailored policies raise questions about the value of racial/ethnic identification as a predictor of vaccination status and its predictive value compared to and above other relevant social categorizations, e.g. those based on age, income, education, or gender, or of a combination of social categorizations.
With that in mind, our purpose was threefold. First, we sought to investigate average associations between standard social categorizations and non-receipt of seasonal influenza vaccine, consistent with the conventional mapping of health disparities. Second, we sought to explore the heterogeneity of observational effects within standard racial/ethnic categories by stratifying racial/ethnic groups by gender and education, consistent with a categorical intersectionality perspective. Third, we sought to investigate how well racial/ethnic categories predicted non-receipt of the vaccine compared to and above other relevant social categorizations. Consistent with an anti-categorical intersectionality perspective, the latter analysis of DA may challenge the practical value of standard social categories for individual-level prediction. For all purposes, we used data from 56,434 adults who responded to the National 2009 H1N1 Flu Survey (NHFS) (CDC, 2012).
The National 2009 H1N1 Flu Survey
The publicly available NHFS and survey data have been described elsewhere (Ding et al., 2011). In brief, the NHFS was a one-time telephone survey conducted from October 2009 through June 2010 on behalf of the CDC to monitor and evaluate the 2009-2010 vaccination campaign (CDC, 2012). The survey collected data on the uptake of both the pandemic pH1N1 and usual trivalent seasonal influenza vaccines among adults and children. Among the contacted adults, 56,656 (45.2%) completed the interview. Individual-level and household-level socio-demographic information was requested from interviewees. For some variables (race/ethnicity, gender, age), missing values were imputed. The NHFS used a sequential hot-deck method to assign imputed values, which involves replacing missing values for a non-respondent with observed values from a respondent who is similar to the non-respondent with respect to characteristics observed for both cases (CDC, 2012). There is no information in the NHFS on the amount of imputed values but according to the CDC the amount was 'very small' (personal communication).
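The sequential hot-deck idea described above can be sketched as follows. This is an illustrative simplification (a single matching characteristic, records processed in file order, no fallback when no donor precedes the recipient), not the CDC's actual implementation; all records and variable names are hypothetical.

```python
def sequential_hot_deck(records, key, match_on):
    """Impute missing values of `key` by carrying forward the most
    recently observed value from a record in the same `match_on` class.

    Minimal sketch: real hot-deck procedures match on several
    characteristics and handle recipients with no preceding donor."""
    last_donor = {}  # match class -> most recently observed value
    imputed = []
    for rec in records:
        rec = dict(rec)  # copy so the input records are left untouched
        cls = rec[match_on]
        if rec[key] is None:
            if cls in last_donor:
                rec[key] = last_donor[cls]
        else:
            last_donor[cls] = rec[key]
        imputed.append(rec)
    return imputed

# Hypothetical survey records: gender observed for all, age missing for one.
records = [
    {"gender": "woman", "age": "35-44"},
    {"gender": "man", "age": "45-54"},
    {"gender": "woman", "age": None},  # receives the previous woman's value
]
print(sequential_hot_deck(records, "age", "gender")[2]["age"])  # prints 35-44
```

The sequential scan mirrors the "hot" in hot-deck: donors come from the same file being processed, rather than from an external ("cold") source.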
Outcome variable
The outcome variable was seasonal flu vaccination (yes or no). 'Yes' indicated that the person had received at least one seasonal influenza vaccination since August 2009. Two hundred and two (0.4%) individuals with missing values on this variable were excluded from the analysis.
NHFS explanatory variables
We used socio-demographic variables defined in the NHFS. 'Race and ethnicity' were based on self-reported information. It included the following groups: Hispanic (any race), non-Hispanic White, non-Hispanic Black, and non-Hispanic, other races or multiple races. This four-level race and ethnicity variable was derived from answers to two questions in the NHFS. Consistent with the revised Office of Management and Budget (OMB, 1997) standards for classification of race and ethnicity, the first question was 'Are you of Hispanic or Latino origin?' The interviewer was instructed to offer the following alternatives: 'Mexican/Mexicano, Mexican-American, Central American, South American, Puerto Rican, Cuban/Cuban American, or other Spanish-Caribbean' . This was followed by a second question: '[In addition to being Hispanic or Latino,] Are you White, Black or African-American, American Indian, Alaska Native, Asian, Native Hawaiian or other Pacific Islander?' The race/ethnicity variable in the NHFS, however, contains only four race/ethnicity categories; the NHFS 'other races or multiple races' category includes Asian, American Indian or Alaska Native, Native Hawaiian or Pacific Islander, and other races, as well as any non-Hispanic respondent selecting more than one race.
'Gender' was either man or woman. While from an intersectionality perspective, binary classification of gender is a limitation, an 'other' category was not permitted by the survey data. 'Age' was divided into five groups (18-34; 35-44; 45-54; 55-64; and 65 or more years). We assessed socioeconomic position using two variables: the 'poverty status' of the person's household and the participant's self-reported 'level of education' (college graduate; some college; 12 years; <12 years; missing or unknown). Household poverty categories (>=$75,000/year; above the poverty threshold but <$75,000/year; below the poverty threshold; poverty status unknown) were based on the number of adults and children reported in the household, the reported household income, and the 2008 Census poverty thresholds (CDC, 2012).
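The four-level household poverty variable can be sketched as a simple classifier. The $75,000 cutoff and the category labels come from the survey documentation quoted above; the poverty threshold itself depends on household composition and is looked up from the 2008 Census tables, which are not reproduced here (the threshold used below is a hypothetical illustration).

```python
def poverty_status(household_income, poverty_threshold):
    """Assign the NHFS-style four-level household poverty category.

    `poverty_threshold` is the 2008 Census threshold for the household's
    composition (number of adults and children); it must be looked up
    separately and is treated as given here."""
    if household_income is None or poverty_threshold is None:
        return "poverty status unknown"
    if household_income >= 75_000:
        return ">=$75,000/year"
    if household_income < poverty_threshold:
        return "below the poverty threshold"
    return "above the poverty threshold but <$75,000/year"

# Hypothetical threshold for some household composition (not a real Census figure).
THRESHOLD = 21_834
print(poverty_status(90_000, THRESHOLD))  # >=$75,000/year
print(poverty_status(18_000, THRESHOLD))  # below the poverty threshold
print(poverty_status(40_000, THRESHOLD))  # above the poverty threshold but <$75,000/year
print(poverty_status(None, THRESHOLD))    # poverty status unknown
```

Note that the ordering of the checks matters: the unknown category is resolved first, and the $75,000 cutoff takes precedence over the threshold comparison, matching the mutually exclusive categories listed in the text.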
Intersectional explanatory variables
Recent public health studies have stressed the importance of considering social categories not only distinctly but also intersectionally (i.e. simultaneously in individuals) (Lofters & O'Campo, 2012). For instance, it is possible that the average risk of non-receipt of the vaccine is similar in intersectional subgroups defined by different 'race/ethnicity' (e.g. Black women vs. White men) but diverges within the same racial/ethnic group (e.g. White men vs. White women). If this was true, it would point to important heterogeneity of effects within and between standard racial/ethnic categories. Therefore, in addition to existing variables in the NHFS, we created two novel intersectional variables by stratifying the 'race and ethnicity' categories by, first, 'gender' and, second, 'gender' and 'education'. We used education rather than household poverty as a proxy for socioeconomic position in this combined variable because fewer values were missing for the former (5% vs. 17%).
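Constructing these intersectional variables amounts to taking the Cartesian product of the category levels. A minimal sketch (category labels abbreviated; the exact level names used in the NHFS coding are given in the variable descriptions above):

```python
from itertools import product

def intersect(*levels):
    """Join one level from each social category into a single
    intersectional stratum label."""
    return " / ".join(levels)

race_ethnicity = ["Hispanic", "NH White", "NH Black", "NH other/multiple"]
gender = ["man", "woman"]
education = ["college graduate", "some college", "12 years", "<12 years",
             "missing/unknown"]

# Race/ethnicity x gender: 4 x 2 = 8 subgroups, as analyzed in the article.
strata_8 = [intersect(r, g) for r, g in product(race_ethnicity, gender)]

# Race/ethnicity x gender x education: 4 x 2 x 5 = 40 subgroups.
strata_40 = [intersect(r, g, e)
             for r, g, e in product(race_ethnicity, gender, education)]

print(len(strata_8), len(strata_40))  # prints: 8 40
print(strata_8[0])                    # prints: Hispanic / man
```

Each individual is then assigned the single stratum label matching their observed levels, and the composite variable enters the regression as one categorical covariate instead of three separate ones.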
Measures of association
We used logistic regression to examine the association between the potentially explanatory variables and non-receipt of seasonal influenza vaccine. We developed a series of analyses that modeled one variable at a time followed by more elaborate models that adjusted for age, household poverty, and level of education. In addition, we conducted separate analyses using the two intersectional variables mentioned above, created to investigate heterogeneity of effects within and between racial/ethnic groups. In all analyses, we used the provided survey weights that are calculated using a number of socioeconomic and demographic variables including age, gender, race/ethnicity, and state of residence (CDC, 2012). We expressed associations by means of ORs and 95% confidence intervals (CIs). The reference groups in the analyses were those presenting the highest vaccination rates.
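For a single binary covariate, the OR estimated by an (unweighted) logistic regression coincides with the cross-product ratio of the 2x2 table, and the Wald 95% CI follows from the standard error of the log-odds ratio. The sketch below illustrates this with hypothetical counts; it omits the survey weighting and multivariable adjustment used in the actual analyses.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.

    Equivalent to univariable logistic regression with one binary
    covariate (no survey weights, unlike the NHFS analysis)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts: 600 unvaccinated / 400 vaccinated in one group,
# 500 / 500 in the reference group.
or_, lo, hi = odds_ratio_ci(600, 400, 500, 500)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 1.50 (95% CI 1.26-1.79)
```

With several covariates the cross-product shortcut no longer applies and the coefficients must be estimated by maximum likelihood, but the interpretation of each exponentiated coefficient as an adjusted OR is unchanged.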
Analysis of discriminatory accuracy
DA measures the ability of a diagnostic tool, marker or category to correctly discriminate between people with and without an outcome of interest (Merlo, 2014; Pepe et al., 2004). In principle, diagnostic tools, markers, or categories, often included as covariates in statistical models, need to have high DA to be deemed valid for diagnostic or prognostic assessment. It is well known that measures of association alone are inappropriate for gauging the DA of statistical models (Pepe et al., 2004). In fact, what we normally consider a strong association between an exposure and an outcome (e.g. an OR of 10) may correspond to a rather low capacity of the exposure to discriminate cases from non-cases. For linear regression models, DA corresponds to the concept of variance explained (r²) used to evaluate the general strength of findings in research fields including epidemiology (Merlo & Wagner, 2013). For logistic regression models, DA is assessed by means of receiver operating characteristic (ROC) curve analysis. The ROC curves were created by plotting sensitivity, or the true positive fraction (TPF), vs. 1-specificity, or the false positive fraction (FPF), at various thresholds of predicted risk obtained from the logistic regression models. The TPF expresses the probability that, at a specific threshold of predicted risk, an unvaccinated individual is correctly classified as unvaccinated. The FPF expresses the probability that, using the same threshold, a vaccinated individual is misclassified as unvaccinated. We calculated the area under the ROC curve (AU-ROC), or C statistic, as a measure of DA. The AU-ROC assumes a value from 0.5 to 1, where 1 is perfect discrimination and 0.5 is as informative as flipping an unbiased coin (i.e. the covariates have no predictive power) (Pepe et al., 2004).
Here, the AU-ROC can be interpreted as the probability that a randomly selected non-vaccinated individual will have a higher predicted risk of non-receipt than a randomly selected vaccinated individual. For example, an AU-ROC = 0.6 means that if we randomly select one unvaccinated and one vaccinated individual, the probability of having a higher predicted risk of non-receipt for the unvaccinated individual is 60%. If the AU-ROC = 1, every unvaccinated individual would have higher predicted risk of non-receipt than every vaccinated individual.
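This probabilistic interpretation can be computed directly: the AU-ROC equals the fraction of all (case, non-case) pairs in which the case has the higher predicted risk, with ties counted as one half. The sketch below uses hypothetical predicted risks; it is the O(n·m) pairwise version for clarity, not the rank-based formula used in statistical packages.

```python
def au_roc(case_scores, noncase_scores):
    """AU-ROC via its probabilistic interpretation: the probability that
    a randomly selected case (here, an unvaccinated individual) has a
    higher predicted risk than a randomly selected non-case.
    Tied scores contribute 1/2 per pair."""
    wins = 0.0
    for c in case_scores:
        for n in noncase_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(case_scores) * len(noncase_scores))

# Hypothetical predicted risks of non-receipt from some model.
unvaccinated = [0.80, 0.60, 0.55]
vaccinated = [0.50, 0.70, 0.40]
print(round(au_roc(unvaccinated, vaccinated), 3))  # prints 0.778 (7 of 9 pairs)
print(au_roc([0.9, 0.8], [0.2, 0.1]))              # prints 1.0  (perfect discrimination)
print(au_roc([0.5], [0.5]))                        # prints 0.5  (coin flip)
```

The first example makes the overlap argument concrete: even though the unvaccinated group has the higher mean predicted risk, two of the nine pairs are ranked the "wrong" way, which is exactly what pulls the AU-ROC below 1.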
In an initial series of simple logistic regression models, we calculated the AU-ROCs with 95% CIs of models including age alone or age plus one or more other variables. We assessed the incremental discriminatory value of a model by calculating the increase in AU-ROC. We used the AU-ROC of age as the baseline from which to assess the incremental discriminatory value of other models because age is a major determinant of influenza vaccine receipt and also a confounder of the association between race/ethnicity and influenza vaccination receipt (Lu et al., 2013, 2014, 2015). In a second series of logistic regression models, we calculated the AU-ROCs with 95% CIs of models including age and the variable 'race and ethnicity' together with 'gender' or with 'gender', 'household poverty status', and 'educational level'. This second series of modeling was done to assess the incremental discriminatory value of more elaborate models. Finally, we calculated the AU-ROCs with 95% CIs of models including age and the two intersectional variables to test whether the use of intersectional sub-groupings led to an improvement of DA compared to models that include 'race/ethnicity', 'gender' and 'education' as separate terms. We performed the statistical analyses using SPSS Version 22.0 (SPSS Inc., Chicago, Illinois, USA) and STATA (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, TX: StataCorp LP).
Mapping of disparities through measurement of between-group average risk
As shown in Table 1, the overall non-receipt of seasonal influenza vaccine in the sample was 53.3%. According to the raw data, coverage was higher for individuals identified as non-Hispanic White compared to each of the other racial/ethnic groups, as well as in men compared to women. Vaccination coverage also generally increased with increasing age, household income, and educational level.
Our analyses revealed that, compared to the non-Hispanic White group, rates of non-vaccination receipt were significantly higher among non-Hispanic Blacks (OR = 1.72, CI 95% 1.52-1.94), Hispanics (OR = 1.88, CI 95% 1.63-2.17), and people identified as being of other or multiple races (OR = 1.19, CI 95% 1.04-1.37) ( Table 2). The associations remained conclusive for non-Hispanic Blacks and Hispanics after adjustment for age, but the strength of the associations diminished for both groups and especially for Hispanics (OR = 1.35, CI 95% 1.18-1.56). Additional adjustment for educational level and household poverty status further weakened associations but they remained statistically conclusive (Table 2). Moreover, men had a higher rate of non-receipt of seasonal influenza vaccine than women, and there were conclusive differences across age groups, as well as across household poverty and educational level categories ( Table 2).
Heterogeneity of effects between and within racial and ethnic categories
Combining the race/ethnicity and gender variables into 8 different intersectional subgroups revealed that, in comparison to non-Hispanic White women, all other subgroups except women identified as being of 'other or multiple races' had higher rates of non-vaccination receipt (Table 3). However, ORs were similar for non-Hispanic White men (OR = 1.20, CI 95% 1.11-1.30) and Hispanic women (OR = 1.41, CI 95% 1.19-1.67), showing that the risk of non-vaccination receipt is heterogeneously distributed within and between racial/ethnic categories. Combining the race/ethnicity, gender, and education variables to create 40 different intersectional subgroups resulted in an even more complex picture: we observed substantial heterogeneity of effects within and between groups defined by race/ethnicity (Table 3).
Measuring the discriminatory accuracy of social categorizations
Despite these statistically significant associations, the DA of the categories studied was very low. Table 4 shows the AU-ROCs of models that included age alone or age together with one or more of the explanatory variables. The AU-ROC for age alone was 0.658 (Model 1) and it increased only slightly (+0.005) when information on race/ethnicity was included (Model 2). That is, if we randomly select one unvaccinated and one vaccinated individual from the NHFS, the probability of having a higher predicted risk of non-receipt for the unvaccinated individual in the two models is 65.8 and 66.3%, respectively. Similarly, information on gender did little to improve the DA above the model that included age (+0.006) (Model 3) or age and race (+0.004) (Model 4; compare to Model 2). Household poverty status and educational level were the most informative variables beyond age (each +0.014, not shown), but the model including age, household poverty status, and educational level still reached only an AU-ROC = 0.678 (+0.020) (Model 5). Notably, including race/ethnicity only added +0.001 (Model 6), which is consistent with a strong relationship between class and race/ethnicity. We observed the highest DA (AU-ROC = 0.681) for the model that included all explanatory variables (Model 7). However, this higher DA compared to the model including age only (+0.022) was mainly due to the socioeconomic variables. In the final analysis, we tested whether the composite intersectional variables improved the DA compared with the models where the 'race and ethnicity' , 'gender' and 'educational level' variables were kept separate; we found that use of intersectional sub-groupings did little to further improve DA (Models 4 vs. 8 and 7 vs. 9).
Discussion
Eliminating health disparities along lines of race/ethnicity is an important goal of public health policy. Our results confirm findings that adult seasonal influenza vaccination coverage is higher among non-Hispanic White adults than among non-Hispanic Black adults or Hispanic adults (Lu et al., 2013, 2014, 2015; CDC, 2011). The group defined as 'non-Hispanic, other races or multiple races' also had lower vaccination coverage than the White majority group, but the difference disappeared when we controlled for age. When faced with no evidence of a difference between broadly defined racial/ethnic groups, researchers have sometimes sought to disaggregate groups since aggregating data can conceal inequities between sub-groups. For example, a study found no differences in vaccination coverage between the non-Hispanic White group and the broad Asian/Pacific Islander group, but found differences between the non-Hispanic White group and the Filipino American sub-group (Chen et al., 2007). A recognized problem with sub-group analyses is that conclusive findings may represent spurious associations (Sun, Ioannidis, Agoritsas, Alba, & Guyatt, 2014). However, our study highlights another issue of major importance to public health practice and research: while aggregate data may conceal differences between groups (Pande & Yazbeck, 2003), aggregating data can also conceal substantial outcome variability (and thus inequality) within groups and overlap between groups (Bleich, Thorpe, Sharif-Harris, Fesahazion, & LaVeist, 2010). If this heterogeneity is considerable, references to between-group differences in mean values, without simultaneous reference to within-group variation and between-group overlap, risk overemphasizing the value of racial/ethnic categories as a means of predicting the health-related or health care-seeking behavior of individuals (Mulinari, Juárez, Wagner, & Merlo, 2015).
Reminiscent of the potential tension between categorical and anti-categorical approaches (McCall, 2005), between-group average risk should therefore be placed and understood in relation to measures of DA, to avoid the lure of misguided individual-level interventions.
Assertion of the limited value of racial/ethnic categories for individual-level prediction is not new (Kaplan, 2014; Kaplan & Bennett, 2003), and its relevance extends beyond medicine and public health, e.g. to profiling by law enforcement and security personnel (Engel, 2008). In medicine, a meta-analysis of racial differences in response to antihypertensive drugs found that despite differences between US Whites and Blacks at the aggregate level, race has little value in predicting response to antihypertensive drugs, because Whites and Blacks overlap greatly in their response to all categories of drugs (Sehgal, 2004). Similarly, the use of human racial/ethnic categories in genetics has been heavily criticized because of the large genetic diversity within groups and continuous overlap between groups despite average differences in allele frequencies (Lewontin, 1972; Holsinger & Weir, 2009). The novelty of our study is the introduction of ROC curves as a measure of DA to gauge the overlap between US racial/ethnic categories. ROC curve analysis, or similar approaches like the multilevel analysis of individual heterogeneity (Merlo, 2003, 2014; Wemrell, Mulinari, & Merlo, 2017b; Beckman et al., 2004), can indicate whether categorizations are valid as instruments for individual-level predictions. In the present case, the large overlaps in vaccination coverage are reflected in the low DA of the racial/ethnic categories used. A low DA effectively refutes the argument that although not every individual within a racial/ethnic group possesses a particular trait, racial/ethnic categories function well enough in predicting which individuals possess it. Because standard racial/ethnic categories do not function well enough for individual-level prediction, the reliance on racial/ethnic identification as a proxy in medical decision-making may lead to inappropriate treatment based on stereotyping (Kaplan, 2014).
This does not preclude the possibility of other racial/ethnic categorizations having a higher DA, or that existing categorizations are more relevant for predicting other outcomes, but to our knowledge such a case awaits empirical confirmation.
Table 4. AU-ROC analysis to evaluate the DA of different models for non-receipt of seasonal influenza vaccine. a 95% confidence intervals are ±0.005 or 0.004. The gray shading indicates which variables are included in Models 1-9. For example, Model 1 only included the variable age.
Another argument professed in favor of using racial/ethnic identification to predict vaccination behavior is based on reports of unique barriers to adult influenza vaccination in different racial/ethnic groups (Chen et al., 2007). Yet on closer inspection, most of those barriers are not unique to any particular group. For example, Chen et al. (2007) found that 32% of African-American influenza vaccination absentees cited concerns over the vaccine causing influenza or serious side effects, while 18% of Whites, 13% of Latinos, 11% of Japanese Americans, and 22% of Filipino Americans cited the same reason. Nonetheless, the authors called for 'ethnic specific strategies to address the issues of mistrust by African-American expressed in sentiments such as their concern that the influenza vaccine causes influenza' (Chen et al., 2007). While there may be issues of mistrust among African-Americans related to racism and social exclusion, mistrust is not a racially unique phenomenon (Boulware, Cooper, Ratner, LaVeist, & Powe, 2003), nor is it a racially unique reason for not being vaccinated (Chen et al., 2007). Social inequity in vaccination coverage and social patterning of trust are unlikely to be effectively addressed by racially tailored interventions. On the contrary, experiences with tailored social programs suggest they tend to undermine social trust (Kumlin & Rothstein, 2005). Interventions may be particularly misguided when targeted at altering the behavior of selected individuals, as opposed to changing macro- or meso-level factors that enable and constrain behaviors, because targeting individuals carries a higher risk of stigmatization (Guttman & Salmon, 2004). To be clear, we are not questioning the importance of race/ethnicity as an identity, or the lived experience of people in a racialized society. Rather, our concern is with the use of racial/ethnic categories for individual-level prediction and profiling.
We believe this use would be dramatically reduced if measures of DA were routinely reported alongside measures of association when gauging group-level differences.
Our study also raises questions about the value of racial/ethnic identification for predicting vaccination status compared to other conceivable ways of organizing attention to social differentiation in public health. That the CDC routinely releases vaccination coverage data by race/ethnicity is consistent with federal mandates requiring agencies under the Department of Health and Human Services to collect and report race/ethnicity-based statistics to monitor and combat inequalities (Epstein, 2008). A major argument for collecting race/ethnicity-based statistics is that race/ethnicity is a primary axis of social distinction and is therefore associated with a broad array of factors with important modifying effects on health and health care delivery (Kaplan & Bennett, 2003). However, as pointed out by Epstein (2008), the federal endorsement of a specific set of racial/ethnic categories has resulted in the proliferation of studies that treat these taxonomic categories as the standardized formal units of analysis; in the process, other ways of classifying health risks, such as behavioral practices, and other ways of classifying populations, such as by social class, receive far less attention.
The CDC does not consistently report influenza vaccination coverage by socioeconomic status indicators such as income or education. The CDC acknowledges that racial/ethnic disparities in influenza vaccination coverage have been studied more extensively compared to other potentially relevant disparity domains, such as gender and socioeconomic position (Setse et al., 2011), suggesting that disparities along these lines are considered of lesser concern. Yet information on variables relevant to other disparity domains is readily available, and our analysis shows conclusive differences between women and men irrespective of age (i.e. not fully explained by pregnancy) and across socioeconomic groups, consistent with the results reported by others (Setse et al., 2011). These differences appear to be as large as or larger than those observed between individuals identified as Black or White. In fact, the ROC curve analysis showed that above age, the most informative variables were education and household poverty status (+0.020), with race/ethnicity providing very little additional information (+0.001). It is important to note that race/ethnicity and socioeconomic position are not independent, as the disadvantage that members of some minority groups suffer will translate into, on average, lower income and educational levels. Policies that effectively address socioeconomic inequities are therefore predicted to diminish, albeit not eliminate, racial/ethnic gaps. Ignoring socioeconomic inequalities risks diverting attention away from policies that could have major impact on vaccination rates among minority group members while simultaneously benefitting the large group of deprived Whites.
Intersectionality theory posits that social differentiation takes place along multiple, non-independent, and possibly interacting axes (McCall, 2005). In the case of vaccination coverage, one consequence of this social complexity is that most individuals can be construed as belonging to one or more major social groups with lower vaccination coverage than one or more comparison groups. It also means that, through application of a categorical intersectionality perspective, groups can be split into a number of smaller taxonomic units through the combination of more than one major axis of social differentiation, as we have done in this paper. Yet the ROC curve analysis showed that the composite intersectional variables did little to improve the DA compared with the models where the 'race and ethnicity', 'gender' and 'educational level' variables were kept separate. This highlights the fact that splitting the population into increasingly smaller taxonomic units to 'hone in on … the most vulnerable subgroups' (Lofters & O'Campo, 2012, p. 105) may not ensure the best use of resources for ameliorating inequalities because of the high degree of outcome variability within, and overlap between, social categories. The problem, therefore, is how to justify focusing on one particular axis of social differentiation rather than any other. Decisions to focus on one particular set of social positions or intersection of positions will be guided by political, theoretical, and pragmatic choices and constraints. This point is underlined by the fact that routine stratification by race/ethnicity is primarily a US practice bolstered by federal mandates and standards (Epstein, 2008).
While measures of DA provide no escape from this situation, at least they underscore the important points that social structures, such as racism, generate persistent patterns of inequality but not law-like regularities (Muntaner, 2013), and that there is a great deal of variance in health and health care seeking behavior that is not readily mapped onto social position (Dunn, 2012).
In sum, our study shows that the practical value of standard racial/ethnic categories, and other relevant social categorizations, for making inferences about individuals' vaccination status is questionable despite seemingly large and conclusive differences between groups. More generally, our study highlights the tension between average, between-group, risk and measures of DA, related to and understood by means of categorical and anti-categorical intersectionality. While quantitative intersectionality research has often been of the categorical type, anti-categorical approaches have usually been furthered through qualitative research, often encompassing philosophical critique of social categorization as potentially leading to demarcation, exclusion and furthered inequality. Operationalized through measurement of DA, anti-categorical approaches can also be investigated, expressed and developed within a quantitative framework.
Limitations
Because it is based on a cross-sectional telephone survey, our study has several weaknesses. Among these, it should be stressed that the response rate was relatively low (45.2%), which increases the risk of non-response bias, and that information was self-reported and may be subject to recall error. According to the CDC (2011), the survey overestimates seasonal influenza vaccination coverage; in part this may be because of misclassification of pandemic pH1N1 vaccine for seasonal influenza vaccine. To test if the low DA of racial/ethnic categories was limited to seasonal influenza vaccination, we ran the analyses with 2009 pandemic pH1N1 vaccination status as the outcome, but conclusions were the same (available upon request). Finally, our analysis does not consider the fact that vaccination levels changed over the duration of survey administration, which could have a slight effect on vaccination coverage estimates.
There is a substantial body of literature discussing the strengths and weaknesses of different methods for assignment to racial/ethnic categories, including self-report, investigator assignment, administrative records, and genetic markers; and study results can differ substantially depending on the method used (reviewed in Kaplan, 2014). In epidemiology, the 'gold standard' for racial/ethnic assignment is self-report, consistent with the principle that people are who they say they are. Yet the complexity and fluidity of individual identity make it impossible to divide the population into non-overlapping racial/ethnic groups, or to validly and reliably allocate people to any given set of categories. Accordingly, research studies have found inconsistencies in the way that race and ethnicity are self-reported and recoded by investigators (Kaplan, 2014). However, because our purpose was to evaluate standard racial/ethnic categories used regularly by public health researchers and authorities, any limitations of race/ethnicity data, although important to acknowledge, do not undermine our finding that standard racial/ethnic categories have low DA for the studied outcome.
Partial palivizumab prophylaxis and increased risk of hospitalization due to respiratory syncytial virus in a Medicaid population: a retrospective cohort analysis
Background Infection with respiratory syncytial virus (RSV) is common among young children insured through Medicaid in the United States. Complete and timely dosing with palivizumab is associated with lower risk of RSV-related hospitalizations, but up to 60% of infants who receive palivizumab in the Medicaid population do not receive full prophylaxis. The purpose of this study was to evaluate the association of partial palivizumab prophylaxis with the risk of RSV hospitalization among high-risk Medicaid-insured infants. Methods Claims data from 12 states during 6 RSV seasons (October 1st to April 30th in the first year of life in 2003–2009) were analyzed. Inclusion criteria were birth hospital discharge before October 1st, continuous insurance eligibility from birth through April 30th, ≥1 palivizumab administration from August 1st to end of season, and high-risk status (≤34 weeks gestational age or chronic lung disease of prematurity [CLDP] or hemodynamically significant congenital heart disease [CHD]). Fully prophylaxed infants received the first palivizumab dose by November 30th with no gaps >35 days up to the first RSV-related hospitalization or end of follow-up. All other infants were categorized as partially prophylaxed. Results Of the 8,443 high-risk infants evaluated, 67% (5,615) received partial prophylaxis. Partially prophylaxed infants were more likely to have RSV-related hospitalization than fully prophylaxed infants (11.7% versus 7.9%, p< 0.001). RSV-related hospitalization rates ranged from 8.5% to 24.8% in premature, CHD, and CLDP infants with partial prophylaxis. After adjusting for potential confounders, logistic regression showed that partially prophylaxed infants had a 21% greater odds of hospitalization compared with fully prophylaxed infants (odds ratio 1.21, 95% confidence interval 1.09-1.34).
Conclusions RSV-related hospitalization rates were significantly higher in high-risk Medicaid infants with partial palivizumab prophylaxis compared with fully prophylaxed infants. These findings suggest that reduced and/or delayed dosing is less effective. Electronic supplementary material The online version of this article (doi:10.1186/1471-2431-14-261) contains supplementary material, which is available to authorized users.
Background
Annually between 75,000 and 250,000 hospitalizations in the United States (U.S.) may be attributed to infection with respiratory syncytial virus (RSV) among young children [1]. High-risk populations for severe RSV disease include premature infants ≤35 weeks gestational age (wGA), children with chronic lung disease of prematurity (CLDP), and children with hemodynamically significant congenital heart disease (CHD) [2,3]. RSV was responsible for 1.7 million office visits, 402,000 emergency room visits, 236,000 hospital outpatient visits, and between 75,000 and 125,000 hospital admissions in children under 5 years of age in the U.S. in 2000 [4]. The burden of RSV disease is well-documented in high-risk populations in Medicaid programs. In one study, the RSV hospitalization rates per 1000 children less than 1 year of age were 388 for infants with bronchopulmonary dysplasia (BPD), 92 for infants with CHD, and 57 to 70 for premature infants depending on wGA, compared to a rate of 30 for term infants without medical risk factors [5]. Others have found a higher risk of RSV hospitalization in Medicaid compared to non-Medicaid infants [6,7]. Complete and timely dosing with palivizumab is associated with lower risk of RSV-related hospitalizations, yet research shows that up to 60% of infants who received palivizumab in Medicaid populations do not receive full prophylaxis [2,3,8-10].
Per the package insert, palivizumab dosing consists of monthly intramuscular injections administered throughout the RSV season [2,11]. The mean half-life of palivizumab is approximately 20 days, and compliance with the monthly dosing schedule is important to sustaining sufficient RSV-neutralizing antibody levels throughout the therapeutic period. Efficacy of less frequent dosing has not been established [2,3,12].
The objective of the current study was to evaluate the association between partial palivizumab prophylaxis and the risk of RSV hospitalizations in a large population of high-risk infants with Medicaid coverage.
Data source
Study data was obtained from the MarketScan Medicaid Multi-State Database® (2003-2009), which contained the pooled experience of 12 million Medicaid enrollees from 12 geographically dispersed U.S. states. This database includes records of plan eligibility, inpatient and outpatient services, outpatient prescription drugs, and long-term care. Data are fully compliant with the Health Insurance Portability and Accountability Act of 1996. Because this study did not involve the collection, use, or transmittal of individually identifiable data, Institutional Review Board review was not required.
Study population selection and analysis periods
All infants born between May 1st and September 30th in 2003 through 2008 whose database records could be linked to their birth hospitalization record were selected. This selection window intentionally excludes infants born during RSV season because dosing of palivizumab during birth hospital stay cannot be identified in claims data. Potential study patients were required to have continuous medical and pharmacy benefits from the birth date (index date) through April 30th of the first year of life, to have been discharged from the birth hospitalization prior to October 1st of the birth year, and to have received at least one dose of palivizumab. The start of the RSV season varies across the US, and we included only infants whose first dose was in August or later. We focused on high-risk infants (preterm infants ≤34 wGA, infants with CLDP or with hemodynamically significant CHD regardless of wGA). While on-label palivizumab use includes 35 wGA infants, the ICD-9-CM code combines this group with 36 wGA, thus precluding their identification for our study.
The time between birth and the first palivizumab administration was defined as the pre-period. While RSV season is traditionally defined as November through March, we allowed an additional month on either side since our study covers a wide geographic range and multiple seasons. The October start allows for early seasons and the April end allows for late seasons. RSV hospitalizations were examined during RSV season (Observation Period 1), defined as October 1st to April 30th of the first year of life. Observation Period 2 was of variable length and defined as the time after the first palivizumab administration through the end of RSV season.
CLDP, hemodynamically significant CHD, and other comorbidities of interest (Additional file 1) occurring in the pre-period were reported. Comorbid conditions were identified by the presence of a non-diagnostic claim with a relevant ICD-9-CM diagnosis code. Our CLDP definition was consistent with the American Thoracic Society definition, and in addition to a relevant diagnosis, we required use of a CLDP-specific medication or oxygen before the first palivizumab claim [13]. Similarly, a relevant medication in conjunction with a CHD-specific procedure or relevant ICD-9-CM diagnosis code identified hemodynamically significant CHD infants. The inclusion of infants with hemodynamically significant CHD is consistent with labeled indications for palivizumab in the U.S.
Because healthcare utilization is a proxy for health status, infants with emergency department (ED) visits or inpatient admissions for any cause prior to the start of the RSV season or the first palivizumab dose were identified and these data were used as covariates in multivariate analyses.
Palivizumab prophylaxis
Infants in the study population were classified as receiving partial or full prophylaxis based on palivizumab doses received up to the date of the first RSV-related hospitalization or the end of follow-up, whichever occurred first. Consistent with Frogel et al., infants who obtained the first palivizumab dose by November 30th, with no more than 35 days between consecutive doses, were considered fully prophylaxed [6]. Palivizumab claims within 7 days of each other (21% of all claims) were considered billing artifacts (e.g., result of separate billing for drug versus administration) and treated as a single dose. Age at first dose, the total number of doses (mean, median, range), and the number and percentage of infants with first dose after November 30th were determined. Using all available data, we also determined the number and percentage of infants with ≥1 gap (>35 days between consecutive doses), the timing of gaps in the dosing sequence, and the number of days between doses for infants with ≥1 gap (mean, median, range). We also examined the percentage of infants with <5 doses and ≥5 doses, and computed the percentage of infants in each of these two groups who had therapy gaps.
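The classification rule above can be sketched as a small function. The function names and example dates are illustrative assumptions, and the sketch omits the study's censoring at the first RSV-related hospitalization:

```python
from datetime import date

def collapse_doses(dose_dates, window=7):
    """Merge claims within `window` days of the previous kept claim, treating
    them as billing artifacts (e.g. drug and administration billed separately)."""
    doses = []
    for d in sorted(dose_dates):
        if not doses or (d - doses[-1]).days > window:
            doses.append(d)
    return doses

def is_fully_prophylaxed(dose_dates, season_year):
    """Full prophylaxis: first (collapsed) dose by November 30th of the season
    year, and no more than 35 days between consecutive doses."""
    doses = collapse_doses(dose_dates)
    if not doses or doses[0] > date(season_year, 11, 30):
        return False
    return all((b - a).days <= 35 for a, b in zip(doses, doses[1:]))

on_time = [date(2005, 11, 10), date(2005, 12, 8), date(2006, 1, 5)]
gapped = [date(2005, 11, 10), date(2006, 1, 5)]  # 56-day gap
print(is_fully_prophylaxed(on_time, 2005), is_fully_prophylaxed(gapped, 2005))
# -> True False
```

All other infants (late first dose or any gap >35 days) fall into the partially prophylaxed group.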
Pre-period RSV-related hospitalizations
Since infants hospitalized for RSV prior to receiving their first palivizumab administration may be clinically different from and have higher costs than other infants, in the multivariate analyses, we controlled for pre-period RSV-related hospitalizations. In sensitivity analyses, we also examined multivariate results after excluding infants who had any RSV-related hospitalization that occurred prior to the first palivizumab dose and before December 1.
RSV-related hospitalization in observational periods
For Observation Period 1, we determined the incidence of RSV-related hospitalization, the mean number of hospitalizations among infants with at least one such hospitalization, and age at first admission. We also examined the severity of RSV-related hospitalization using mean length of stay (LOS), and admission to intensive care unit (ICU) or use of mechanical ventilation or supplemental oxygen. For Observation Period 2, we calculated the rate of RSV-related hospitalizations per 100 infant seasons. The number of infants with an RSV-related hospitalization following their first palivizumab dose (numerator) was divided by the total number of person-days in the observed seasons divided by 210 days (October 1-April 30), the length of an RSV season. This result was multiplied by 100 to set 100 infant seasons. Person-days was the total number of follow-up days after first dose for the group overall (censored at 210 days or end of season).
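The Observation Period 2 rate calculation described above amounts to the following. The numbers in the example call are illustrative, not the study's actual person-day totals:

```python
# One "infant season" = 210 days of follow-up (October 1 to April 30).
SEASON_DAYS = 210

def rate_per_100_infant_seasons(n_hospitalized, total_person_days):
    """Hospitalized infants divided by (person-days / 210), times 100."""
    infant_seasons = total_person_days / SEASON_DAYS
    return 100 * n_hospitalized / infant_seasons

# e.g. 145 hospitalized infants over 210,000 follow-up days
print(rate_per_100_infant_seasons(145, 210_000))  # -> 14.5
```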
Analyses
Categorical variables were presented as the number and percentage; continuous variables were summarized by the mean and standard deviation (SD). Chi-square tests were used to evaluate the statistical significance of difference for categorical variables; t-tests and ANOVA were used for normally distributed continuous variables. Nonparametric Wilcoxon and Kruskal-Wallis tests were used for continuous variables that were not normally distributed.
Correlates of full prophylaxis were assessed using logistic regression with logit link and binomial variance function. Stepwise regression (inclusion and exclusion threshold p< 0.05) was used to select variables for the final model, results of which were used to construct propensity score-based weights for the study population. These weights were then used to balance differences in the characteristics of fully and partially prophylaxed infants in the weighted models.
Unweighted and weighted estimates for the risk of inseason RSV-related hospitalization were generated using logistic regression with logit link and binomial variance function. Covariates included demographics, comorbidities, and other potentially confounding variables, in addition to prophylaxis status. For sensitivity analysis, these models were also run to assess the risk of hospitalizations with an explicit RSV diagnosis code. All analyses were completed using SAS® software, version 9.2 (SAS Institute, Inc., Cary NC, USA).
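The study says the logistic-regression results were used to construct "propensity score-based weights" but does not publish the weight formula; the sketch below shows one standard construction, inverse probability of treatment weighting (IPTW), as an assumed possibility rather than the authors' exact method:

```python
import numpy as np

def iptw_weights(propensity, fully_prophylaxed):
    """Inverse probability of treatment weights: 1/p for the fully prophylaxed
    group, 1/(1-p) otherwise; propensities are truncated to limit the
    influence of extreme scores (truncation bounds are an assumption)."""
    p = np.clip(propensity, 0.01, 0.99)
    return np.where(fully_prophylaxed == 1, 1.0 / p, 1.0 / (1.0 - p))

p = np.array([0.2, 0.5, 0.8])        # fitted probability of full prophylaxis
treated = np.array([1, 0, 1])        # 1 = fully prophylaxed
print(iptw_weights(p, treated))      # weights: 5.0, 2.0, 1.25
```

Under this construction, infants who received full prophylaxis despite a low predicted probability of it are up-weighted, which balances measured covariates across the two groups in the weighted outcome models.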
Results
A total of 11,545 infants met the study criteria (Figure 1). Of these infants, 8,443 were identified as high-risk based on gestational age ≤34 weeks or presence of CLDP or CHD regardless of wGA.
Demographic and clinical characteristics of infants based on palivizumab compliance
Two-thirds (5,615/8,443) of the sample were partially prophylaxed (Table 1). Compared with fully prophylaxed infants, these infants were more likely to be black or Hispanic (p< 0.001), reside in urban areas (p< 0.001), belong to capitated health plans (p< 0.001), and less likely to have blind/disabled eligibility for Medicaid (p= 0.043), be a multiplet (p< 0.001), or have NICU admission at birth (p= 0.002). Partially prophylaxed infants were also more likely to have CLDP (p< 0.001) and CHD (p< 0.001) and to experience ED visits or inpatient admissions for RSV or other causes prior to the first palivizumab dose (p< 0.001). Proportions of infants with additional specific comorbid conditions are presented in Additional file 2.
Palivizumab dosing patterns
Between birth and the end of their first RSV season, fully prophylaxed infants averaged 6.3 doses compared with 3.8 for partially prophylaxed infants (p< 0.001). Of the 5,615 partially prophylaxed infants, 3,408 (60.7%) had ≥1 gap in palivizumab dosing and 1,877 (33.4%) received the first palivizumab dose after November 30th. The majority of dosing gaps occurred before the 3rd dose (36.8% of gaps occurred between the first and second doses; 25.5% between the second and third doses; 20.2% between third and fourth doses; 11.8% between fourth and fifth doses; 5.8% between fifth and sixth doses). Among partially prophylaxed infants with at least one dosing gap, an average of 56.5 days elapsed between first and second doses; 51.7 days between second and third doses; 48.0 days between third and fourth doses; 46.4 days between fourth and fifth doses; and 44.0 days between fifth and sixth doses.
The proportion of infants with partial prophylaxis was higher among African Americans (68.9%; p< 0.001) and Hispanics (75.5%; p< 0.001) compared with Caucasians (61.3%). African American and Hispanic partially prophylaxed infants received significantly (p< 0.001) fewer doses compared with Caucasians (Table 2). Dosing gaps were also longer for African Americans and Hispanics compared with Caucasians, though the difference was not significant for African Americans (p= 0.063). Finally, partial prophylaxis was more common in capitated plans for African Americans and Caucasians compared with non-capitated plans. Figure 2 presents the distribution by month of the first palivizumab administration. A substantial number of partially prophylaxed infants did not receive palivizumab until long after the start of RSV season.
Figure 1. Patient selection. *2,648 infants coded as premature, unknown gestational age; 735 infants coded as live birth, gestational age unknown. †Groups are not mutually exclusive. CHD and CLDP infants are also included in premature groups <33 and 33-34 weeks gestational age. CHD: congenital heart disease; CLDP: chronic lung disease of prematurity.
RSV-related hospitalization rates
In our sample, there were a total of 1,368 RSV-related hospitalizations. More than one-third (36.8%) of RSV-related hospitalizations occurred prior to the first palivizumab dose. The percentage of RSV-related hospitalizations that occurred between doses was highest early in the dosing sequence. In unadjusted analyses (Table 3) during Observation Period 1, a significantly higher percentage of partially prophylaxed infants (11.7%) were hospitalized with an RSV-related illness during the season compared to fully prophylaxed infants (7.9%) (p< 0.001). In Observation Period 2, the RSV-related hospitalization rate per 100 infant seasons was 14.5 for partially prophylaxed infants compared with 10.0 for fully prophylaxed infants (p< 0.001). The frequency of RSV-related hospitalizations was higher for partially prophylaxed infants throughout the RSV season (Figure 3). Figure 4 presents the unadjusted relative risk increase (RRI) for RSV-related hospitalizations among partially prophylaxed infants. The RRI was 48% (p< 0.001) for the partial prophylaxis cohort overall, and varied from 42% (p= 0.012) to 64% (p< 0.001) depending on gestational age or type of comorbidity. Among infants with RSV-related hospitalizations, partially prophylaxed infants had longer hospital stays and were more likely to be admitted to the ICU or to receive mechanical ventilation or supplemental oxygen compared with fully prophylaxed infants (p< 0.001 for both) (Table 3).
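The 48% relative risk increase quoted above follows directly from the reported unadjusted in-season hospitalization proportions:

```python
# RRI from the reported proportions: 11.7% (partial) vs 7.9% (full prophylaxis)
partial_rate, full_rate = 0.117, 0.079
rri = partial_rate / full_rate - 1
print(f"RRI = {rri:.0%}")  # -> RRI = 48%
```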
Multivariate analyses
In weighted logistic regression, partially prophylaxed infants had significantly higher odds of in-season RSV-related hospitalization compared to fully prophylaxed infants [odds ratio (OR) 1.21; 95% confidence interval (CI) 1.09-1.34] (Table 4). Results were very similar [OR 1.28; 95% CI 1.09-1.51] when the outcome was restricted to hospitalizations with an explicit RSV diagnosis code. Compared with Caucasian race, "other" race was associated with an increased risk of hospitalization. Gender (male), residence (rural), type of health coverage (capitated) and older age (>3 months versus ≤3 months) at start of RSV season were each associated with an increased risk. Odds of RSV-related hospitalization during the RSV season were also higher for
Discussion
This is the largest study to date examining the association between partial prophylaxis and RSV-related hospitalizations among Medicaid infants who received palivizumab. Two-thirds (66.5%) of the high-risk infants in our study received partial prophylaxis with palivizumab. Approximately one in every five infants failed to initiate palivizumab dosing until after November 30th. The percentage of infants with partial prophylaxis in our study is consistent with noncompliance rates previously reported for the Medicaid population [2,3,10]. Hampp et al. analyzed palivizumab utilization and compliance in children less than 2 years of age covered under the fee-for-service Florida Medicaid program. During the 2004-2005 RSV season, 67.9% of palivizumab recipients were compliant, defined by the presence of at least 4 claims for the drug from October through February [10]. Compliance decreased to 41.3% with the requirement for a minimum of 5 doses. Furthermore, approximately 33% of ≤32 wGA infants in that study received no in-season palivizumab doses, which suggests that many high-risk infants are unprotected while virus circulation is highest. Diehl et al. documented a 29.8% compliance rate during the 2006-2007 RSV season based on number and timing of doses in a population of infants (59.2% Medicaid) drawn from a Pennsylvania managed care plan [3]. A review by Frogel et al. of palivizumab compliance documented variability in measurement and rates across published studies [2]. They found that compliance with palivizumab dosing was higher in home health programs compared to office settings, which translated to improvements in health outcomes among infants in the former group.
Compliance with prophylaxis was previously shown to be higher in children from nonsmoking families, those whose parents believed palivizumab would have a positive effect, and those whose parents did not report difficulty with transportation [2]. The design of our study did not allow for the evaluation of those specific factors, but we did find a strong association between partial prophylaxis and capitated plan membership. According to the Centers for Medicare & Medicaid Services, in 2010, 54,612,393 individuals were enrolled in managed Medicaid plans. This represents 71.5% of total enrollment, a 25.8% relative increase over the 2001 share (56.8%) [14]. This trend toward managed care underscores the importance of understanding why palivizumab dosing in high-risk Medicaid infants is a particular challenge in capitated health plans.
Our study also found potential disparities in palivizumab use between racial/ethnic minorities and Caucasians, including number and timing of doses, and within each ethnic group, infants in capitated plans were more likely to be partially prophylaxed. Low-socioeconomic status, limited parental knowledge of RSV and the efficacy of RSV prophylaxis, and the quality of communication between healthcare professionals and parents of high-risk infants may potentially contribute to the observed palivizumab utilization patterns and also may potentially influence use of inpatient care.
The current study provides further insight into the risk of RSV hospitalization in high-risk infants in Medicaid. Although previously published data generally show that compliance is associated with decreased hospitalization rates, study designs and the estimated association vary [2]. Analysis of data from the Palivizumab Outcomes Registry by Frogel et al. showed a significantly lower risk for RSV hospitalization (OR 0.702, 95% CI 0.543-0.913) in patients who were compliant, defined by number of doses and dosing intervals, but found no association using a compliance definition based only on number of doses [6]. In that study, a higher risk for RSV hospitalization was also found for Medicaid versus non-Medicaid patients. By contrast, Diehl et al. found no significant differences between compliant and noncompliant infants in RSV hospitalization, but this finding may have been impacted by the small sample size (N=245) [3]. Using time-dependent exposure definitions to accommodate intermittent palivizumab dosing, Winterstein et al. in a study of Florida Medicaid children found decreases in the risk of RSV hospitalization subsequent to both the initial palivizumab dose and succeeding doses [15]. However, the reduction following the first dose [HR 0.89, 95% CI 0.71-1.12] was not statistically significant. The risk reduction associated with subsequent doses (HR 0.56, 95% CI 0.46-0.69), however, was similar to the lower range of results reported in palivizumab trials [8,9].
We found a higher rate of RSV-related hospitalization (7.9% among fully prophylaxed infants) compared to the 4.8% rate in the IMpact-RSV trial [8]. There are a number of possible explanations for this difference, including increased awareness of the risks of RSV and increased monitoring in the trial population. The background RSV incidence is likely to be greater in the Medicaid population than in the clinical trial populations (Sangare et al. [7]). In addition, the high prevalence of comorbidities (56% of infants overall) in our study population and the use of diagnosis codes beyond simply RSV may have also contributed to the higher hospitalization rate. Our decision to use the expanded code list was driven by an acknowledgement that RSV-specific ICD-9-CM codes are underutilized in practice. Our RSV-related rates are within range of those reported by Boyce et al. who calculated RSV hospitalization rates based on a definition inclusive of RSV infection and bronchiolitis and found rates of 57-388 per 1,000 Tennessee Medicaid children less than 1 year of age [5].
We observed that a substantial proportion of RSVrelated hospitalizations occurred prior to the first palivizumab dose. This finding suggests missed opportunities for prevention. In a subgroup analysis, omitting these infants with RSV-related hospitalizations prior to first dose did not alter the finding of increased risk of hospitalization among infants with partial prophylaxis.
Our study also found differences in the severity of RSV-related hospitalization for fully and partially prophylaxed infants. Partially prophylaxed infants had longer RSV-related hospital stays and a higher proportion of these infants were admitted to an ICU or received mechanical ventilation or supplemental oxygen compared with fully prophylaxed infants. Our findings are aligned with the secondary clinical efficacy endpoints from the IMpact RSV Clinical Study, which also found significant differences in length of RSV hospitalization stay and ICU admissions among the palivizumab group compared with placebo group [8]. In addition, a recent study found an average of 1.4 fewer days in the hospital among RSV-prophylaxed infants compared to infants without RSV prophylaxis [16]. Future studies should focus on the economic benefits associated with reducing both the incidence and severity of RSV disease in the hospital setting with complete palivizumab dosing.
There are several limitations to these analyses. Administrative claims are collected for payment purposes and not clinical research and therefore are subject to coding errors, which may impact identification of clinical outcomes. In addition, claims do not capture data on socioeconomic factors, distance from medical facilities and other factors that may shape utilization patterns. Owing to the nonrandomized nature of the study, demographic differences between groups such as prior hospitalization use or proportion with CLDP and CHD could impact the results. However, after multivariate adjustment and subgroup specific analyses, the treatment effect remained, suggesting that these differences may not have a major effect. Palivizumab doses administered to an infant during a hospitalization are not captured separately on the hospital claim. Therefore, it was necessary to exclude subjects born during the RSV season since there was a high likelihood that not all palivizumab doses received by these infants would appear in the data. Although this approach ensures greater accuracy for our palivizumab compliance measures, it is possible the RSV-related hospitalization risk may be underestimated. Our study may over- or underestimate severe RSV disease because we did not have RSV test results and had to rely on the diagnosis codes for RSV as well as unspecified bronchiolitis and pneumonia. We believe this is a reasonable approach given known low rates of RSV testing which stems in part from the American Academy of Pediatrics recommendations that routine testing is not required once the RSV season has started because it rarely alters clinical management [17]. Given that the MarketScan® Medicaid
Comparisons of the core and mantle compositions of earth analogs from different terrestrial planet formation scenarios
The chemical compositions of Earth's core and mantle provide insight into the processes that led to their formation. N-body simulations, on the other hand, generally do not contain chemical information, and seek to only reproduce the masses and orbits of the terrestrial planets. These simulations can be grouped into four potentially viable scenarios of Solar System formation (Classical, Annulus, Grand Tack, and Early Instability) for which we compile a total of 433 N-body simulations. We relate the outputs of these simulations to the chemistry of Earth's core and mantle using a melt-scaling law combined with a multi-stage model of core formation. We find the compositions of Earth analogs to be largely governed by the fraction of equilibrating embryo cores and the initial embryo masses in N-body simulations. Simulation type may be important when considering magma ocean lifetimes, where Grand Tack simulations have the largest amounts of material accreted after the last giant impact. However, we cannot rule out any accretion scenarios or initial embryo masses due to the sensitivity of Earth's mantle composition to different parameters and the stochastic nature of N-body simulations. Comparing the last embryo impacts experienced by Earth analogs to specific Moon-forming scenarios, we find the characteristics of the Moon-forming impact are dependent on the initial conditions in N-body simulations where larger initial embryo masses promote larger and slower Moon-forming impactors. Mars-sized initial embryos are most consistent with the canonical hit-and-run scenario onto a solid mantle. Our results suggest that constraining the fraction of equilibrating impactor core and the initial embryo masses in N-body simulations could be significant for understanding both Earth's accretion history and characteristics of the Moon-forming impact.
We find the compositions of Earth analogs to be largely governed by the fraction of equilibrating embryo cores (kcore_emb) and the initial embryo masses in N-body simulations, rather than the simulation type, where higher values of kcore_emb and larger initial embryo masses correspond to higher concentrations of Ni, Co, Mo, and W in Earth analog mantles and higher concentrations of Si and O in Earth analog cores. As a result, larger initial embryo masses require smaller values of kcore_emb to match Earth's mantle composition. On the other hand, compositions of Earth analog cores are sensitive to the temperatures of equilibration and fO2 of accreting material.
We use our compiled simulations to explore the relationship between initial embryo masses and the melting history of Earth analogs, where the complex interplay between the timing between impacts, magma ocean lifetimes, and volatile delivery could affect the compositions of Earth analogs formed from different simulation types.
Introduction
The terrestrial planets formed in several stages on timescales on the order of 10-100 Myr (Righter and O'Brien, 2011). First, nebular gas condensed into dust, which accreted into planetesimals, likely due to gravitational collapse triggered by the streaming instability (as reviewed in Johansen et al., 2014). This was followed by runaway growth, where the largest protoplanetary bodies grew the fastest, either due to planetesimal accretion by pairwise growth (Kokubo and Ida, 1996) or pebble accretion (Lambrechts and Johansen, 2012; Levison et al., 2015). When the local surface density of planetesimals was consumed and/or the pebble flux had decreased, runaway growth transformed into oligarchic growth, creating a bimodal distribution of massive embryos and smaller planetesimals (Kokubo and Ida, 2000; Lambrechts et al., 2019).
Finally, massive collisions between embryos (i.e., giant impacts) resulted in the present-day terrestrial planets (Chambers and Wetherill, 1998). N-body simulations of late-stage terrestrial planet formation seek to reproduce the orbits and masses of the terrestrial planets by calculating the gravitational interactions between a prescribed set of bodies that are representative of the final giant impact stage of terrestrial planet formation (Chambers, 2001). Classical simulations can be split into two types: Eccentric Jupiter and Saturn (EJS) simulations begin with Jupiter and Saturn in their modern, eccentric orbits, while Circular Jupiter and Saturn (CJS) simulations begin with Jupiter and Saturn closer together than they are now, but in circular orbits (Fischer and Ciesla, 2014; O'Brien et al., 2006; Raymond et al., 2009; Woo et al., 2022). CJS simulations generally produce Mars analogs that are too large, whereas EJS simulations do slightly better in this regard but are self-inconsistent (see Raymond et al., 2009 for details). A number of other models have been proposed to address the small-Mars problem. Here, we focus on three well-studied scenarios which deplete mass in the Mars-forming region: 1) the "annulus" (or "low-mass asteroid belt") scenario, in which bodies are initially distributed in a narrow ring (Hansen, 2009; Kaib and Cowan, 2015; Raymond and Izidoro, 2017), 2) the Grand Tack scenario, in which Jupiter and Saturn migrate inward and then outward to their present locations (Walsh et al., 2011; Jacobson and Morbidelli, 2014; O'Brien et al., 2014), and 3) the Early Instability scenario, in which the gas giants undergo an orbital instability before Mars can grow (Clement et al., 2018, 2021; Liu et al., 2022). Earth analogs formed in each type of simulation are influenced by the dynamic evolution of the initial distribution of embryos and planetesimals.
Different types of N-body simulations, which differ in terms of the growth rates and provenance of accreting bodies, will therefore form Earth analogs with unique accretion histories that accrete materials from distinct locations. In addition, different initial conditions prescribed between types of simulations, and even between simulations from the same suite, may result in different sizes and energies of accretionary impacts. Even though the probability of reproducing each terrestrial planet differs between simulation types, all simulation types have been shown to be plausibly capable of explaining the observed inner Solar System configuration (Raymond and Morbidelli, 2022).
Current state-of-the-art models of Earth's accretion and core formation integrate growth histories from N-body simulations with self-consistent evolution of oxygen fugacity (fO2) to determine the partitioning of elements between metal and silicate following each impact (Fischer et al., 2017;Rubie et al., 2015). Experimental metal-silicate partitioning data are parameterized to capture the effects of pressure, temperature, composition, and fO2 on the concentrations of each element in the resulting core and mantle (e.g., Fischer et al., 2015;Siebert et al., 2012). The aim of these models has been to reproduce Earth's observed mantle composition (McDonough and Sun, 1995;Palme and O'Neill, 2013) in terms of a set of major and minor elements, such as Mg, Ca, Al, Fe, Ni, Co, Nb, and Ta (Fischer et al., 2017;Rubie et al., 2011). However, previous studies have only investigated core formation under a single suite of N-body simulations, using Classical or Grand Tack simulations from a single study, which prevents comparison of different simulation types and initial conditions (Fischer et al., 2017;Rubie et al., 2015). Moreover, these studies used simplified metal-silicate equilibration parameters, where equilibration pressures (Pequil) were assumed to increase linearly such that they represented equilibration at a constant fraction of the growing core-mantle boundary pressure. While embryo masses generally increase as Earth accretes due to ongoing oligarchic growth elsewhere in the Solar System, the stochastic nature of N-body simulations means that the size and timing of impacts vary greatly between simulations. Comparing simulations from multiple studies can therefore constrain the range of possible accretional events Earth could have experienced during its growth. 
Other equilibration parameters, such as the fractions of terrestrial mantle and impactor core (kcore) that participate in metal-silicate equilibration, were set as a constant multiple of the impactor's mass and at a constant fraction, respectively. Simplifying these parameters ignores the amount of energy delivered to the proto-Earth by individual accretionary impacts and the volume of melting produced by each event (Abramov et al., 2012;Nakajima et al., 2021). In addition, it has been suggested that kcore varies with impactor size, where larger impactor cores merge more efficiently with Earth's core and therefore experience lower degrees of equilibration (Marchi et al., 2018;Rubie et al., 2015).
de Vries et al. (2016) used the melt-scaling law of Abramov et al. (2012) to relate the energies of individual impacts to Pequil, assuming Pequil occurred at the base of impact-induced melt pools, over the course of accretion in Grand Tack simulations. They found Pequil to depend on the initial conditions of N-body simulations and the lifetime of magma oceans. However, the relationship between Pequil and the resulting mantle compositions of Earth analogs remains unclear due to the absence of a core formation model. Furthermore, they only used Grand Tack simulations, yet it is possible that accretion histories of Earth analogs differ between different scenarios of Solar System formation. Here we build upon the study of de Vries et al. (2016) by compiling simulations from four different scenarios of Solar System formation and using a melt-scaling law based on hydrodynamic simulations to determine Pequil and the fraction of the proto-Earth's mantle melted by each impact (fmelt). Our results are integrated with state-of-the-art models of core formation (Fischer et al., 2017; Rubie et al., 2015) to explore the effects of varying simulation type and initial conditions within these different simulation types on the chemistry of Earth analogs formed. The compiled simulations and the results from our model are then used to compare the relationship between different N-body simulations, the initial conditions used within them, and volatile delivery and melting histories of Earth analogs.
N-body simulations
N-body simulations from different Solar System formation scenarios were compiled to use as inputs in our core formation model. A total of 48 Classical (CEJS) (O'Brien et al., 2006; Raymond et al., 2009), 110 Annulus (ANN) (Kaib and Cowan, 2015; Raymond and Izidoro, 2017), 142 Grand Tack (GT) (Jacobson and Morbidelli, 2014; O'Brien et al., 2014; Walsh et al., 2011), and 133 Early Instability (EI) (Clement et al., 2021) simulations were assembled to give a total of 433 simulations. Abbreviations of individual simulation names are given in Table 1 and will be used hereafter. Rows that contain simulation names in parentheses depict all simulations from a given study (e.g., CEJS-O06). Classical simulations from both studies were also split into CJS and EJS simulations to compare the two dynamical scenarios. GT simulations were split by both initial embryo to planetesimal mass ratio (e.g., GT 1:1) and initial embryo mass (e.g., GT-0.025) to compare different initial conditions. Earth analogs were defined as the body at the end of each simulation closest to 1 Earth mass and 1 AU within the ranges 0.8-1.25 Earth masses and 0.8-1.2 AU, for a total of 109 Earth analogs. Such narrow ranges of masses and orbital radii were chosen to minimize the effects of planet mass and orbital radii of accreting material on Earth analog compositions (Fischer et al., 2017; Kaib and Cowan, 2015). We do not use the mass and orbit of Mars analogs as constraints because Mars' accretion history is unrelated to its final orbital parameters (Brennan et al., 2022). Table 1 also lists the initial conditions and number of Earth analogs produced from each suite of simulations. Initial embryo masses range from 0.005-0.48 Earth masses, with both the distributions and ranges of initial embryo masses varying between studies (Table 1).
Most simulations used a bimodal distribution of larger embryos and smaller planetesimals, except for the ANN simulations from Kaib and Cowan (2015), in which simulations were run with equal-massed embryos only.
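The Earth analog selection described above can be sketched as a simple filter. This is an illustrative sketch only: the tuple layout and the tie-breaking closeness metric are our assumptions, since the text specifies only the mass and distance windows.

```python
def is_earth_analog(mass_me, a_au):
    """True if a final body qualifies as an Earth analog:
    0.8-1.25 Earth masses and 0.8-1.2 AU (the windows quoted in the text)."""
    return 0.8 <= mass_me <= 1.25 and 0.8 <= a_au <= 1.2

def pick_earth_analog(bodies):
    """From a simulation's final bodies [(mass_me, a_au), ...], pick the
    qualifying body closest to 1 Earth mass and 1 AU (None if none qualify)."""
    candidates = [b for b in bodies if is_earth_analog(*b)]
    if not candidates:
        return None
    # Euclidean distance from (1 M_E, 1 AU) as a closeness metric; this is an
    # assumption, as the paper does not state how mass and orbit are weighted.
    return min(candidates, key=lambda b: (b[0] - 1.0)**2 + (b[1] - 1.0)**2)
```

Applied to each of the 433 simulations, a filter of this kind yields the 109 Earth analogs discussed in the text.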
For each simulation that formed an Earth analog, the impact histories of bodies that eventually accreted to form the Earth analog were tracked. All bodies were assigned an initial composition based on relative non-volatile elemental abundances in CI chondrite. The abundances of refractory elements were enriched, and each body was equilibrated at a given fO2 depending on its initial semi-major axis. The initial fO2 distribution followed a simple step-function with more reduced materials within the fO2 step, as in previous studies (Fischer et al., 2017; Rubie et al., 2015). The fO2 step was set to 2 AU with the outer fO2 set to 1.5 log units below the iron-wüstite buffer (∆IW-1.5), consistent with Mars' accretion (Brennan et al., 2022), while the inner fO2 was varied. Planetesimals were equilibrated at 0.1 GPa and 2000 K. To address differing initial embryo masses between simulation suites, we used a simple shell model to determine the initial pressure of equilibration in embryos. Here, we assumed a core mass fraction of 0.3, corresponding to equilibration of a CI chondrite composition without volatile elements at an fO2 of ~∆IW-3. Even though the core mass fraction is dependent on the bulk composition and fO2 of a differentiating body, we tested a case where embryos from beyond the fO2 cutoff were equilibrated at a higher Pequil, corresponding to a smaller core mass fraction, resulting in differences in mantle composition, on average, of <1%. The total mass, density, and pressure of mantle layers were calculated from outside in until mass in the silicate reached 70% of the embryo's mass, resulting in a core-mantle boundary pressure. Pequil was then set to be half of the core-mantle boundary pressure, which is consistent with the interpreted Pequil on Mars (Brennan et al., 2020; Rai and van Westrenen, 2013; Righter and Chabot, 2011), since Mars may be a stranded embryo itself (Dauphas and Pourmand, 2011).
The resulting parameterization for Pequil of embryos was Pequil = 112.06*Memb + 0.37 GPa, where Memb was the mass of the embryo in Earth masses.
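The two initial-condition assignments above can be sketched as follows: the step-function fO2 (step at 2 AU, outer value ∆IW-1.5) and the linear embryo equilibration-pressure fit. The inner fO2 is a free parameter in the study; the -2.2 default below is a placeholder, not a value taken from the paper.

```python
def initial_fo2(a_au, inner_dIW=-2.2, outer_dIW=-1.5, step_au=2.0):
    """fO2 (log units relative to the IW buffer) assigned from a body's
    initial semi-major axis via the step function described in the text.
    inner_dIW is a placeholder; the paper varies this value."""
    return inner_dIW if a_au < step_au else outer_dIW

def embryo_pequil(m_emb_me):
    """Embryo equilibration pressure in GPa from its mass in Earth masses,
    i.e., half the shell model's core-mantle boundary pressure, using the
    linear fit quoted in the text: Pequil = 112.06*Memb + 0.37 GPa."""
    return 112.06 * m_emb_me + 0.37
```

For example, a 0.1 Earth-mass embryo equilibrates at about 11.6 GPa under this fit, while planetesimals are instead equilibrated at a fixed 0.1 GPa and 2000 K as stated above.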
Metal-silicate equilibration
For impactors >0.01 Earth masses (embryos), the impactor and target masses, impact velocity, and impact angle of each collision were taken from the outputs of each simulation and were used as inputs in the melt-scaling law of Nakajima et al. (2021) (Fig. 1). The melt-scaling law parameterizes outputs from hydrodynamic simulations to determine the volume and geometry of melting in the target's mantle, along with the pressure at the base of the melt pool.
Impact angles were rounded to the nearest angle for which results are available (0°, 30°, 60°, or 90°) (e.g., for a 44° impact angle, results for 30° were used). We note impact angles in all simulations show a uniform distribution centered around 45 degrees (Supplementary Fig. 1) and that rounding in this way could result in additional uncertainties in the determined melt fraction.
The surface entropy was set to 1100 J/K/kg, corresponding to surface temperatures of ~300 K.
Assuming metal-silicate equilibration occurred at the base of the melt pool, Pequil was set to the pressure at the base of the melt pool and fmelt was set to the fraction of the mantle that was melted. We introduce a parameter, kmantle_melt, to describe the fraction of the melted mantle that equilibrates with the impactor's core. A schematic representation of embryo equilibration is shown in Fig. 1. For the sake of simplicity, we assumed instantaneous crystallization of magma oceans, but discuss the possible effects of longer magma ocean lifetimes in Section 4.1.
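The input preparation described above, rounding impact angles to the nearest angle with available melt-scaling results and taking a fraction kmantle_melt of the melted mantle as the equilibrating silicate, can be sketched as follows (function names are our own; the melt-scaling law itself is not reimplemented here):

```python
def round_impact_angle(angle_deg):
    """Round an impact angle to the nearest angle with available
    melt-scaling results (0, 30, 60, or 90 degrees)."""
    available = (0, 30, 60, 90)
    return min(available, key=lambda a: abs(a - angle_deg))

def equilibrating_silicate_mass(m_mantle, fmelt, kmantle_melt):
    """Mass of the target mantle that equilibrates with the impactor core:
    the fraction kmantle_melt of the melted mantle fraction fmelt."""
    return fmelt * kmantle_melt * m_mantle
```

As noted in the text, a 44° impact is treated with the 30° results, and this rounding contributes additional uncertainty to the determined melt fraction.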
Following each impact, the mantle was homogenized such that portions of the mantle that did not melt were assumed to fully mix with the melted portions. Embryo cores were also assumed to sink and merge with the proto-Earth's existing core before the next embryo impact and remained isolated from further equilibration. Even though the actual physics of metal-silicate mixing and equilibration during Earth's accretion are more complex (Deguen et al., 2011, 2014; Landeau et al., 2021), these simplifying assumptions allow us to relate impact energies to the conditions of core formation.
Smaller bodies <0.01 Earth masses (planetesimals) were below the threshold of masses compatible with the melt-scaling law of Nakajima et al. (2021) and were small enough that they may have been stranded in the proto-Earth's mantle following an impact (de Vries et al., 2016).
The time it takes for a planetesimal's core to sink through the solid mantle and merge with Earth's core exceeds the time between embryo impacts, such that planetesimals were assumed to equilibrate with the subsequent embryo impact (Fleck et al., 2018). Planetesimals that accreted after the last embryo impact were equilibrated at an assigned low pressure (Pequil_ptsml) and target-to-impactor ratio of equilibrating silicate (Mmelt_ptsml) before the planetesimal's core merged with Earth's core and the equilibrated silicate mixed with Earth's mantle. Some embryos in CEJS simulations from Raymond et al. (2009) and ANN simulations from Kaib and Cowan (2015) were small enough that they were considered planetesimals during core formation (Table 1).
Embryos in all other simulations were large enough that they were also considered embryos by the melt-scaling law. All embryos that eventually formed Earth analogs from CEJS simulations from Raymond et al. (2009) were >0.01 Earth masses, making this distinction only relevant for embryos from ANN simulations from Kaib and Cowan (2015). However, this discrepancy is only significant for small impactors that accrete after the last large impact, which our model is not very sensitive to (Table 2). We note that "planetesimals" in the context of core formation are all bodies <0.01 Earth masses, whereas the term only applies to bodies <0.0025 Earth masses in the context of N-body simulations.
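The planetesimal bookkeeping described above, holding each sub-0.01 Earth-mass impactor until it can be equilibrated together with the next embryo impact, might be organized as in the following sketch (names and data layout are illustrative assumptions):

```python
def partition_impacts(impacts, cutoff_me=0.01):
    """Group a time-ordered list of impacts (tuples whose first entry is the
    impactor mass in Earth masses) so that each embryo impact carries the
    planetesimals accreted since the previous embryo impact."""
    grouped, pending = [], []
    for imp in impacts:
        if imp[0] > cutoff_me:          # embryo: equilibrate it plus queue
            grouped.append((imp, pending))
            pending = []
        else:                           # planetesimal: queue for next embryo
            pending.append(imp)
    # 'pending' holds planetesimals accreted after the last embryo impact,
    # which the text equilibrates separately at Pequil_ptsml.
    return grouped, pending
```

Each `(embryo, queued_planetesimals)` pair then corresponds to one equilibration event, with the leftover queue handled at the low-pressure planetesimal conditions.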
In contrast to Pequil and fmelt, kcore cannot be easily constrained from melt-scaling laws, especially for large embryo impacts. Instead, we assume constant reference values for kcore, but used different values for embryos (kcore_emb = 0.3) and planetesimals (kcore_ptsml = 0.7) to reflect the dependence of kcore on the size of the impactor (Kendall and Melosh, 2016;Marchi et al., 2018). We note that kcore_ptsml is not set to 1 because it is possible that planetesimal cores remain in the proto-Earth's mantle and equilibrate during the next large impact. In this scenario, it is difficult to constrain the extent to which the original small impactor's core equilibrates. A compilation of equilibration parameters used, the ranges we tested, and the sensitivity of core and mantle compositions of Earth analogs are given in Table 2.
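The size-dependent equilibration fractions above reduce to a simple rule keyed on the embryo/planetesimal mass cutoff; the sketch below uses the paper's reference values (kcore_emb = 0.3, kcore_ptsml = 0.7), which the text notes were varied in sensitivity testing.

```python
EMBRYO_MASS_CUTOFF_ME = 0.01  # Earth masses; embryo/planetesimal boundary

def kcore_for_impactor(mass_me, kcore_emb=0.3, kcore_ptsml=0.7):
    """Fraction of an impactor's core that equilibrates with silicate.
    Larger impactor cores merge more efficiently with Earth's core and so
    equilibrate less; the reference values here are from the text."""
    return kcore_emb if mass_me > EMBRYO_MASS_CUTOFF_ME else kcore_ptsml
```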
We followed the methodology detailed in Supplementary S2 of Rubie et al. (2011), as revised by Fischer et al. (2017) and Brennan et al. (2020), to evolve fO2 self-consistently and calculate the composition of Earth's core and mantle following each equilibration event. The entire impactor mantle, along with a portion of the melted fraction of Earth's mantle (fmelt*kmantle_melt), participated in equilibration, where the fraction of the impactor's core that equilibrated was defined by kcore. In addition, material from planetesimals that impacted since the last embryo impact was added into the equilibrating mass. The core formation code of Brennan et al. (2020) was modified to incorporate impactor information from N-body simulations and was benchmarked against the results from Fischer et al. (2017). Parametrizations for Pequil and fmelt for each impactor were then incorporated according to the processes described above in place of the simple assumptions used in Fischer et al. (2017). The metal-silicate partitioning of elements was described by fits to experimental data as log KD(i) = ai + bi/T + ci*P/T for each element i, where T is the temperature in Kelvin, P is pressure in GPa, and ai, bi, and ci are fitting parameters. These parameters are detailed in Supplementary Table S1 and include the significant changes in partitioning at ~5 GPa for Si, O, Ni, and Co (Fischer et al., 2017 and references therein). We note that there may be more up-to-date partitioning parameterizations for Ni which have slightly different fitting coefficients (Huang and Badro, 2018). However, our results would not be affected significantly by incorporating these values. We also included fitting parameters for Nb, Ta, Mo, and W from Huang et al. (2020) and Huang et al. (2021), while Mg, Al, and Ca were assumed to be perfectly lithophile.
KD is an exchange coefficient, defined in terms of partition coefficients (D) as KD(i) = Di/(DFe)^(n/2), or equivalently in terms of the mole fractions (X) of elements and their oxides in the metal (met) and silicate (sil) as KD(i) = [Xi^met * (XFeO^sil)^(n/2)] / [XiO(n/2)^sil * (XFe^met)^(n/2)], where n is the valence of element i. Pressures of equilibration were determined for each impactor as described above, and the temperature of equilibration was described by a polynomial fit to the liquidus of Andrault et al. (2011). First, Fe, Si, Ni, and O were partitioned, where the concentrations of Fe, Si, Ni, and O in the core-forming liquid were described as a function of the moles of FeO in the mantle. The moles of FeO in the mantle were then iterated until the moles of FeO in the mantle and the moles of Fe and O in the core-forming liquid were self-consistent.
This was followed by the partitioning of the trace elements (Co, Nb, Ta, Mo, and W), where partitioning of the trace elements was iterated to ensure self-consistent description of molar abundances. The compositions of the proto-Earth's core and mantle were updated after each equilibration event, and these steps were repeated until the last embryo impact, after which planetesimals that accreted were equilibrated at a pressure defined by Pequil_ptsml, with a portion of the mantle defined by Mmelt_ptsml, and a fraction of equilibrating core defined by kcore_ptsml.
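The fitted partitioning relation and the mole-fraction form of the exchange coefficient can be evaluated as below. The base-10 logarithm is assumed, and the coefficients in the example are placeholders, not the fitted values of Supplementary Table S1.

```python
def log10_kd(a, b, c, T_K, P_GPa):
    """Fitted metal-silicate exchange coefficient for one element:
    log10 KD = a + b/T + c*P/T, with T in Kelvin and P in GPa."""
    return a + b / T_K + c * P_GPa / T_K

def kd_from_mole_fractions(x_i_met, x_io_sil, x_fe_met, x_feo_sil, n):
    """KD for an element of valence n from metal/silicate mole fractions:
    KD = [Xi_met * (XFeO_sil)^(n/2)] / [XiO_sil * (XFe_met)^(n/2)]."""
    return (x_i_met * x_feo_sil ** (n / 2)) / (x_io_sil * x_fe_met ** (n / 2))
```

In the model, the fitted log10 KD at each impact's (P, T) is matched against the mole-fraction expression, which is what drives the iteration on mantle FeO described above.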
Accretion histories and equilibration parameters
A comparison of the accretion histories from our different simulation suites shows the fast formation times of Earth analogs in GT and ANN simulations (Fig. 2, Supplementary Fig. S2). The time it takes for Earth analogs to reach 90% of their final mass (t90) is given in Table 1. Despite large variations, these timescales are all consistent with the 182W anomaly of Earth's mantle due to the large effect of varying kcore on permissible timescales (Fischer and Nimmo, 2018). The distribution of impactor masses for each type of simulation is governed by initial embryo masses near 1 AU. As expected, simulations that begin with the largest embryos also have the largest median embryo masses regardless of simulation type (Fig. 2c). These simulations, which have more frequent large embryo collisions, are also able to reach the highest Pequil on average. However, regardless of initial embryo mass, simulations with longer accretion timescales (CEJS and EI) reach higher maximum average Pequil than those with shorter accretion timescales (ANN and GT). It is important to note that the curves shown in Fig. 3 are averaged over multiple Earth analogs and therefore do not fully capture the stochastic nature of the late stages of accretion, which results in large differences in Pequil between Earth analogs, and even between Earth analogs from the same simulation suite. Here, GT simulations are split by their initial masses because there is no dependence on the initial embryo-to-planetesimal mass ratio (Supplementary Fig. S3).
Compositions of Earth analog cores and mantles
The core and mantle compositions of Earth analogs can be determined by combining the equilibration parameters and impact parameters determined from N-body simulations with our model of core formation. The metal-silicate partitioning of elements, as described by KD, is sensitive to the partitioning of Fe between silicate and metal, or the fO2 of equilibrating material (relative to the iron-wüstite (IW) buffer). For a constant set of equilibration parameters, average FeO concentrations vary greatly between Earth analogs from different simulations because of differences in the initial semi-major axis of accreting material (Fig. 4). Overall, Earth analogs produced by simulations from the same suite have similar mass-weighted average semi-major axes, regardless of initial embryo mass (Table 1, Supplementary Fig. S4). Ni, Co, Nb, Ta, Mo, and W are siderophile elements that are either moderately refractory or refractory and have been used to trace core formation processes (Fischer et al., 2017; Huang et al., 2020, 2021; Jennings et al., 2021; Rubie et al., 2015). The partitioning behavior of each element is given in Supplementary Table S1. Si and O combined make up ~2-7 wt% of the core, within the ranges allowed from geophysical and geochemical constraints (Fischer et al., 2011). Other light elements, such as H, C, and S, could also contribute to the density deficit of Earth's core (Blanchard et al., 2022; Fischer et al., 2020; Suer et al., 2017; Tagawa et al., 2021). Mantle compositions differ between Earth analogs from the simulations of Kaib and Cowan (2015) (smallest embryos) and those from EI simulations (largest embryos). These differences result from the larger average Pequil that correspond to large initial embryo masses (Fig. 3). The effects of kcore_emb on Ni and Co concentrations remedy the discrepancy in mantle composition between many Earth analogs and Earth. For example, larger values of kcore_emb would be required for Earth analogs with lower average Pequil.
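The oxygen fugacity relative to the IW buffer mentioned above is commonly estimated, under ideal mixing, as ΔIW ≈ 2 log10(x_FeO/x_Fe), the ratio of the FeO mole fraction in the silicate to the Fe mole fraction in the metal. The mole fractions used below are illustrative, not values from the paper:

```python
import math

# Oxygen fugacity relative to the iron-wustite buffer, in the common
# ideal-mixing approximation: dIW = 2 log10(x_FeO / x_Fe).

def delta_iw(x_feo_silicate, x_fe_metal):
    return 2.0 * math.log10(x_feo_silicate / x_fe_metal)

# E.g. ~6 mol% FeO in the silicate against ~85 mol% Fe in the metal
# gives a moderately reduced composition:
print(round(delta_iw(0.06, 0.85), 2))  # -> -2.3
```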
Modeled mantle Mo and W concentrations, on the other hand, are systematically higher than those of Earth's mantle, even when considering the sensitivity to different parameters. It is possible that including the effects of C on Mo and W partitioning could make both elements more siderophile, reducing their mantle abundances (Jennings et al., 2021). In addition, the mantle compositions of Mo and W are highly uncertain (Liang et al., 2017). In contrast to mantle compositions, core compositions are more sensitive to certain core formation parameters (temperature for O and fO2 for Si). The large deviations in core compositions indicate that Earth's core composition could vary significantly depending on the chosen conditions of core formation and could be more difficult to constrain.
We emphasize that our goal is not to find the best set of parameters to match Earth's composition but to show the compositions of Earth analogs as evidence that our model can reproduce Earth's mantle composition reasonably well, in addition to producing plausible core Si and O concentrations.
Another compositional effect not shown in Table 2 or Fig. 6 comes from planetesimals that accrete after the last embryo impact. A large fraction of this material must equilibrate, or else Earth's mantle siderophile element composition would greatly exceed mass estimates of the late veneer (Holzheid et al., 2000). Simulations with large percentages of material accreting after the last large impact have lower average Pequil due to the equilibration of these planetesimals at low pressures. When looking specifically at GT simulations, it is difficult to distinguish any trend in composition with initial embryo mass because larger initial embryo masses correlate with large percentages of material accreted after the last large impact (Fig. 5a-b and Table 1). In contrast, smaller percentages of material accreting after the last large impact in simulations with larger initial embryo to planetesimal mass ratios cause mantle NiO and CoO concentrations to increase (Table 1 and Supplementary Fig. S6). Nevertheless, the effects of material accreting after the last large impact, which are dependent on the type of N-body simulation, are not as significant as varying initial embryo masses.
Discussion
We have compiled N-body simulations and combined them with models of core formation in which Pequil and fmelt are parameterized using a melt-scaling law. We find Earth's mantle composition to be most sensitive to the initial embryo masses in N-body simulations and the chosen value of kcore. The sensitivity of Earth's mantle composition to these parameters allows Earth's mantle composition to be reproduced for different scenarios of Solar System formation and different initial conditions within these scenarios. Below, we explore the effects of the assumed crystallization timescale of magma oceans and the potential implications of different accretion histories for the Moon-forming impact.
Magma ocean lifetimes and Earth's melting history
The results presented above assume instant magma ocean crystallization and planetesimals equilibrating with the next embryo impact. To test the effects of long-lived magma oceans, we use the opposite endmember scenario of infinite magma ocean lifetimes. Here all planetesimals equilibrate with magma oceans generated by the previous embryo impact.
Planetesimals that accrete before the first embryo impact are equilibrated with the initial embryo that grew into the Earth analog. Following an embryo impact, the melt pool will isostatically adjust to form a global magma ocean on the order of 10²-10⁵ years (Reese and Solomatov, 2006). Therefore, subsequent planetesimals will equilibrate at the base of the global magma ocean rather than with the melt pool. In contrast to the instant crystallization scenario, surface entropy in the long-lived magma ocean case was set to 3160 J/K/kg, corresponding to surface temperatures of ~2000 K. Compared to the two endmember scenarios we explored, realistic magma ocean lifetimes depend on the timing of embryo impacts and the efficiency of heat loss from the proto-Earth's interior to space. The presence or absence of an atmosphere, which hinges on the complex interplay between volatile delivery and atmosphere erosion, would therefore strongly influence the equilibration of planetesimals during accretion and could affect the compositions of Earth analogs formed in certain simulation types (Elkins-Tanton, 2008; Lebrun et al., 2013; Sakuraba et al., 2021). Assuming persistent atmospheres throughout accretion, the different timescales of Earth's accretion between different simulation types could be related to varying proportions of magma oceans that persist until the following embryo impact (Fig. 7). Specifically, the fast accretion timescales in GT and ANN simulations result in 70.1% and 76.9% of embryos impacting within 2 Myrs of each other. These fast accretion timescales could promote persistent magma oceans and planetesimal impacts onto existing magma oceans (de Vries et al., 2016). In contrast, CEJS and EI simulations have less frequent embryo impacts, with only 30.0% and 5.3% of impacts occurring <2 Myrs apart. Despite these correlations, the stochastic nature of N-body simulations complicates the interpretation of these data.
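The quoted percentages are straightforward to compute from a list of embryo impact times: the share of inter-impact gaps shorter than the assumed maximum magma ocean lifetime. A minimal sketch with invented impact times:

```python
# Share of embryo impacts arriving within a given window (e.g. 2 Myr)
# of the previous impact. Impact times below are invented.

def fraction_within(impact_times_myr, window=2.0):
    times = sorted(impact_times_myr)
    gaps = [b - a for a, b in zip(times, times[1:])]
    if not gaps:
        return 0.0
    return sum(g < window for g in gaps) / len(gaps)

impacts = [0.5, 1.2, 2.0, 3.1, 8.0, 25.0, 40.0]
print(fraction_within(impacts))  # -> 0.5
```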
For example, small amounts of mass originating from large semi-major axes could contribute a significant quantity of Earth's volatiles at specific times during accretion. Furthermore, magma ocean lifetimes are dependent on the mass and composition of existing atmospheres, which could be related to the delivery of different volatile species and differences in their solubilities (Gaillard et al., 2022;Lichtenberg et al., 2021). Whether the proto-Earth could sustain an atmosphere during the giant impact stage of accretion and its relationship to the composition of Earth's core and mantle still needs to be explored.
The masses assigned to embryos at the start of the giant impact stage of accretion define the initial conditions of N-body simulations. Jacobson and Morbidelli (2014) predicted that Mars-sized embryos would best match the Solar System's architecture. It is often assumed in N-body simulations that all embryos begin with equal masses. Recent advances in simulating the formation of embryos suggest that the presence of a dissipating gas disk promotes formation of the largest embryos inside 1 AU, with the largest embryos reaching up to ~10-50% of Earth's mass (Clement et al., 2020; Walsh and Levison, 2019; Woo et al., 2021). Our results suggest that all initial conditions can match Earth's mantle composition due to the unconstrained value of kcore. However, the number of embryo impacts is independent of simulation type and decreases with the average initial embryo mass from which Earth analogs form (Fig. 8). Within GT simulations, increasing the initial embryo to planetesimal mass ratio increases the number of embryo impacts, because planetesimals supply less mass during Earth's formation. For CEJS and EI simulations, where magma oceans are more likely to crystallize before the next embryo impact, the number of embryo impacts is also more likely to be representative of the number of magma oceans experienced by Earth analogs. For GT and ANN simulations, which are more likely to have persistent magma oceans, the number of embryo impacts corresponds to the maximum number of magma oceans Earth analogs would have had. Earth analogs in EI simulations, which use initial conditions from the outputs of an embryo growth model with ~1:1 embryo to planetesimal mass ratios, likely experienced between 2-5 magma ocean events.
Geochemical estimates suggest that Earth experienced at least two magma oceans during its accretion (Tucker and Mukhopadhyay, 2014). We show how the initial embryo masses and embryo to planetesimal mass ratios can be used to place constraints on the maximum number of magma oceans and outgassing events Earth experienced. Future constraints on the geochemical consequence of magma oceans may therefore place constraints on the masses of embryos in the early Solar System.
Implications for Moon formation
The likelihood of specific Moon-forming impact scenarios and the resulting melting of Earth's mantle can be evaluated by focusing on the last embryo impact, or sequence of embryo impacts, experienced by each Earth analog (Fig. 9) (Jacobson and Morbidelli, 2014). Increasing initial embryo masses results in larger last embryo impacts that melt a large fraction (>90%) of Earth's mantle (fmelt). Equal-massed embryos of 0.005 Earth masses used in ANN-KC15 simulations result in Moon-forming impactors that are too small to match any Moon-forming scenario. The most probable Moon-forming scenarios based on impactor masses are the canonical hit-and-run and rapidly rotating Earth scenarios (Canup and Asphaug, 2001; Ćuk and Stewart, 2012; Reufer et al., 2012). Simulations that begin with Mars-sized embryos are most consistent with the canonical hit-and-run scenario, whereas those with smaller embryos have masses most consistent with the rapidly rotating Earth scenario. On the other hand, equal-size impactors are rarely achieved, although the probability of such a scenario could be increased with larger initial embryos, or during pebble accretion (Canup, 2012; Johansen et al., 2021). By also considering impact velocities, it becomes difficult to simultaneously match the scaled impactor masses and high impact velocities required by the rapidly rotating Earth scenario (Kaib and Cowan, 2015). However, we do note that smaller initial embryo masses correspond to higher likelihoods of fast (vrel > 2) last embryo impacts (Fig. 9e-g). Our results thus suggest that the probability of each Moon-forming scenario is dependent on the initial conditions in N-body simulations, where larger initial embryo masses promote larger and slower impactors. Mars-sized initial embryos are most consistent with the canonical hit-and-run scenario.
Constraining the initial conditions in N-body simulations will therefore aid in understanding the likelihood of last embryo impacts that fall within the range allowed by each Moon-forming scenario.
Recent theories of Moon formation have emerged that add to the range of possible scenarios presented above. A canonical impact onto an existing magma ocean aids in matching the compositional similarities between the Earth and the Moon (Hosono et al., 2019). Even though impactor masses and impact velocities allowed in such a scenario are similar to the canonical hit-and-run, the probability that the last embryo impact occurs onto an existing magma ocean depends on the time between the last two embryo impacts. We find this probability to be <5% for CEJS and EI simulations, and <15% for GT and ANN simulations, assuming a maximum magma ocean lifetime of 2 Myrs (Supplementary Fig. S8). When focusing only on GT simulations, this probability increases to 22.5%. It is also a possibility that the Moon formed from a series of impacts throughout Earth's accretion (Rufu et al., 2017). Interestingly, 99 out of 109 (90.8%) of Earth analogs experience complete mantle melting at some point during accretion. Therefore, Earth analogs that don't experience large Moon-forming impacts are still likely to have experienced complete mantle melting from a prior large embryo impact. Such impacts could aid in the formation of moonlets. Evaluating the likelihood of Moon formation from multiple impacts is beyond the scope of the current work but should be investigated by future studies that incorporate realistic impact histories from N-body simulations with hydrodynamic impact simulations.
Conclusions
We have compiled N-body simulations covering four models of Solar System formation.
Building upon previous models of accretion and core formation (Fischer et al., 2017; Rubie et al., 2011), we incorporate the melt-scaling law of Nakajima et al. (2021).
Acknowledgments
Kaib (1846388). We thank M. Nakajima for providing the publicly available melt-scaling law and assistance with its use in our model. We also thank J.
Dong and H. Fu for helpful discussions and advice throughout the duration of this project.
Finally, we thank D. Rubie and an anonymous reviewer for their suggestions that have substantially improved the manuscript.
Pequil is the pressure at the base of the melt pool, fmelt is the fraction of the target's mantle that is melted, kmantle_melt is the fraction of the melted mantle that equilibrates with the impactor's core (such that fmelt*kmantle_melt = fraction of the whole mantle that equilibrates), and kcore is the fraction of the impactor's core that equilibrates.
Table 2 reports percent changes (expressed as new value minus reference, divided by reference). For each set of parameters, the compositions of all Earth analogs are averaged and compared to the average core and mantle compositions from the reference model. Signs ("+" and "-") indicate the direction the parameter is varied to result in the percent change shown. Parameters without signs are those with contrasting effects depending on the element of interest. See Table 2 for the sensitivities of all elements to all parameters.
1 Disk surface density is defined as Σ = Σ0 r^(-α), where α is the value shown in the column.
2 Embryo-to-planetesimal mass ratio is the total mass of embryos to the total mass of planetesimals. Parameters derived from the melt-scaling law of Nakajima et al. (2021).
3 Inner fO2 was set for each simulation based on the average FeO of Earth analogs (Fig. S5).
4 Endmember simulations are those with the lowest and highest average Pequil in Fig. 5; shown are the averaged mantle and core compositions of Earth analogs when one parameter is varied. | 2023-01-12T17:25:28.338Z | 2023-02-21T00:00:00.000 | {
"year": 2023,
"sha1": "1b8e2c4d335e5ac300318d24beee4f82072054d8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "af6389e3d095295fe3039c2d956aa8f37381e72a",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
} |
55984912 | pes2o/s2orc | v3-fos-license | Electroluminescence induced by Ge nanocrystals obtained by hot ion implantation into SiO2
Commonly, electroluminescence (EL) from Ge nanocrystals (Ge NCs) has been obtained by room temperature (RT) Ge implantation into a SiO2 matrix followed by a high temperature anneal. In the present work, we have used a novel experimental approach: we have performed the Ge implantation at high temperature (Ti) and subsequently a high temperature anneal at 900 °C in order to grow the Ge NCs. By performing the implantation at Ti = 350 °C, the electrical stability of the MOSLEDs was enhanced, as compared to the ones obtained from RT implantation. Moreover, by changing the implantation fluence from Φ = 0.5×10¹⁶ to 1.0×10¹⁶ Ge/cm², we have observed a blueshift in the EL emission peak. The results show that the electrical stability of the hot implanted devices is higher than that of the ones obtained by RT implantation.
I. INTRODUCTION
Since the discovery of photoluminescence (PL) in porous Si,1 a large number of studies concerning the properties of Si or Ge nanoclusters (NCs) have been reported. Several techniques have been used in order to produce the NCs embedded in the matrix.2-4 In addition to its compatibility with microelectronic technology, ion implantation is very precise in controlling the amount and depth of the excess ions introduced in the matrix, thus presenting great reproducibility.
First experiments using ion implantation as a technique to produce Si or Ge NCs were already reported in the early 1990s,2-4 and their promising results were followed by an intense research activity, as illustrated by the review of Rebohle et al.5 However, in all these cases, the Ge implantation was performed at room temperature (RT), followed by a high temperature anneal.
Recently we have used a different experimental approach. Instead of performing Ge implantation into the SiO2 layer at RT, we have done it keeping the substrate at 350 °C and then annealed the samples at 900 °C. As a consequence, the 390 nm band increased its PL yield by a factor of almost 4 as compared with the RT implantation. Moreover, by finding the proper implanted Ge concentration we were able to further increase the PL yield of the 390 nm band by another factor of 3. 6 The main goal of the present paper is to study the electroluminescence (EL) emitted by metal-oxide-semiconductor (MOS) devices made with the Ge NCs obtained by hot implantation and to compare the results with those produced by RT implantation.
II. EXPERIMENTAL PROCEDURE
A 195-nm-thick SiO2 layer, thermally grown onto an n-type Si ⟨100⟩ wafer by dry oxidation at 1050 °C, was implanted with 120 keV Ge ions, keeping the substrate temperature constant at RT or 350 °C, respectively. The implantations were done at fluences of 0.5 and 1.0×10¹⁶ Ge/cm², corresponding to a Gaussian-like depth profile with a peak concentration of about 1.5 and 3 at.%, respectively, 90 nm from the SiO2 surface. Subsequently, the as-implanted samples were submitted to a furnace anneal at 900 °C for 30 min in flowing N2. Then, a SiON layer with a thickness of 100 nm was deposited onto the SiO2 layer by plasma-enhanced chemical vapor deposition in order to enhance the electrical stability of the device,7 followed by the same annealing process. MOS dot structures for EL studies were prepared using sputtered layers of indium tin oxide and Al as front and rear electrodes, with thicknesses of 100 and 150 nm, respectively. Photolithography was used to make a dot matrix pattern with a dot diameter of 200 μm. Finally, an anneal at 400 °C for 30 min was performed to improve the Ohmic behavior of the contacts. A sketch of the device is shown in Fig. 1.
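For a Gaussian implantation profile, the peak concentration follows from the fluence as N_peak = Φ/(√(2π)·ΔRp). The projected-range straggle ΔRp ≈ 20 nm used below is an assumed value for 120 keV Ge in SiO2 (the paper only quotes the 90 nm projected range), chosen to show that the quoted ~1.5 and ~3 at.% peaks are of the right order:

```python
import math

# Peak concentration of a Gaussian implant: N_peak = fluence / (sqrt(2*pi) * straggle).
# The straggle value and the SiO2 atomic density are assumptions.

ATOMIC_DENSITY_SIO2 = 6.6e22  # atoms/cm^3, approximate

def peak_at_percent(fluence_cm2, straggle_cm):
    n_peak = fluence_cm2 / (math.sqrt(2.0 * math.pi) * straggle_cm)
    return 100.0 * n_peak / ATOMIC_DENSITY_SIO2  # atomic percent

for phi in (0.5e16, 1.0e16):
    print(phi, "Ge/cm^2 ->", round(peak_at_percent(phi, 20e-7), 1), "at.%")
```

With these assumptions the two fluences give roughly 1.5 and 3.0 at.%, consistent with the values quoted in the text.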
The EL measurements were performed at RT utilizing a Triax 320 spectrometer with an R928 Hamamatsu photomultiplier. Current injection was done by a Keithley 2410 sourcemeter with a positive voltage applied to the gate. This configuration corresponds to electron injection from the Si substrate into the SiO2 layer.
Structural characterization of the samples was performed by transmission electron microscopy (TEM), using a 200 keV JEOL microscope with the samples prepared in cross-sectional mode by mechanical polishing and ion milling techniques.
A. TEM Results
The TEM measurements reveal the formation of crystalline Ge NCs in both the RT and hot implanted samples after the 900 °C anneal, as shown in Figs. 2(a) and 2(b), respectively. For the RT implanted sample [Fig. 2(a)] we have found a Gaussian-like NC size distribution with a mean diameter of 4.2 nm.
Concerning the hot implantation [Fig. 2(b)], the mean size and size distribution differ significantly from those observed when the Ge implantation is done at RT. In fact, the Ge NC distribution presents a positive gradient of crystal sizes along depth. The shallow region shows quite small nanocrystals of about 2 to 3 nm in diameter. The intermediate one contains medium-size Ge NCs of around 3 to 5 nm, and in the deepest region it is possible to observe larger NCs ranging from 5 to even 9 nm in diameter.
B. Electroluminescence measurements
In this set of experiments, the electrical properties of the MOS light-emitting devices (MOSLEDs) were analyzed. Figure 3 shows the EL spectra of the devices, under a constant current injection density (J) of 320 μA/cm², for the hot and RT samples implanted at the two different fluences. The first observed feature is that the intensities of the EL peaks of the hot implanted samples are around 30% lower than those corresponding to the RT implanted ones. Another characteristic present in these spectra is the blueshift of 8 nm in the EL peak position of the devices implanted with the higher fluence (1×10¹⁶ Ge/cm²), as compared to the ones implanted with the lower fluence (0.5×10¹⁶ Ge/cm²). It should be mentioned that the 310 nm band seen in Ref. 6 could not be detected in the present experiment because non-UV-transparent optics were utilized.
Figure 4 displays the EL intensity as a function of the injected carriers under a constant current density of 320 μA/cm², with the spectrometer centered on the wavelength of the EL peak of each device. The experiments were done for the hot and the RT implanted samples, and for both implantation fluences. It was verified that the hot implanted samples show an electrical stability that is around three times larger than that obtained with the RT implants.
IV. DISCUSSION AND CONCLUSIONS
As mentioned before, in previous works4-6 the Ge NCs embedded in the SiO2 matrix were obtained by RT implantation followed by a high temperature anneal. When excited at 5.1 eV, two PL bands were obtained, one at 310 nm and the other, with much higher yield, at 390 nm. The origin of the PL bands was attributed to radiative defects present at the Ge NC/matrix interface, specifically, neutral oxygen vacancies (NOVs) such as ≡Ge-Si≡ and/or ≡Ge-Ge≡ defects generated by the local deficiency of oxygen and the incorporation of Ge into the SiO2 network surrounding the NCs.5,8,9 Related to the EL measurements, the lower EL intensity of the hot implanted samples (see Fig. 3) can be explained based on the results of the TEM observations. The hot implanted samples have significantly larger NCs at the deepest region of the implantation profile, as compared to the RT implanted ones. These larger NCs act as scattering centers for the electrons during the injection process, as illustrated by Fig. 1, causing a kinetic energy loss and thus decreasing the corresponding EL cross section, producing a less intense emission.
The blueshift in the EL spectra observed for the highest implantation fluence (see Fig. 3) can qualitatively be explained as follows: the EL induced by the Ge NCs is due to NOV-type radiative defects, such as ≡Ge-Si≡ and/or ≡Ge-Ge≡, with emission energies of 2.92 and 3.1 eV, respectively.7 Both of them are among the ones that contribute the most to the observed EL band. A higher Ge implantation fluence produces larger NCs after the thermal anneal, due to the higher Ge concentration in the matrix. Consequently, the ≡Ge-Ge≡ to ≡Ge-Si≡ ratio increases, producing a more intense emission of the 3.1 eV component and a corresponding reduction of the 2.92 eV component of the EL band, resulting in a slight blueshift, as observed in Fig. 3.
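A back-of-the-envelope check of this mechanism: converting the two emission energies to wavelengths via λ(nm) = 1239.84/E(eV) puts the ≡Ge-Ge≡ component near 400 nm and the ≡Ge-Si≡ component near 425 nm, so shifting their relative weights moves the combined peak by a few nm toward the blue. The mixing weights below are illustrative assumptions only:

```python
# Crude intensity-weighted peak position of the two NOV components.
# lambda(nm) = hc/E with hc = 1239.84 eV*nm.

HC_EV_NM = 1239.84

def wavelength_nm(energy_ev):
    return HC_EV_NM / energy_ev

def weighted_peak(w_gege, w_gesi):
    """Weighted mean of the 3.1 eV (Ge-Ge) and 2.92 eV (Ge-Si) lines."""
    total = w_gege + w_gesi
    return (w_gege * wavelength_nm(3.1) + w_gesi * wavelength_nm(2.92)) / total

low_fluence = weighted_peak(0.4, 0.6)   # more Ge-Si weight (assumed)
high_fluence = weighted_peak(0.7, 0.3)  # more Ge-Ge weight (assumed)
print(round(low_fluence - high_fluence, 1))  # blueshift in nm
```

With these invented weights the shift comes out near 7 nm, of the same order as the 8 nm blueshift observed in Fig. 3.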
Figure 4 indicates that the hot implanted samples can sustain an approximately three times larger number of injected charges before device breakdown (QBD) occurs, as compared with the RT implanted ones. Since QBD depends, among other factors, on the injected current density and operation time, this means that a MOSLED made utilizing hot implantation can sustain a current density three times higher, giving a higher EL intensity, or, for the same current density, a three times longer operation time. As breakdown is a statistical event, the improvement factor may vary; however, the general tendency is clear.
The above feature can in principle be attributed to the fact that hot implantation produces less damage in the SiO2 layer during the implantation process and a higher quality SiO2/Si interface, thus resulting in a lower number of nonradiative defects present in the oxide and at the interface.
In order to compare the PL and EL emissions, all the implantation and annealing parameters were the same as those reported in Ref. 6, which gave the maximum PL emission. It is possible that the optimal conditions for the EL emission are not the same as the PL ones.
In summary, in the present communication we have found that devices based on Ge NCs produced by hot implantation have greater electrical stability, as compared with the ones produced by RT implantation. Concerning the EL yield, as mentioned above, it is possible that the best conditions for the EL emission are not the same as the PL ones. It is necessary to perform further optimizations aiming to increase the EL emission of the MOS devices. In this sense, work is under way.
FIG. 1. Schematic diagram of the MOSLED (not drawn to scale). In detail: representation of the Ge NCs for (a) RT implantation and (b) high temperature implantation. | 2018-12-08T15:45:15.487Z | 2009-11-24T00:00:00.000 | {
"year": 2009,
"sha1": "270dcf148e44a76388a8e9d008c67e46537064a2",
"oa_license": "CCBYNCSA",
"oa_url": "https://lume.ufrgs.br/bitstream/10183/96129/1/000731445.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "270dcf148e44a76388a8e9d008c67e46537064a2",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
152042573 | pes2o/s2orc | v3-fos-license | “Government in India and Japan is different from government in Europe”: Asian Jesuits on Infrastructure, Administrative Space, and the Possibilities for a Global Management of Power
This paper investigates the influence of geographical distance on the practices and concepts of Jesuit administration in the early modern period. It discusses in particular select letters by Alessandro Valignano from East Asia, to demonstrate how loyal Jesuits in the Far East asked for administrative adjustments in order to overcome the enormous infrastructural difficulties involved in upholding constant epistolary communication with Rome. Valignano over and again stressed both the difference and the distance between Asia and Europe and thought that both factors necessitated an accommodation of the order’s organizational framework. This case study thus helps address the broader questions of how the members of the Society of Jesus conceived of global space. It becomes clear that, while they hoped for institutional unity and insisted frequently on procedural uniformity, they also openly acknowledged that due to distance and cultural differences there never could exist an entirely homogeneous, single global Jesuit space.
In 1609, the great Jesuit theologian Francisco Suárez (1548-1617) declared the entire Society of Jesus ideally to consist of only "one nation and one province." He seemed to suggest that the field of Jesuit activities should be considered as one seamless, globally integrated, homogeneous space. If there were differences and distinctions, they should not affect Jesuit organization. Suárez was of course aware of the fact that the actual Jesuit order existed within a more fragmentized space. There was more than just one world-province. The spatial breakdown of the order's administration into distinctive units (provinces) had begun just a few years after the founding of the order in 1540. According to Suárez, however, the creation of provinces had been done for pragmatic reasons alone. It did not imply relinquishing a global or universal perspective. Regionalizing the Society's governmental structure was necessary due only to the "distance between [Jesuit] locations."1 Suárez's statement highlights the relevance of spatial concepts for Jesuit self-understanding. Consciously, the order had incorporated different or even conflicting notions of geography and space into its own identity. On the one hand, official Jesuit documents time and again stressed the global nature of the order. The Constitutions proudly mention that the order is spread across the entire globe.2 Luke Clossey has recently produced a great deal of evidence detailing the global integration of Jesuit thought and activities.3 Such efforts were, in fact, so prominent that the possibility of overstretching missionary activities was a recurrent theme of Jesuit discourse. On the other hand, a contradictory approach to space was equally important for the order. "Being local" was another prominent strand within Jesuit thought. The Jesuits' willingness to localize their behavior has always been noted and even in its earliest days received considerable (often unwanted) attention. 
Their favorable stance toward localization shaped their attempts to come to terms with global administration.6 To evaluate the ensuing discourse about governance, space, and power, this article draws mainly on Jesuit sources from India, Japan, and Southeast Asia. In particular, I will examine the letters of the well-known Jesuit Alessandro Valignano (1539-1606), long-time visitor and provincial in the Asian missions between 1579 and 1606.7 It will become clear from this analysis that the Jesuits in Asia consciously and cautiously explored the distinctive administrative needs of extra-European provinces without ever abandoning their commitment to global administrative uniformity.8 From a modern perspective, informed by theoretical and empirical sociologies of organization, Jesuit thinking about the impact of space and infrastructure upon institutions and organization might seem conceptually humble. Nevertheless, these debates should not be considered insignificant only because they did not lead to systematic theoretical and organizational thinking. For in essence we glimpse here some of the first European attempts to conceptualize globalization from an administrative vantage point. The Jesuits, like the Spanish and Portuguese empires before them, when trying to control the vast spaces of the newly "discovered" territories, at first intuitively relied on elements of social thought that had emerged over centuries, with the much smaller dimensions of European states and commonwealths in mind. Yet it soon became questionable if and to what degree European notions of social and political realities could still be meaningfully applied when the geographical spaces were altered so significantly.
Indeed, Valignano and the other Jesuits that we will encounter, and many others besides them, were struggling almost daily to decide if and how such key concepts as uniformity, social coherence, obedience, compliance, or control could make sense if you were far away from Europe in Japan, India, or China. It needs to be stressed that these were issues that the Jesuits discussed initially not for their conceptual or theoretical importance but because their daily experience with the many practical difficulties in applying European concepts to an increasingly global organization forced them to.9 Their semi-explicit efforts to assess the structural difficulties of globalization were part of Europe's trial-and-error approach to developing technologies for managing global integration. Herein lies the historical significance of the cautious negotiation of administrative structures between Europe and Asia. 6 The highly critical memorialistas of Spain and Italy are discussed in Markus Friedrich, "Governing the Early Modern Society of Jesus"; Dauril Alden, The Making of an Enterprise: The Society of Jesus in Portugal, Its Empire, and Beyond, 1540-1750 (Stanford: Stanford University Press, 1996); Liam Matthew Brockey, The Visitor: André Palmeiro and the Jesuits in Asia (Cambridge, MA: Harvard University Press, 2014) deals with a slightly later period, yet his insights are also helpful for earlier developments. I think, however, that Brockey's very harsh critique of recent evaluations (particularly by Clossey, Salvation) of the Jesuits as "first global players" (428-30) is going too far.
India Is Different from Europe
Jesuit administrators in the Far East regularly stressed how different India, Japan, or China were from Europe.10 Their letters expressed constant amazement about the incomprehensibility of the East from a purely European vantage point. The "diversity between these provinces and Europe" was a constant refrain in the correspondence. When Alessandro Valignano wrote a long survey of the Indian missions in 1580, he started by mentioning that "this province of India is very different from all other European provinces, not only regarding climate and people, but also regarding religion, customs, and ways of living, and also with respect to our own lifestyle and pious ministries and establishments." The difference at times was so large, he continued, that "certain ways of acting that prove helpful in India are dismissed as inopportune in Europe." During his thirty-two years of service in the region, Valignano continually insisted on the differences between the East Indies and Europe. Soon, however, he also began to highlight the fact that there were multiple cultures and sub-regions even within the vast Jesuit province of India.12 Just as India and Europe were hard to compare, so were Japan and India. In 1583, for instance, he produced a long supplement to his 1580 report about India. This new memorandum dealt extensively with Japan, which Valignano had visited two years earlier. Now he focused on distinguishing Japan from both Europe and India. His reasoning followed the same logic as earlier: Japan is so distinct that "it is impossible [from afar] to understand the situation there and the necessary nature of the government that is required."13 The cultural fragmentation of the region made it impossible even for Jesuits living in Asia to understand more than one region properly.
In 1585, for instance, he declared Jesuits in Malacca incompetent in regard to the specific needs and context of the mission in the Moluccas.14 Time and again, Valignano emphasized that Europe and India, India and Japan, Japan and the Moluccas were hardly comparable. His abundant correspondence with the Jesuit curia in Rome repeatedly drove this point home. Juxtaposing Europe and India and insisting on their partial incommensurability became a key element of his reporting.
The differences both within Asia and between Asia and Europe had, according to Valignano and other leading Jesuits of the region, significant consequences for Jesuit administration. India was not like Europe and therefore "our government in India and Japan cannot be the same as in Europe."15 European laws and norms could not be followed or implemented there.16 The cultural and environmental fragmentation of the globe called for a parallel regionalization of governmental practices and routines. The specific administrative needs of India or Japan were, in fact, described by Valignano as being the very opposite of European requirements. Administrators who were thought to be reliable and apt in Europe were often found to be entirely useless in India and vice versa.17 Obviously, for Valignano, Jesuit superiors required varying qualities and abilities depending on where they were stationed. For some Jesuits, differences between Europe and Asia even appeared to invalidate the customary means for establishing a working relationship between Jesuit administrators in the field and their superiors.
According to Jesuit administrative thinking, the order's headquarters in Rome and local or regional superiors were connected through regular correspondence. Ideally, standardized letters went back and forth, keeping Rome up-to-date about the situation on the ground. This system of "lettered governance" required an enormous amount of trust in the written word.18 It relied on the assumption that letters were indeed able to convey relevant information and that writing could adequately describe "the world." The order's headquarters in Rome thus "outsourced" direct observation and inspection to local agents and still claimed to be able to make meaningful decisions based on the observers' written descriptions.
From Valignano's vantage point, however, this basic assumption was somewhat naïve. He constantly wrote and described India for European Jesuits, yet his trust in the efficacy of his own letters and descriptions was sometimes shattered: "My frequent writing notwithstanding, many things about Japan cannot be made understandable by letters."19 Although he tried as hard as he could to describe the wonders of Asia in his writings, he confessed that "occasionally I lack confidence in my ability to do so."20 Speaking convincingly about Asia to someone who had no direct personal experience with the region was hard enough, but writing convincingly about Asia was, according to Valignano, particularly difficult. Occasionally, Valignano was unsure if he could make himself properly understood in Rome when portraying far-away lands.21 "Describing" seemed to be less a simple act executed by the writer of a letter than a lengthy process of conversation between Rome and Goa in order to determine the correct meaning of information: "Because of the difference and the distance it is impossible [for Rome] to take in everything completely with the first sip," he wrote once, and went on to correct Roman misunderstandings of his earlier writings.22 This was a far cry from the simple assumption undergirding the Jesuit system of correspondence that letters were unambiguously able to inform and describe.
Stressing the indescribable nature of India and Japan was occasionally a tactical move. On several occasions, Valignano wished to return to Europe, and the ostensible necessity for personal conversations with the Jesuit general, the pope, or the Spanish-Portuguese king in order to inform them about India presented a convenient argument. But there is more to Valignano's nuanced assessment of the value of written information. His partial skepticism towards the potential of "lettered governance" is connected to a broader debate in early modern Europe about the relative status of oral and written, first-hand and second-hand information. The history of early modern production of knowledge and management of information can be at least partially described as a struggle to assess, clarify, and secure the epistemological status of written information. This process was particularly acute in the case of information that came from far-away lands and was not backed by personal contact and oral conversation. Because the Society of Jesus is generally-and
journal of jesuit studies 4 (2017) 1-27

rightly-considered to be a champion of written communication, it needs to be stressed all the more that a certain uneasiness with the generals' seemingly limitless trust in written representations of far-away events and circumstances was occasionally voiced even by their most trusted exponents.23
India Is Distant from Europe
While Jesuit governance in India required special attention because of general differences between Asia and Europe, the geographic distance between the two areas was an even greater factor.24 Valignano and his associates in Asia stressed that the slow movement of letters made tight control of India by Rome entirely impossible.25 If communication between India and Europe took months and years, this clearly had an impact on the practicalities of government. The Jesuits were not shy to acknowledge the inverse relationship between geographic distance and administrative efficiency: "Experience shows that this province cannot be governed by letters and requests sent from India and answered by Rome, because it takes too long and a myriad of circumstances and vicissitudes alter the status quo before we have an answer," as one prominent Jesuit stated.26 The constraints imposed by early modern infrastructure were of course most obvious from a global perspective. Yet even within Europe Jesuits acknowledged that geographic distance and infrastructural impediments limited the possibilities for Rome to manage local affairs. The infrastructural complications of Indian government differed from the intra-European problems in scale: "Experience is now showing us that it is impossible to make provision from here [Rome] for many important things. This is partly because one cannot write and let us know everything (not everything can be confided in writing), and partly because often the time for making a decision runs out while people are asking our opinion here and we are sending a reply."27 Thus, even within Europe the practical limitations that infrastructure placed on governance were considerable. Centralization of power in Rome could not mean total micro-management from headquarters, as Ignatius openly acknowledged. To the contrary, the Jesuits consciously tempered their general preference for a strong center in Rome by accommodating it to infrastructural realities.
One of the most telling consequences of this realistic insight into the infrastructural impossibility of "total centralization" was a careful scaling of the rhythm of correspondence according to geographical distance from Rome: the closer to Rome, the more frequent and more intense communication was to be; the further away from Rome, the less frequently (but still regularly) letters were to be sent.28 The architects of the Jesuits' communicative network understood from early on that high-frequency communication was meaningful only where the time of transmission was short, which in pre-modern times meant that distances had to be short. Where distances were long and communication took months or years, however, fast-paced epistolary attempts at central control were self-defeating. Valignano and the Jesuits in the missionary field understood this clearly. But what needs to be stressed even more is that the Jesuits in Rome, starting from Ignatius and Polanco, also understood this very well. The generals and their aides freely acknowledged that Roman micro-management of local affairs was most often simply impossible. Centralization was thus attenuated from the start by the acceptance of infrastructural shortcomings.
Adjusting Government for Difference and Distance
The construction of Asia as both different and distant from Europe ultimately led to demands for administrative adjustments.29 Asian Jesuits tried to remedy the administrative impasses by requesting procedural and institutional alterations on several levels. At times, their ideas went in surprising directions. Strikingly un-modern, for instance, is the occasional tendency towards conscious administrative self-marginalization. Strident insistence on the special circumstances of India led to voluntary disenfranchisement. This is particularly obvious in the 1586 debate over whether India should send an envoy to Rome every three years to participate in the congregation of procurators. In theory, each Jesuit province dispatched two of its members to Rome for the occasion to discuss points of common business and to inform headquarters about the current state of affairs in the provinces. The meetings of the procurators played an important part in Jesuit life, not least because they had the power to call extraordinary general congregations. Valignano, however, did not see much merit in the participation of the Jesuits from the Indian province: "Since this province is so far from Europe, in no way can we understand whether the current situation calls for a general congregation or not."30 The informational asymmetry between Asia and Europe was thus reciprocal: Europe knew little about Asia, and Asia knew little about Europe. Accordingly, Valignano was content with a peripheral position for India, at least for the moment. He indulged in a form of self-isolation, and clearly missed the point: one of the ideas behind the congregation of procurators was that all provinces should state if they, "due to reasons and causes from their own provinces," thought a general congregation necessary.31 The idea that the Indian province of its own accord might have any interest in such a general assembly did not cross Valignano's mind.
By stressing the distance and difference between Europe and Asia, he thus withdrew India from the order's decision-making process. Valignano probably lacked interest in the congregation of procurators because of practical considerations. Sparing trustworthy missionaries for the trip to Rome was a tough choice for an administrator who constantly complained about the lack of manpower. Maybe he was also simply realistic in predicting little more than a marginal role for Indian procurators in Rome. Nevertheless, it is remarkable that Valignano was willing to give up one of the very few possibilities to influence Roman decision-making. He seemed to have been largely content with focusing on the tasks ahead within the region. Providing India with institutional influence in the Society of Jesus was certainly not his top priority in 1586.32 Valignano's reluctance to send procurators, however, does more than reveal his skeptical resignation concerning India's influence in global Jesuit decision-making. From a more regional perspective, this was just another step towards adjusting the Jesuit institutional and governmental framework to situations different from Europe. Valignano's dislike of the election of Indian procurators (which in theory required summoning a provincial congregation) followed logically from his customary skepticism towards Jesuit assemblies in Asia in general. Given the Indian province's vastness, he claimed, even provincial congregations could and should have a different role than in Europe.33 Normally, the most important Jesuits of a province would meet every three years to discuss current affairs and elect the procurator going to Rome for the congregation of procurators. But in India (and in Japan after it became a vice-province in 1583) summoning missionaries was neither easy nor advantageous, according to Valignano. A first difficulty lay in the enormous dangers of travel in Asia.
Going to Goa, for instance, was complicated and, in Valignano's eyes, a provincial congregation was not necessarily worth the effort. More fundamentally, the distance from Rome and the delay in communications seemed to prevent the provincial meetings from fulfilling most of their basic functions. Many decisions taken by the congregation required Roman approval. Yet as approval from Rome would be received only after several years of waiting, the congregation's decisions and discussions would and could never have any practical impact and were consequently superfluous.34 While provincial congregations could not be entirely abrogated, Valignano preferred other means of decision-making. His fellow Jesuits in India mostly agreed. In 1583, the second provincial congregation of India asked Rome to abolish the requirement of triennial meetings and instead allow for a much more casually organized "consultation."35 In 1586, it was suggested that a provincial congregation be held every six years, a period later extended to nine years.36 The slowness of communications required still other adjustments to the mechanisms of Jesuit government. Since it was usually impractical to wait for detailed decisions from Rome, Indian Jesuits requested an extended range of authority under canon law (facultates) for their regional and local superiors. Concerning the Japanese vice-provincial, for instance, Valignano thought "that the pope should give him authority to dispense from all obligations under canon law, to publish whatever he deems acceptable."37 The Indian provincial Rui Vicente (1523-87) asked in 1581 for the right to allow all local superiors the consecration of chalices and altars so that they need not have recourse to bishops or to him. 
After some negotiation, the general eventually granted Vicente the extended facultas, just as he followed the recommendation to broaden the privileges of superiors in India.38 The range of provincial authority was thus expanded in reaction to the impossibility of direct and detailed Roman control. Valignano's awareness of infrastructural limitations to Roman governance in India also surfaces in his rather nonchalant attitude towards instructions sent from Rome. He did of course accept that the general's more-or-less absolute authority extended to Asia. Yet, the relative lack of knowledge and understanding of India in Europe, combined with difficult and time-consuming patterns of communication, seemed to suggest and even require a rather liberal approach to instructions from Rome in daily life. The possibilities for a "creative reading" of Roman instructions were generally much enhanced by the slowness of communications.39 More importantly, even open disregard for Roman instructions came to be considered an acceptable practice in Asia. To be sure, even in Europe, where communications were considerably faster, Roman instructions were often made conditionally-that is, they specified that they did not have to be followed if local circumstances had changed to such a degree that the basis for the decision from Rome was no longer valid.40 On similar grounds, though with greater frequency, Indian superiors questioned the generals' orders. Valignano and other superiors in India at times openly disregarded explicit instructions. 
On December 12, 1583, for instance, Valignano wrote an entire "report on things that we have done differently from the orders given by" Rome, confident that if Rome had "understood what was going on, it would have changed its mind."42 Valignano's claim for partial regional autonomy was therefore somewhat tautological: since he was unable to properly inform Rome, given the insufficiency of letters and the slowness of transport, Rome was likely to misapprehend the situation on the ground and thus could not really expect its orders to be taken literally. Informational inaccessibility, infrastructural isolation, and semi-autonomous decision-making were closely related and mutually reinforcing. Nonetheless they were not cast as an open attack on the established structures of power. The tendency to call for considerable independence is particularly obvious in India's request for a "second general" in the region.43 Safeguarding a functional administrative hierarchy in India meant shortening communications. In order to do so, some Jesuit office-holder in Asia was to take over most of the general's tasks. In 1581, for instance, Nonnius Rodrigues (1539-1604), rector in Goa, suggested that Acquaviva send a trusted aide and invest him "with all your spirit and power."44 In 1590, Valignano also thought that "small generals" were necessary and should take over supreme authority in India.45 This was a highly ambivalent idea, however. On the one hand, strong regional authorities ("commissaries") had played a key role in the early years of the Society of Jesus.46 Valignano's request could therefore claim some historical pedigree. On the other hand, by the late sixteenth century the idea of adding strong regional leaders had acquired anti-Roman overtones. Spanish Jesuits who were unsatisfied with Acquaviva's tenure and opposed the broad shift of the order from a Spanish to a more Italian outlook adopted the idea of installing a Spanish commissary.
Such a commissary would be quasi-independent from Rome, the better to advance Iberian perspectives and interests.47 Given this context, it is highly significant that Acquaviva largely acquiesced to Valignano's request. This shows, yet again, that the Society of Jesus was highly sensitive to the need to accommodate its organizational structure to infrastructural necessities. The details of this "second general" idea proved contentious. There were by that time two institutional contenders, the provincial and the visitor. While the former was an ordinary administrative position within the Society of Jesus, the latter normally was an extraordinary envoy sent for a short time by Rome to inspect local policies. In India and the Far East, however, and especially under Valignano's tenure, the position of visitor became quasi-perpetual. This development aroused considerable tensions in Goa. Over the years, an articulate opposition formed against Valignano. The debate about adequate institutional arrangements for India thus became entangled with a more personalized conflict about Valignano's style of governance and personality.48 Nevertheless, the institutional alternatives clearly emerge from the sources. Valignano, on the one hand, lobbied for a strong and quasi-perpetual visitor.
In his long treatise on India from 1580, he devoted a separate chapter to this point and asked that the visitor be given extraordinary faculties, including the right to remove and replace provincials.49 The second Indian provincial congregation of 1583 followed his ideas and asked that a visitor should always be present in Asia.50 Later in his career, too, Valignano was still recommending a "quasi-ordinary" visitor.51 Indian Jesuits critical of Valignano, however, stressed how unusual and detrimental a perpetual visitor could be. Francisco de Monclaro (1531-95), one of his most outspoken opponents, insisted on this point in a bitter letter to Acquaviva from October 1593. Acquaviva's willingness to allow the visitor of India (i.e., Valignano) to become a second general was, according to Monclaro, "scandalizing" the Jesuits, because they understood this unusual institutional arrangement as a show of mistrust. Monclaro summarized his opposition to Valignano and to perpetual tenure of visitors when he declared to Acquaviva: "We only want you as our general and no one else and all our orders should henceforth come from Rome."52 Others followed suit and, in what seems to have been a well-organized campaign, sent similar complaints to Acquaviva.53 Monclaro and his followers thus articulated their criticism of Valignano by reassessing the power relations between India and Rome. In order to end the current visitor's tenure, they aligned themselves with a more traditional and more Rome-centered interpretation of Jesuit governance. The infrastructural and practical considerations undergirding the claims for a perpetual visitor were, in their presentation, trumped by a more literal application of the order's institutional framework.
There was yet another dimension to the debate about the merits of having a quasi-perpetual visitor. Besides standing in for the distant general, a perpetual visitor could also help overcome the problems caused by the vastness of the Indian province. As was obvious to all Jesuits in Asia, the provincial residing in Goa was in no position to visit far-flung outposts personally. Personal inspection, however, was normally considered one of the most important aspects of the office. While the order's central government in Rome relied almost exclusively on lettered governance and declined all calls for first-hand inspection, the office of provincial was founded on the very opposite principle: provincials were thought to rely particularly on their own inspection in their governance. Office holders were thus usually required to tour their province and visit all Jesuit establishments annually. Responsible for the entire region of modern-day India, Indonesia, Japan, and China, however, the provincial in Goa could not possibly undertake such a tour. Contrary to standard Jesuit administrative logic, thus, even the internal government of the Indian province had to rely on lettered governance.54
One solution of this dilemma was to delegate the inspection of Jesuit establishments to the visitor. The visitor, according to Valignano, helped localize governance. He provides a forceful description of how Jesuit governance at the regional level was meant to function and how this could be achieved in India: "For this province to be well governed it seems to be necessary that regular visits should occur from the governing head. He should himself observe major differences and the special circumstances of all places and of all Jesuits with his own eyes. Since neither the provincial nor any other superior can do this, however, they will never have proper and truthful experience of their province and will never be able to know what is going on in all the different parts of it. All of them will always only know what happens in their vicinity and will constantly be concerned only about their own benefit. Because of all this it seems expedient that a visitor or commissary should be established to gain the required experience."55 This passage clearly states that the largely European institutional logic evident in the Constitutions and subsequent norms could not be simply transferred to India. Accordingly, Valignano suggested a creative application of the visitor's office to overcome the administrative impasse. Yet there were more options available to make the vast region of Asia more governable. The most obvious solution, besides establishing a visitor, was some sort of division of labor.
In 1585, for instance, the provincial of India was granted an exception from the duty to visit every establishment personally. Acquaviva ruled that the provincial in Goa could very well ask someone else (and emphatically not the visitor) to travel in his stead to distant regions.56 Fragmenting the Indian province into several smaller administrative entities of equal standing was yet another possibility. Breaking up "India" into two or three (vice-)provinces might help with governance.57 There had been considerable scheming as to how this might best be done.
Valignano suggested in 1577 the establishment of a vice-province for the Moluccas and Japan together.58 As he came to know the region better, however, he realized that Japan and the islands of Southeast Asia could not be governed together by one person. A more nuanced structure would be necessary.59 Consequently, a Japanese vice-province was created in 1583 that also covered China but no regions further south.60 Although the relationship between the vice-province and the provincial of India remained ambivalent for some time,61 this ultimately proved to be an effective arrangement.
More complicated was the situation further to the south in the Moluccas. The region was as hard to reach from Goa as Japan, and the administrative challenges here were huge. Should not another vice-province be created for Malacca and the Moluccas? Valignano, who was initially supportive, soon expressed reluctance towards creating a semi-autonomous southern administrative region. Rome agreed.62 Jesuits in the field, however, continued to demand a superior capable of making important decisions closer to home than Goa. At the very least the superior in nearby Malacca should be granted some authority in urgent matters:63 "Otherwise it is impossible to receive answers to important requests within two years, or even three or four years if ships get lost."64 No such reorganization of southeast Asian administrative space was forthcoming, however, and Jesuits in the Islands of Malacca complained repeatedly about the lack of a nearby superior who had direct knowledge of local circumstances. Letters were again considered an inadequate basis for decision-making, whether in Goa or Rome. Letters were only "dead reporters," one Jesuit claimed, and not able to replace first-hand knowledge.65 Once more, Jesuits in Asia articulated explicit skepticism towards the powers of written descriptions as the sole basis of government.
Such complaints eventually enticed Valignano to dispatch a special envoy, Antonio Marta (d.1609), who served as "visitor." Marta himself soon openly endorsed the Moluccas Jesuits' quest for a local authority with the title of vice-provincial.66 Further debate ensued before the province of Malabar was finally established in 1605. There is no need to follow the twists of this discussion here: the basic points are obvious from the glimpses offered so far. It was thought that fragmentation of the vast Indian province would increase the efficiency of Jesuit government. The Jesuits thus not only contemplated adjusting the order's institutional framework to space and geography, they also considered adjusting their administrative geography to fit their customary institutional framework. Making personal visitation by provincials, vice-provincials, or other "universal superiors" a practical possibility was a major concern for Jesuit thought about effective administration across geographic space.67

64 Doc. Mal., 2:164 (P. Nunes, Tidore, to Acquaviva, April 27, 1585): "Porque de otra manera no podemos tener respuesta de lo que es necessario sino en espacio de dos años, y si la nave arriba o se pierde, como de ordinario acaesce, después de 3 y 4 años." A similar letter went out to the Portuguese assistant in Rome, Melchior Rodrigues, Doc. Mal., 2:168. 65 This striking expression occurs in a strong-worded letter by Pero Nunes, Ambon, to Acquaviva, June 5, 1587, Doc. Mal., 2:211: "Cuya causa total pienço yo ser no se visitar nunqua por los Provinciales de la Yndia ni Visitadores que V.P. de Europa ynbía, y terse como cosa desmenbrada, de lo que se sigue no sólo la christiandad ser la que es, may aynda entre los nuestros aver grandes naufragios, por se veren tan remotos y desenparados de la providencia del Superior. Porque los que vienen a visitar por mandado del e se queden acá.
Y por más que se escriva, nada provecha[,] por la carta ser relatador muerto y que no exeprime al bivo [sc. vivo] lo que honbre siente, y portanto mueve tan poco y se iscreve con tan pouco fructo." 66 Doc. Mal., [2][3][4][5][6][7][8][9][10][11][12][13][14]

There were limits to pragmatism, however. From a purely geographical perspective, Malacca would best have been associated with the growing Jesuit community in Manila.
Since the Philippines belonged to Spain and India to Portugal, however, no such connection was possible. Factors beyond expediency and Jesuit practicalities clearly determined the actual outcome of Jesuit governmental structures to a great degree. In fact, the

journal of jesuit studies 4 (2017) 1-27
Personalizing the Problem
Ann M. Carlos and Stephen Nicholas have discussed similar problems of early modern trading companies in a series of important papers.68 According to their studies, the companies designed their institutional framework especially to "control managers at a distance" in "situations of incomplete information and uncertainty." Like the trading companies, the Society of Jesus invented a great many technologies to control the administrative performance of local and regional superiors. Correspondence, for instance, was solicited from a multitude of local agents in order to compare different accounts and thus more effectively control the situation on the ground. A strict regime of control, however, was not the only means applied by the Society of Jesus to solve its "agency problem." Lacking the trading companies' option to use financial incentives to solicit compliance, the Jesuits relied on a different strategy: they personalized the problem of securing local compliance by tying it to the organization's overall interest.69 Besides perfecting mechanisms of control, the Jesuits wished to "perfect" their local agents. If the right people governed, the Jesuits claimed, the "agency problem" would be more easily confronted. Being able to trust local agents, therefore, was of the utmost importance.70 Assessing the superior's personal qualities was key to this approach. The Jesuits thus developed a moral and spiritual solution for the organizational problem. Raising moral, religious, and intellectual standards was considered the most successful way to guarantee that local agents governed in the light of the order's overall interest. If local superiors had to have substantial leeway due to the distance and difference between Europe and Asia, their ongoing compliance with broader Jesuit goals set by Rome could best be ensured by focusing on their personal qualities.
Moral integrity and administrative acumen would be the most effective barriers against the potential for abuse of geographical isolation.
Valignano returned to this point time and again. His argument was simple. Since the administrative situation in Asia was extraordinary, so too should superiors have extraordinary qualities: Since the distance and the type of duties here make it impossible to wait for answers from Rome and since it is thus inescapable that the provincial will have to make important decisions on his own (even those that elsewhere are the sole responsibility of Father General) and since he thus has to have wide-ranging faculties (which are much more extensive than the faculties ever granted to European provincials) it follows that the Indian superior will require much more virtue, prudence and experience in matters of Jesuit governance than all other provincials. This is because he is so distant from the general and thus holds extraordinary powers.71

Securing personnel of the highest quality was a general concern for the Society of Jesus. Yet, according to the Jesuits from the Indian province, the best of the crop should always be deployed to Asia given the special challenges of governing in the region. Precisely what qualities were needed, however, was not always spelled out with sufficient clarity. Valignano was often vague in his comments,72 usually content to call for the "best" Jesuits who had "more" of every quality than those working in Europe or elsewhere.73 Occasionally, however, Asian Jesuits stated more clearly what was required of superiors in India. Antonio Marta was particularly explicit. Up to now, he said, higher education and university training had been considered unimportant for Jesuits working in Asia.74 Marta disagreed on the basis of his own "experience." Well-educated people (letrados) were in fact especially necessary in the Moluccas. Superiors here constantly had to "distinguish correctly between right and wrong, important and unimportant." This he considered an extraordinary challenge. Not least of all, education was necessary to know "when to dissimulate issues and when not." Such decisions had to be made every day by superiors in the Moluccas "because we are far from India and cannot always expect orders from there." If local superiors were personally incapable of making these decisions, however, "we are without guidance and people will say we are governed by those lacking experience and will lose confidence in us."75 Marta thus described much more clearly than Valignano what kind of decisions had to be taken and how Asian decision-making was particularly challenging and differed from decision-making in Europe and elsewhere.

70 … China. Brockey points out that due to the lack of personnel the Chinese missionaries usually lived isolated and without proper supervision by any superior at all. 71 Valignano, "Sumario," 561: "Porque la distancia y los negocios no dan lugar a que se espere respuesta de Roma, y assi es necessario que el provincial por si mismo concluya cosas gravissimas que en todas las otras provincias san reservadas al padre general, y assi necessariamente ha-de tener facultad de hazer y mandar a su modo todos los superiores consultores etc. y de encorporar y escluyr los subjectos del cuerpo de la Compañia y fundar casas y colegios [etc....] y finalmente ha de tener mucho mayor y mas ampla facultad de todos los provinciales de Europa, de donde se sigue que ha-de tener mayor virtud, prudencia y experiencia de las cosas de la Compañia y de su govierno que los otros provinciales, porque esta tan lexos de su general y tiene una facultad tan suprema."
Marta's description of decision-making in Asia would have sounded familiar to almost every Jesuit, wherever he lived. Distinguishing right and wrong, determining what best to do: this was clearly a reference to the most basic and typical Jesuit virtue: discernment (discretio or ἐπιείκεια [epieikeia]). This was the virtue of correctly assessing specific circumstances, weighing the different alternative options, and choosing the course of action best suited to promote the spiritual goals of one's personal life or the entire order. Ignatius of Loyola had devoted considerable attention to this topic in his Spiritual Exercises.72 Marta's allusion to Ignatian discretio specified, but also controlled and sanctioned, his call for increased regional independence. And it significantly raised the spiritual bar for potential candidates.76

How to guarantee the quality of Jesuit personnel in Asia was an entirely different question, and a much more practical one. Valignano suggested that he himself should go back to Europe to select appropriate men. He did not trust the order's standard bureaucratic procedure of personnel selection.77 Other Asian Jesuits suggested the opposite and insisted that Rome alone should make the decision. Some expressed a degree of mistrust in regard to provincials in Portugal and Spain, who were suspected of withholding their best men from the missions.78 It should also be noted that the selection of personnel for the overseas missions occasionally became tainted by national antagonism. Portuguese Jesuits commonly mistrusted Italian or Spanish members of the order and vice versa.79 If the Jesuits' trust in personnel seems somewhat naïve, it nevertheless represents a serious attempt to solve a more general problem facing multi-local institutions.

72 Other Jesuits usually also produced only vague comments, e.g.
Valignano and his colleagues were very explicit that the careful selection of personnel would help mitigate the difficulties and dangers of governance in distant provinces such as India. They thought this a useful mechanism for calibrating the practical need for local autonomy with the ideological commitment to Rome as the unquestioned center of power in the order.
Conclusion
In 1593, Valignano summarized the situation for Acquaviva anew: Even though they [i.e., the Jesuits] in Japan are forced to accommodate to local circumstances and to local habits, and have to adjust their way of living and their offices according to the ways of this country, and though their perpetual peregrinations force them to adopt customs and methods of ministry not used in Europe, they nevertheless follow the same spirit [as all other Jesuits] and work for the same goal, that is the glory of God, their own salvation, and the salvation and betterment of the people.80 Adjusting to local circumstances without departing from the one "Jesuit way": these were the two pressing, albeit potentially contradictory, imperatives for Jesuits in Asia. This ambiguity did not only apply to religious and cultural matters but also-and one might say especially-to the domain of institutions, style of government, and power relations. Navigating between the Scylla of local autonomy and the Charybdis of all-out Roman centralization was as difficult as it was necessary for all early modern Jesuits. The dilemma posed itself everywhere in Jesuit governance, even in the relative vicinity of Rome. Indeed, the limits of central governance and the necessity for some degree of local agency have to be dealt with by all multi-locational social organizations, at all times. The problem, at its heart, is organizational, not infrastructural, although the available infrastructure certainly dictates the range of possible solutions and perhaps also the degree to which the problem makes itself apparent.
There is no doubt that the unique infrastructural difficulties and the unusual cultural challenges of the early phases of globalization were particularly acute. Experiences in Asia certainly brought a degree of urgency to the larger questions of Jesuit governance. It should have become clear by now that the unified and homogeneous vision of Jesuit administrative space offered by Francisco Suárez was entirely unrealistic. Distance and difference were the main reasons why institutional and administrative adjustments were necessary. Stressing the gap between Rome and Goa and between Goa and Ambon or Nagasaki, say, meant fragmenting the homogeneity of Jesuit administrative space. Asian Jesuits sought to put themselves on the order's administrative mental map by stressing the particularities of their situation. The fluid and nuanced concept of institutional uniformity that developed within the Society of Jesus as a consequence is remarkable. It would seem that Jesuits in India consciously distinguished between different degrees of adherence to the European model. If Asia in its entirety, even from the perspective of Jesuits in Asia, was a somewhat peripheral corner of the Jesuit world, there were nevertheless centers and peripheries within the periphery itself.

80 Doc. Ind., 14:65 (January 1, 1593): "Y aunque por la qualidad y costumbres de la tierra, a los quales necesariamente son forçados acomodarse, y por los oficios y cargos que tienen de la conversión y christiandad de Japón, y por el modo de bivir, espargidos y solos en una continua peligrinación por las residencias, son forçados a tener otra manera de vida y otras ayudas y ministros que en Europa no tienen, todavía proceden con el mismo spíritu y pretenden el mismo fin a gloria de nuestro Señor, de la propria salvación y perfeción, y de la salvación y perfeción de los próximos."
Difference from and conformity to European standards were carefully calibrated along a sliding scale, even by Rome.81 The order's administrative blueprint became applicable on a global scale only by allowing for myriad variations.
It is important to note just how pragmatic and open to adjustments both Valignano in Asia and Acquaviva in Rome actually were. Jesuits in the Indian province tampered with the order's administrative framework on numerous occasions. They debated the role that Rome could play meaningfully in such far-away regions. And Rome, it needs to be highlighted, generally kept an open ear for such requests. A great deal of autonomy was granted and the Jesuit headquarters in Rome frequently granted requests for highly flexible applications of universal standards. On the rather rare occasion that the general attempted to micromanage Indian affairs, he was normally careful to acknowledge his limited knowledge and competence.82 Even the most basic assumptions about the Jesuits' administrative practices thus came under scrutiny. Indeed the idea of "lettered governance" itself was openly questioned, though never abandoned outright. Valignano and most other Asian Jesuits, when asking for more autonomy and an alteration of "normal" procedures, never considered adjustments as a critique of centralized governance as such. Though calling for substantial administrative changes, they presented their alternative ideas as approaches within the standard Jesuit system of governance.83 Consequently, the call from Asian Jesuits to modify Jesuit administrative practice was not considered by Rome to threaten the order's social coherence.

81 See especially Acquaviva to the Indian provincials, Doc. Ind., 12:687: At least the "principal" colleges in Goa and Cochin should comply more strictly to the European standards than those establishments further away. 82 An illustration is provided by Acquaviva's letter to Provincial Pedro Martins, January 12, 1589, in Doc. Ind., 15:246. Acquaviva discusses several points and voices his own preferences or concerns, but usually closes his statement with a limiting qualification such as "V.R. lo verá de más cerca y lo consulterá." It would appear that some of his requests were more symbolic than practical, meant to display more his interest in India than suggesting concrete paths of action, e.g.: "En el collegio de Goa se entiende ha avido muchos enfermos; deseo que se halle remedio. V.R. lo consulte allá." 83 At least occasionally, Asian Jesuits reflected on the implications of administrative pluralism for social and organizational unity; see Valignano's statement in Doc. Ind., 14:100-1: "Quanto a las tres puertas que escreví a V.P. que deseava mucho que se enserrassen para el bien y unión de la Compañía, scilicet, que hoviesse uniformidad en las opiniones de las sciencias y en la manera o espírito de orar, y en el modo de governar, no le escreví tanto por esta Provincia como por las otras, porque aunque en el modo de governar ha aquí diversidad de pareceres algunas vezes, todavía en lo que toca a la opinión en las letras ha mucha uniformidad, y en lo que toca a la manera de orar no ha aquí dicención, porque la tierra y los negocios naturalmente más nos llevan a la destración que a gastar demasiado tienpo en orar." See also his letter to Acquaviva (December 20, 1586) in Doc. Ind., 14:436-47, although from a somewhat more limited perspective. Significant is also Doc. Ind., 15:294-95, where he first defends his accommodation of rules to local circumstances and then sums up his work in these words: "Y después que ellos se hizieron, se vio claramente aver uniformidad en el govierno de la christianidad."
CAPN5 has been linked to autosomal dominant neovascular inflammatory vitreoretinopathy (ADNIV). Activation of CAPN5 may increase proteolysis and degradation of a wide range of substrates to induce degeneration in the retina and the nervous system. Thus, we developed an inhibitory intracellular single chain variable fragment (scFv) against CAPN5 as a potential way to rescue degeneration in ADNIV or in neuronal degeneration. We report that overexpression of CAPN5 increases the levels of the auto-inflammatory factors toll-like receptor 4 (TLR4), interleukin 1 alpha (IL1alpha), and tumor necrosis factor alpha (TNFalpha), as well as activated caspase 3, in 661W photoreceptor-like cells and SHSY5Y neuronal-like cells. Both the C4 and C8 scFvs specifically recognize human/mouse CAPN5 in 661W cells and SHSY5Y cells; moreover, both scFvs protected cells from CAPN5-induced apoptosis by reducing the levels of activated caspase 3 and caspase 9. Cellular expression of the C4 scFv reduced the levels of the pro-inflammatory factor IL1alpha and of activated caspase 3 in cells after CAPN5 overexpression. We suggest that CAPN5 expression has important functional consequences for auto-inflammatory processes and apoptosis in photoreceptor-like cells and neural-like cells. Importantly, intracellular antibody fragments that specifically target and block activation of CAPN5 act as inhibitors of CAPN5 functions in neural-like cells; thus, our data provide a novel potential tool for therapy in CAPN5-mediated ADNIV or neurodegenerative diseases.
Because activating mutations of CAPN5 play pivotal roles and have a significant effect on degeneration of photoreceptor cells at an early stage in human ADNIV patients [3][4][5][6], we generated intracellularly expressed single chain antibody fragments against CAPN5 to block active-CAPN5 substrate-mediated cell damage, including apoptosis, autoimmune activation, and retinal photoreceptor cell degeneration. This may be a possible way to treat activated-CAPN5-induced photoreceptor and neuronal cell degeneration in ADNIV and neurodegenerative diseases.
Overexpression of CAPN5 induces apoptosis and expression of pro-inflammatory factors in neuronal cells
It has been shown that CAPN5 activation may induce degeneration of photoreceptor cells in the eye and neuronal cell death in the nervous system [6,9]. To characterize the roles of CAPN5 in photoreceptor cells and neuronal-like cells, we transfected plasmids (CAPN5 wt and CAPN5 R289W) into 661W cells, N2A cells and SHSY5Y cells, respectively. At 24, 48, and 72 hours post-transfection, the viability of both 661W and N2A cells was strongly reduced by CAPN5 wt and CAPN5 R289W overexpression in a time-dependent manner (Figure 1A, 1B). Moreover, overexpression of the CAPN5 R289W mutant decreased cell viability more than CAPN5 wt transfection in both cell lines. At 60 hours post-transfection, transfection of both the CAPN5 wt and CAPN5 R289W vectors increased the mRNA levels of TLR4/6, IL1alpha and TNFalpha compared to empty vector transfection; this was especially pronounced for the mutant CAPN5 R289W, which increased both caspase 3 activation and IL1alpha levels compared to CAPN5 wt transfection in both 661W and SHSY5Y cell lines (Figure 1C, 1D; Figure 1E-1H). These data indicate that CAPN5 exerts its effects on TLR4/IL1/TNFalpha expression at the transcriptional level, and that overexpression of CAPN5 increases caspase 3 activation and the expression of pro-inflammatory proteins such as TLR4, IL1alpha and TNFalpha. It is interesting that the mutant CAPN5 R289W shows stronger effects on caspase 3 activation and on protein levels of the TLR4 pathway than wild-type CAPN5 overexpression in neuronal cells. We also found that overexpression of the CAPN5 vectors shortened neurite lengths in mouse neuroblastoma N2A cells compared to empty vector control transfection (data not shown). Together, these results suggest that overexpression of CAPN5 induces activation of auto-inflammation and apoptosis in neural-like cells.
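Viability readouts like those summarized above are typically reported relative to the empty-vector control at each time point. As a minimal sketch (not the authors' actual analysis pipeline), assuming hypothetical absorbance readings from a colorimetric viability assay, the normalization can be done like this:

```python
# Minimal sketch: normalize hypothetical viability-assay absorbance readings
# to the empty-vector control at each time point. All numbers below are
# invented for illustration; they are not the study's data.

def relative_viability(readings, control="vector"):
    """Return % viability of each group relative to the control group,
    per time point. `readings` maps group -> {hours: mean absorbance}."""
    result = {}
    for group, series in readings.items():
        result[group] = {
            h: round(100.0 * od / readings[control][h], 1)
            for h, od in series.items()
        }
    return result

# Hypothetical mean absorbance values (arbitrary units) at 24/48/72 h.
readings = {
    "vector":      {24: 0.80, 48: 1.10, 72: 1.40},
    "CAPN5_wt":    {24: 0.72, 48: 0.83, 72: 0.84},
    "CAPN5_R289W": {24: 0.66, 48: 0.70, 72: 0.63},
}

viability = relative_viability(readings)
```

With these invented numbers, relative viability falls over time for both constructs and falls faster for the R289W mutant, mirroring the time-dependent trend described in the text.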
Screening and selection of specific CAPN5 scFvs by phage display
Activating CAPN5 mutations have been shown to accelerate the degeneration of photoreceptor cells, autoimmune inflammation in retinal cells, and neuronal cell death in the nervous system [27,28]. To design an inhibitor of CAPN5 to protect cells, we used phage display to screen scFvs against human CAPN5. We purified and selected three scFvs, which we called C4, C8 and C20 (Figure 2A); C4 and C8 bound to purified CAPN5 in a direct ELISA. Neither the C4 nor the C8 scFv bound to normal mouse IgG or BSA (Figure 2B, 2C). The C20 scFv showed non-specific binding characteristics by ELISA (Figure 2D). To confirm whether the C4 and C8 scFv antibodies reacted with wild-type CAPN5 protein in cells, we incubated the C4 and C8 scFvs with total proteins from 661W cell lysates that had been separated and transferred onto a PVDF membrane; the membrane was then incubated with an anti-His-tag antibody and signals were detected with a secondary antibody. In the immunoblot assay, a specific 75 kDa band was detected by both the C4 and C8 scFvs, the same as the band detected by a commercial anti-CAPN5 antibody (Figure 2E). We also pre-incubated living cells with the C4/C8 scFvs for 2 hours, then fixed the cells and incubated them with an anti-His-tag antibody or a commercial goat anti-CAPN5 antibody, respectively, followed by secondary antibodies. We found that the scFvs bind to cytoplasmic CAPN5; the immunofluorescence signal co-localized with the signal detected by the commercial antibody against CAPN5 in 661W cells and human SHSY5Y cells (Figure 2F). These data suggest that we successfully produced specific scFvs against human CAPN5 by phage display, and moreover, that the antibody fragments also react with mouse CAPN5 in vitro. Both the C4 and C8 scFvs entered living human SHSY5Y cells and living mouse 661W cells.
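Direct ELISA titrations like those in Figure 2B-2D are commonly summarized by fitting a four-parameter logistic (4PL) curve to OD450 versus antibody concentration. The sketch below uses invented concentrations and OD450 values, not the study's measurements, to show how an apparent EC50 could be estimated with `scipy.optimize.curve_fit`:

```python
# Minimal 4PL fit of hypothetical ELISA data (OD450 vs. scFv concentration).
# Concentrations and OD values are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic: OD450 as a function of concentration x."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # µg/ml, hypothetical
od450 = four_pl(conc, 0.05, 1.80, 2.0, 1.2)          # noise-free toy data

# Bounds keep EC50 and the Hill slope positive during optimization.
params, _ = curve_fit(
    four_pl, conc, od450,
    p0=[0.0, 2.0, 1.0, 1.0],
    bounds=([0.0, 0.5, 0.01, 0.1], [1.0, 3.0, 100.0, 5.0]),
)
bottom, top, ec50, hill = params
```

On this noise-free toy data the fit recovers EC50 ≈ 2.0 µg/ml; with real plate data, replicate wells and background subtraction against the BSA/IgG controls would come first.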
CAPN5 scFvs protect 661W cells and SHSY5Y cells from H2O2- and CAPN5-induced apoptosis
To analyze whether the scFvs against CAPN5 could protect cells from CAPN5-induced apoptosis, we used H2O2 to induce the death of 661W cells. We found that the protein levels of CAPN5 were dramatically increased by H2O2 treatment. After 2 hours of exposure to H2O2, the viability of 661W cells was enhanced by 24-hour scFv pre-treatment at the indicated concentrations when compared to H2O2 exposure in the control group (Figure 3A, 3B). We also found that the C4 scFv significantly reduced the numbers of TUNEL-positive 661W cells (32.1% of control group) and SHSY5Y cells (41% of control group) compared with H2O2-exposed controls (78.9% in 661W cells, 87.3% in SHSY5Y cells) (Figure 3C). These data suggested that the CAPN5 scFvs protect cells from H2O2-induced apoptosis and neutralize the functions of CAPN5 in response to H2O2 treatment. To further determine whether the CAPN5 scFvs act as inhibitors of CAPN5-induced apoptosis, we added scFvs alongside overexpression of CAPN5 in 661W cells and SHSY5Y cells. After 48 hours of CAPN5 overexpression, scFvs were added to the culture medium. Activated caspase 3 was measured by immunofluorescence (Figure 4A), and caspase 9 levels were decreased by C4 and C8 scFv treatments when compared to controls. Similar trends were detected in 661W cells (Figure 4B, 4C) and SHSY5Y cells (Figure 4D, 4E). These data confirm that the scFvs rescue neuronal cells from apoptosis by targeting CAPN5.

Figure 2. C4, C8 and C20 scFv-phages were infected into HB2151 strain E. coli cells, induced by IPTG overnight, then purified and detected by SDS-PAGE, respectively. (A) Purification of scFvs. The arrow denotes the approximate molecular weight of the scFvs at 30 kDa. (B) Values represent mean±SEM OD450nm for binding of the C4 scFv to recombinant CAPN5, normal mouse IgG, and BSA proteins from three independent experiments. Ninety-six-well plates were coated with recombinant CAPN5, normal mouse IgG and BSA at the indicated concentrations, and the binding capability of the scFvs was detected by an anti-c-myc monoclonal antibody followed by a goat anti-mouse HRP secondary antibody with ELISA. (C) Binding capability of the C8 scFv to the indicated recombinant proteins. (D) C20 scFv binding characteristics measured by ELISA. (E) 661W cells were lysed and immunoblotted with the anti-CAPN5 C4/C8 scFvs and a monoclonal anti-CAPN5 antibody, respectively. The arrows denote the specific molecular weight of CAPN5 at 75 kDa. (F) C4 and C8 scFvs bound to living 661W cells and SH-SY5Y cells. C4 or C8 scFvs (green) and goat anti-CAPN5 (red) were incubated with living 661W cells and SH-SY5Y cells at 10μg/ml in PBS for 1 h; cells were then fixed and detected by immunofluorescence. Bar, 10μm, shown in the lower photo for all panels.
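A comparison of TUNEL-positive fractions like the one above (scFv pre-treated vs. H2O2-only) is often tested with a chi-square test on the raw cell counts. This is an illustrative sketch with invented counts (the text reports percentages, not counts), not the authors' analysis:

```python
# Illustrative chi-square test on hypothetical TUNEL counts:
# did scFv pre-treatment change the fraction of apoptotic cells?
from scipy.stats import chi2_contingency

# Invented (TUNEL-positive, TUNEL-negative) cell counts per group,
# loosely shaped like the 661W percentages quoted in the text.
h2o2_only   = (79, 21)   # ~79% positive, H2O2-exposed control
h2o2_c4scfv = (32, 68)   # ~32% positive with C4 scFv pre-treatment

table = [list(h2o2_only), list(h2o2_c4scfv)]
chi2, p_value, dof, _expected = chi2_contingency(table)
```

With these counts the difference is highly significant; a real analysis would use per-field-of-view replicate counts rather than a single pooled table.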
Generation of intracellular antibody fragments against CAPN5
To generate stable and durable intracellular CAPN5 scFvs in cells, we constructed four pSin scFv plasmids. Seventy-two hours post-transfection with pSin C4scFv, pSin C8scFv, pSin C10scFv, and pSin C20scFv, we detected high expression of all scFv proteins in 661W cell and SHSY5Y cell lysates in an immunoblot assay (Figure 5A). We also transfected the pSin C4scFv plasmid into 661W cells and SHSY5Y cells; forty-eight hours post-transfection, we found co-localization of the intracellular C4 scFv with endogenous CAPN5 in both 661W cells and SHSY5Y cells by immunofluorescence (Figure 5B). These data suggest that we successfully expressed an intracellular specific C4 scFv antibody directed against endogenous CAPN5 in 661W photoreceptor-like cells and SHSY5Y neural-like cells.
Intracellular antibody against CAPN5 decreased the levels of IL1alpha and activated caspase-3 in cells with CAPN5 overexpression
To confirm whether the intracellular antibody against CAPN5 could block CAPN5-mediated inflammation and apoptosis, we co-transfected CAPN5 plasmids with pSin scFv plasmids into 661W and SHSY5Y cells, respectively. Sixty hours post-transfection, the secretion of IL1alpha into the culture medium, as measured by ELISA, was increased after CAPN5 wt plasmid transfection (Kruskal-Wallis test; p<0.001, 51±6.7 pg/ml vs 23±5.14 pg/ml control in 661W cells; p<0.05, 135±16.7 pg/ml vs 53±7.15 pg/ml in SHSY5Y cells) (Figure 6A, 6B). The expression levels of IL1alpha and activated caspase-3 were also decreased in co-transfected cells compared with CAPN5 plasmid transfection alone (Figure 6C, 6D). The selected C20 scFv, which did not specifically recognize CAPN5, was used as a negative control (Figure 6); it did not alter the levels of IL1alpha or activated caspase-3 in cells with CAPN5 transfection. These data thus suggest that the intracellular C4 scFv, specific against CAPN5, blocked CAPN5-induced secretion of IL1alpha as well as the protein levels of IL1alpha and activated caspase 3 in 661W photoreceptor-like cells and SHSY5Y neural-like cells. Previously, we found that LPS induced high expression of CAPN5 and IL1alpha in 661W cells (data not shown). Here, we also found that the intracellular C4 scFv inhibited IL1alpha secretion from LPS-stimulated 661W cells (Figure 6E). Taken together, these data demonstrate that the CAPN5-specific intracellular antibody inhibited CAPN5 functions when CAPN5 was overexpressed in neuronal cells or after LPS-induced inflammation.

Figure 5B. Immunofluorescence of the C4 scFv colocalized with CAPN5 in 661W cells and SHSY5Y cells. His/myc-tagged C4 scFv was detected by the anti-c-myc 9E10 antibody and a secondary green 488nm donkey anti-mouse antibody. Endogenous CAPN5 was detected by a goat anti-CAPN5 antibody followed by a red secondary Alexafluor 555nm antibody. Images were taken at 400× magnification.

www.impactjournals.com/oncotarget
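Group comparisons like the IL1alpha ELISA above (control vs. CAPN5 vs. CAPN5 + scFv) can be tested with a Kruskal-Wallis test, the test named in the text. The sketch below runs `scipy.stats.kruskal` on invented per-well IL1alpha values whose rough means merely echo the 661W numbers quoted above; it is not the study's data or code:

```python
# Illustrative Kruskal-Wallis test on hypothetical per-well IL1alpha
# concentrations (pg/ml). All values are invented for illustration.
from scipy.stats import kruskal

control      = [21, 23, 25, 22, 24]   # empty vector
capn5_wt     = [48, 51, 55, 50, 53]   # CAPN5 wt overexpression
capn5_c4scfv = [27, 30, 28, 31, 29]   # CAPN5 wt + C4 scFv co-transfection

h_stat, p_value = kruskal(control, capn5_wt, capn5_c4scfv)
```

A small p-value here only indicates that at least one group differs; pairwise post-hoc comparisons (e.g. Dunn's test) would be needed to attribute the difference to the scFv specifically.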
DISCUSSION
In this study we initially showed that overexpression of CAPN5 decreased the viability of neural-like cells, and that TLR4 expression and caspase 3 activation were enhanced by CAPN5 overexpression, showing that CAPN5 is either directly or indirectly involved in increasing the levels of TLR4 and activated caspase 3. Moreover, we also saw increased levels of IL1alpha, which lies downstream of TLR4 signaling, as a downstream mechanism of autoimmune pro-inflammation [31]. The enhanced secretion of IL1alpha and caspase 3 activation result in degeneration of photoreceptor-like cells and neural-like cells upon CAPN5 overexpression. Interestingly, overexpression of the mutant CAPN5 R289W decreased the viability of neural-like cells more, and induced higher levels of caspase 3/9 and TLR4/IL1alpha, than wild-type CAPN5 overexpression. In agreement with our findings, activated CAPN5 induces increased inflammatory factors through TLR4/6 autoimmune inflammation pathways in retinal degeneration in ADNIV patients [5,6]. Previously, we observed a novel CAPN5 R289W mutation affecting ADNIV patients in a Chinese family. We speculate that this mutation stabilizes catalytic core domain II and enhances CAPN5 catalytic activation through its effect on the protein's conformation. It has also been reported that the mutant R243L increases activation of the CAPN5 catalytic core domain to stimulate TLR4 autoimmune inflammation in the retina of CAPN5 R243L transgenic mice [28]. Thus, we consider that the mutant CAPN5 R289W increases degeneration of photoreceptor cells through activated caspase 3 and the TLR4 auto-inflammatory pathway.
It has also been found that in neurodegenerative Huntington's disease, CAPN5 is abnormally activated as a regulator of proteolysis of the htt protein in neural cell death [8]. The evidence presented here suggests that activation of CAPN5 leads to neural degeneration through increased activated caspase 3/9 in cells. Thus, we screened and selected specific scFvs that block CAPN5, and we successfully constructed intracellular antibody fragment expression plasmids to express specific scFvs against endogenous CAPN5 in cells. The CAPN5 intracellular antibody fragments inhibited the secretion of IL1alpha induced by LPS or by CAPN5 overexpression. The intracellular antibody fragment neutralized the activation of CAPN5 upon overexpression in neural-like cells and protected cells from activated-CAPN5-induced pro-inflammation and apoptosis. Taken together, these findings strongly suggest that CAPN5 activation causes retinal degeneration and is also involved in neuronal degeneration. CAPN5 is a member of the calpain family but lacks the EF-hand calcium-binding domain; it may have characteristics and substrates similar to those of other calpains, such as the classical calpain 1/2, in the nervous system and retina [8]. Although the nervous system contains vast numbers of proteases and proteolytic complexes, very few calpains are directly activated enzymatically in a Ca2+-dependent manner in signal transduction. In addition, calpains are modulator proteases that perform proteolysis to modulate rather than abolish the function of their substrates [32]. The Ca2+ signaling and proteolysis of CAPN5 therefore need to be explored further. Moreover, in this study, the precise scFv binding/blocking domain(s)/epitopes of CAPN5 remain unknown. We found that purified CAPN5 cleaved the common calpain substrate Ac-LLY-AFC in a Ca2+-concentration-dependent manner, and the C4 scFv antibody inhibited this CAPN5 cleavage activity (data not shown).
Therefore, screening scFvs against the specific catalytic domain of CAPN5 and against CAPN5 substrates could be a strategy for investigating the molecular mechanisms underlying CAPN5-related neurodegenerative diseases.
Recently, many inhibitors of calpain 1/2 have been generated to inhibit neuronal degeneration and ophthalmic diseases in vitro and in vivo [32][33][34]. It has been reported that calpain-1 is hyperactivated in the AD brain [35], and calpain inhibitors can improve memory and synaptic function in APP-overexpressing AD model mice [36]. The calpain 1/2 inhibitors ALLNal and SNJ1945 are therapeutically beneficial in LIS1-related lissencephaly [37], and inhibition of calpain 2 also helps relieve photoreceptor degeneration in retinitis pigmentosa [38]. However, these inhibitors are still not sufficiently specific to distinguish calpains from other proteases in most cells. We therefore screened for and generated the first specific intracellular anti-CAPN5 scFvs to inhibit overexpressed CAPN5 in photoreceptor-like and neuronal-like cells. scFvs offer small size and low immunogenicity and can be used in gene delivery systems to target proteins and neutralize harmful protein activity [32]; intracellularly expressed targeting antibodies (intrabodies) have already been used for therapeutic purposes in neurodegenerative diseases [39]. Here, we generated pSin vectors encoding scFvs that target intracellular CAPN5 protein and block CAPN5-induced inflammation and apoptosis in neuronal degenerative processes. Adeno-associated virus (AAV)-mediated delivery of the CAPN5 scFv could be further explored as a treatment strategy for CAPN5 mutation-linked ADNIV in the eye and for activated CAPN5-related neurodegenerative diseases in the central nervous system.
In summary, our observations support the view that activated CAPN5 induces the TLR4 and caspase 3 pathways of auto-inflammation and apoptosis, thereby leading to degeneration of photoreceptors and neurons. We developed intracellular antibody fragments against cellular CAPN5 that inhibit the auto-inflammation and cell death induced by activated CAPN5. These intracellular antibody fragments could be further evaluated as a therapy in ADNIV mouse models and in neurodegenerative diseases.
Protein purification
For purification of the human CAPN5 protein, 100 μl of an overnight culture of Escherichia coli (strain BL21) harboring the plasmid pET28a encoding His-tagged CAPN5 was used to inoculate 100 ml of fresh LB-Kan broth and shaken at 37°C for 2 h. Isopropyl β-D-1-thiogalactopyranoside (IPTG) was then added to a final concentration of 5 mM, and incubation was continued for a further 10 h. The His-tagged CAPN5 was purified using a Ni+ affinity column.
For purification of the scFvs, phagemid clones were amplified and phages were extracted as described [29]. For production of soluble scFv proteins, a 1 ml inoculum of E. coli strain HB2151 was infected with a glycerol stock of an individual phage-scFv clone and transferred into culture flasks; expression of the scFv cassette was induced with IPTG at a final concentration of 1 mM, and shaking was continued overnight. scFvs were secreted into the culture supernatant, and the E. coli periplasmic fraction was harvested after osmotic shock. Supernatants were then centrifuged at 10,000×g at 4°C for 30 min and clarified by filtration through 0.22 μm filters (PALL, Port Washington, NY, USA). Finally, all clarified protein fractions (supernatant and periplasmic fraction) were pooled, passed through a Ni+ affinity column, and dialyzed against PBS. Purity of the eluted soluble scFvs was evaluated by SDS-PAGE on 10% gels. The concentration of the purified scFvs was determined by the BCA assay (Beyotime).
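The BCA readout above is interpolated from a protein standard curve. A minimal sketch of that calculation, using hypothetical BSA standard values (not data from this study):

```python
import numpy as np

def bca_concentration(std_conc, std_abs, sample_abs):
    """Fit a linear BCA standard curve (absorbance vs. concentration in mg/ml)
    and interpolate unknown sample concentrations from their absorbances."""
    slope, intercept = np.polyfit(std_conc, std_abs, 1)
    return (np.asarray(sample_abs, float) - intercept) / slope

# Hypothetical BSA standards (mg/ml) and A562 readings, chosen to lie on a line
standards = [0.0, 0.25, 0.5, 1.0, 2.0]
readings = [0.05, 0.175, 0.30, 0.55, 1.05]
print(bca_concentration(standards, readings, [0.30, 0.55]))  # ~[0.5, 1.0] mg/ml
```

In practice the standard series would be read alongside the samples on the same plate, and readings outside the standard range should be diluted and re-measured rather than extrapolated.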
Selection and generation of intracellular scFv
The selection method of scFv binding to human CAPN5 was essentially as described [29]. The Tomlinson I and J libraries (Geneservice, Nottingham, United Kingdom) and recombinant CAPN5 were used for screening. Briefly, 100 μl of 22.5 μg/ml CAPN5 in PBS (pH 7.4) was coated overnight at 4°C onto a 96-well tissue culture dish (Jet, biofial, Beijing, China). Wells were then blocked with 3% BSA (fatty acid free, Merck, Whitehouse Station, NJ, USA) in PBS at room temperature for 1 h. After washing the wells twice with PBS, 10^13 phagemid particles in 0.5% BSA in PBS were added to the wells. After incubation for 40 min at room temperature, wells were washed eight times with PBS containing 0.1%, 0.3%, or 0.5% Tween-20 and then rinsed twice with PBS, for 5 min each. Bound phages in each well were released by incubation with 100 μl trypsin (Beyotime, Hai Men, China) (10 μg/ml in PBS) for 1 h at room temperature and collected. For amplification, phages were used to infect the E. coli strain TG1. Bacteria were grown at 37°C overnight on TYE plates containing 100 μg/ml ampicillin and 1% glucose. After three rounds of panning, individual phage clones were selected for ELISA. For phage ELISA, each well of a 96-well plate was coated overnight at 4°C with 100 μl of 10 μg/ml CAPN5 in PBS and blocked with 3% BSA in PBS for 1 h at room temperature. Supernatants from individual clones were added to the wells, incubated at room temperature for 40 min, and washed three times with PBST (PBS, 0.1% Tween 20). Wells were then incubated with a 1:3,000 dilution of the monoclonal mouse anti-M13 horseradish peroxidase (HRP) conjugated antibody (GE Healthcare) in 3% BSA in PBS for 1 h at room temperature and washed three times with PBST. Binding of phages was detected using TMB (3,3′,5,5′-tetramethylbenzidine; Beyotime). For selection of scFvs, 96-well plates were coated overnight at 4°C with 100 μl purified recombinant CAPN5 in PBS over a concentration range of 0-10 nM.
Wells were blocked with 3% BSA in PBS for 1 hour at room temperature. Individual scFvs (100 μl, 100 ng/ml in PBS containing 3% BSA) were added to the wells, incubated at room temperature for 40 min, and washed with 0.1% PBST three times. Wells were then incubated with biotin-conjugated mouse anti-c-myc monoclonal antibody 9E10 for 1.5 hours at room temperature, washed three times with 0.1% PBST, and then incubated with ExtrAvidin-HRP (Sigma-Aldrich) for 1 hour. Wells were washed, and binding was detected using TMB as a substrate.
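The panning and ELISA steps above depend on repeated dilutions of antigen and reagent stocks (e.g., coating at 22.5 μg/ml or 10 μg/ml). The stock volume needed follows from C1V1 = C2V2; a small helper, with an illustrative 1000 μg/ml stock that is an assumption rather than a value from the paper:

```python
def dilution_volume(stock_conc, final_conc, final_vol):
    """Volume of stock to add: C1*V1 = C2*V2 => V1 = C2*V2 / C1.
    Units cancel, so any consistent concentration/volume units work."""
    if final_conc > stock_conc:
        raise ValueError("final concentration exceeds stock concentration")
    return final_conc * final_vol / stock_conc

# Coating a well with 100 ul of 22.5 ug/ml CAPN5 from a hypothetical 1000 ug/ml stock:
print(dilution_volume(1000, 22.5, 100))  # 2.25 ul of stock, brought to 100 ul with PBS
```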
The sequences of selected clones were determined with the primer LMB (5′-CAG GAA ACA GCT ATG AC-3′) by the dideoxy chain-termination method. Sequencing was repeated three times for verification. To construct intracellularly expressed scFvs, we subcloned the c-myc- and His-tagged scFvs into the pSin Puro vector from the pSin EGFP Puro plasmid using BamHI and EcoRI.
RT-PCR
661W and SH-SY5Y cells were transfected with CAPN5 plasmids, then lysed, and total RNA was extracted using TRIzol reagent (Invitrogen, USA); cDNA was synthesized using reverse transcriptase (TIANGEN, Beijing, China). The RNA (1%) was reverse transcribed to complementary DNA, and 20 ng of cDNA was used as the template for RT-PCR. The amplification cycling reactions (35 cycles) were performed as follows: 2 min at 95°C, 30 s at 60°C, and 1 min at 72°C. The primers used in this study are listed in Table 2. The mRNA levels of TLR4, TLR6, IL1alpha, TNFalpha, and GAPDH were determined for each experiment.
Immunostaining
For live-cell immunostaining, cells on coverslips were incubated with His- and c-myc-tagged scFvs at 10 μg/ml in PBS for 2 h at 37°C; the scFv solution was then removed and the cells were fixed with 4% PFA. Cells were incubated with mouse anti-His tag antibody (1:500) and goat anti-CAPN5 antibody (1:700) overnight at 4°C; after PBS washes, coverslips were incubated with secondary Alexa 488 donkey anti-mouse IgG and Alexa 555 donkey anti-goat IgG antibodies for 1 h at room temperature. For standard immunostaining, 661W and SH-SY5Y cells were cultured for 40 h on PDL-coated glass slides and fixed in 4% paraformaldehyde (PFA) as described [30]. At 48 h after pSin C4 scFv plasmid transfection, cells on coverslips were fixed and incubated with mouse anti-c-myc antibody (1:500) and goat anti-CAPN5 antibody (1:700) overnight at 4°C, then incubated with secondary Alexa 488 donkey anti-mouse IgG and Alexa 555 donkey anti-goat IgG antibodies for 1 h at room temperature. DAPI was used to stain nuclei. Images were captured in digital format using a Zeiss microscope (Carl Zeiss, Chester, VA).
Western blot analysis
Western blot analysis was performed as described [30]. For CAPN5 or scFv overexpression and scFv protein treatment experiments, cells were transfected with plasmids for 60 h or treated with scFvs at 10 μg/ml in DMEM with 0.1% FBS for 24 h; cells were then collected and lysed. Total cell proteins were separated by SDS-PAGE and transferred to PVDF membranes. The membranes were incubated overnight at 4°C with rabbit anti-activated caspase 3 and caspase 9 antibodies, goat anti-CAPN5, or monoclonal anti-IL1alpha, anti-His, or anti-c-myc antibodies diluted in 5% milk in PBS. After washing with 0.1% TBST, the membranes were incubated with donkey anti-mouse/goat IgG HRP secondary antibodies for 1 h at room temperature.
For H2O2 exposure experiments, cells were pretreated with 10 μM H2O2 for 2, 6, 12, or 24 h, then collected and lysed for immunoblot analysis as above. Bands were visualized with an enhanced chemiluminescence kit (Beyotime). Signals were detected and quantified using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
Determination of cell viability
Cells were suspended in DMEM/F-12 medium at 5 × 10^4 cells/ml, and 100 μl was added to each well of a 96-well plate. After transfection with CAPN5 plasmids, scFv plasmids, or control plasmids for 24, 48, or 60 h, cells were incubated with 10 μl MTT (500 μg/ml) for 4 h. The culture medium was then removed, and 100 μl dimethyl sulfoxide (DMSO) was added to each well, followed by a 30 min incubation at 25°C. Absorbance was measured spectrophotometrically at 540 nm.
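MTT viability is typically reported as absorbance relative to an untreated control after blank subtraction. A brief sketch of that calculation (the absorbance values shown are hypothetical, not readings from this study):

```python
def percent_viability(sample_a540, control_a540, blank_a540=0.0):
    """MTT viability as percent of untreated control after blank subtraction."""
    return 100.0 * (sample_a540 - blank_a540) / (control_a540 - blank_a540)

# Hypothetical A540 readings: treated well 0.45, control well 0.90, blank 0.05
print(round(percent_viability(0.45, 0.90, blank_a540=0.05), 1))  # 47.1
```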
Statistics
Statistical analyses were performed with SPSS 13.0 software (IBM Corporation, Armonk, NY, USA). All data are presented as means ± SEM unless otherwise specified. Student's t-test was used for comparisons between two groups. For experiments with more than two groups, ANOVA was performed, followed by Tukey's post hoc test for pairwise comparisons. For non-parametric data with more than two groups, the Kruskal-Wallis test was used.
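The analysis plan above (t-test for two groups, ANOVA with Tukey's post hoc test for more than two, Kruskal-Wallis for non-parametric data) was run in SPSS; an equivalent sketch in Python with SciPy, on synthetic data, would look like:

```python
from scipy import stats

# Three synthetic groups (e.g., viability scores under different treatments)
g1 = [1.2, 1.4, 1.1, 1.3, 1.5]
g2 = [2.1, 2.3, 2.0, 2.2, 2.4]
g3 = [3.0, 3.2, 2.9, 3.1, 3.3]

# Two groups: Student's t-test
t_stat, p_t = stats.ttest_ind(g1, g2)

# More than two groups: one-way ANOVA; pairwise comparisons would then use
# Tukey's HSD (scipy.stats.tukey_hsd in SciPy >= 1.8, or statsmodels)
f_stat, p_f = stats.f_oneway(g1, g2, g3)

# Non-parametric alternative for more than two groups
h_stat, p_k = stats.kruskal(g1, g2, g3)

print(p_t < 0.05, p_f < 0.05, p_k < 0.05)
```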
Author contributions
W.Y. designed and carried out the experiments; F.G. designed experiments and contributed to discussions; X.Z. and Z.M.S. contributed to discussions; W.Y. and F.G. wrote the manuscript.
"year": 2017,
"sha1": "92d78c2ad5778fc29b550b8ec40db60d4db1ce9c",
"oa_license": "CCBY",
"oa_url": "https://www.oncotarget.com/article/22221/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "92d78c2ad5778fc29b550b8ec40db60d4db1ce9c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Communication between healthcare providers and medical cannabis patients regarding referral and medication substitution
Background People report using cannabis as a substitute for prescription medications but may be doing so without the knowledge of their primary health care providers (PCPs). This lack of integration creates serious concerns, e.g., using cannabis to treat medical conditions that have established treatment options. Methods We conducted an anonymous, cross-sectional online survey among patrons of a medical cannabis dispensary in Michigan (n = 275) to examine aspects of their relationship with their PCP and their perceptions of PCP knowledge related to cannabis. Results Overall, 64% of participants initiated medical cannabis use based on their own experiences vs. 24% citing advice from their PCP. Although 80% reported that their PCP knew they currently used medical cannabis, 41% reported that their PCP had not always known. Only 14% obtained their medical cannabis authorization from their PCP. Only 18% of participants rated their PCP’s knowledge about medical cannabis as very good or excellent and only 21% were very or completely confident in their PCP’s ability to integrate medical cannabis into their treatment. Although 86% had substituted cannabis for pharmaceutical medications, 69% (n = 134) of those who substituted reported some gap in their PCP’s knowledge of their substitution, and 44% (n = 86) reported that their PCP was currently unaware of their substitution. Conclusions Patients frequently substitute cannabis for prescription drugs, often without PCP knowledge. Although most participants disclosed cannabis use to their PCP, their perceptions of PCP knowledge ranged widely and many obtained medical cannabis licensure from an outside physician. Our results highlight the need for standardized physician education around appropriate medical cannabis use. Supplementary Information The online version contains supplementary material available at 10.1186/s42238-021-00058-0.
Introduction
Thirty-five states in the USA have enacted medical cannabis programs. Despite cannabis being designated a Schedule I drug under the 1970 Controlled Substances Act in the US (indicating a high potential for abuse and no accepted therapeutic use), a recent National Academies of Sciences, Engineering, and Medicine report found evidence supporting the therapeutic value of cannabinoids (active compounds in cannabis) for chemotherapy-induced nausea and vomiting, chronic pain, and multiple sclerosis-related spasticity (National Academies of Sciences, Engineering, and Medicine 2017). However, the evidence for most conditions allowed by state medical laws (e.g., depression) was insufficient (National Academies of Sciences, Engineering, and Medicine 2017; Boehnke et al. 2019a). Complicating this mismatch are scientific and news reports of individuals substituting cannabis for opioids and other prescription medications (Boehnke et al. 2016; Boehnke et al. 2019b; Lucas et al. 2016; Lucas and Walsh 2017; Lucas et al. 2019; Reiman et al. 2017; Piper et al. 2017; Corroon Jr. et al. 2017; Rod 2019), including for conditions for which there is limited evidence that cannabis has therapeutic value (e.g., anxiety). Similarly, many individuals using cannabis believe that cannabis is useful for medical conditions with no evidence base (e.g., cancer treatment) (Kruger et al. 2020). Taken together, these findings highlight the need for a strong healthcare provider presence in conversations about safe cannabis use in the context of medication substitution.
Whether this substitution occurs with oversight from healthcare providers remains unknown, but healthcare providers consistently express a lack of knowledge about medical cannabis, demonstrated by studies showing that only 9% of medical schools cover medical cannabis (Evanoff et al. 2017) and ~80% of physicians reported needing additional cannabis education (Kondrad and Reid 2013). Further, when physicians are approached for medical cannabis recommendations, there are no formal guidelines for appropriate medical use. Among patients, those using cannabis may not approach healthcare providers for fear of stigma or legal trouble. Indeed, some institutional policies prevent physicians from recommending medical cannabis (Carlini et al. 2017), and patients may lose employment due to a positive drug screen even if they have a medical cannabis license (Kulig 2017). As such, many people may use cannabis without the knowledge of or input from their primary healthcare providers (PCPs) (Kruger et al. 2020), emphasizing the lack of integration of medical cannabis into mainstream healthcare settings.
In the current study, we further explored this lack of integration by surveying individuals using medical cannabis in Michigan, where cannabis is legal for medical and adult use (since 2009 and 2018, respectively). We hypothesized that although many participants would report substituting cannabis for medications, most would do so without PCP guidance. We also hypothesized that participants would report low PCP comfort and knowledge regarding medical cannabis.
Setting and participants
We invited patrons of Om of Medicine, a medical cannabis dispensary in Ann Arbor, Michigan, to complete an anonymous, online survey (administered via Qualtrics) through flyers, emails, and social media between April 2019 and February 2020. Emails with the survey link as well as other information (e.g., product specials) were sent to the client database (~5000 people) approximately once per month, and flyers were located around the facility. Social media notices were made publicly without additional advertising. Dispensary staff informed patrons about the research study but otherwise had no involvement with their participation, and no special privileges or attention were given to individuals who chose to participate in the research. In Michigan, patient registry licenses are valid for two years, and individuals can obtain licensure either from their PCP or from an outside provider who must be an MD or DO licensed in Michigan (Marijuana Regulatory Agency LaRA 2020).
Participants were > 18 years old and currently used cannabis for medical purposes. Participants answered questions on demographic information (sex, ethnicity, age, education), medical cannabis use and related substitution behaviors, and healthcare provider knowledge and attitudes toward medical cannabis. All procedures and surveys were approved as an exempt study by the Institutional Review Board at the University of Michigan under protocol HUM00165859. Participants freely consented to participate and were not compensated. Most respondents completed the survey (n = 275), 30 cases with incomplete data were not included.
Measures
Measures were adapted from several other studies of medical cannabis use and cannabis substitution (Boehnke et al. 2019b;Lucas and Walsh 2017;Kruger et al. 2020;Kruger and Kruger 2019).
Reasons for cannabis use
Participants selected their primary condition for using medical cannabis from an extensive list of options which we have used in other surveys (Kruger et al. 2020;Kruger and Kruger 2019). Participants indicated why they used medical cannabis from the following list of reasons: my own experiences; advice from my primary health/medical care provider; advice from my medical marijuana caregiver/dispensary; advice from other individual(s); and other source of information (Kruger et al. 2020;Kruger and Kruger 2019).
Patient disclosure of medical cannabis use to PCP
Participants were asked: "Does your primary care provider (PCP) know that you use medical marijuana?" and "Are you seeing (or did you see) your PCP for the health issue that you use medical marijuana to help treat?" Participants whose PCPs knew about their use were asked: "How did your PCP find out that you use medical marijuana?" and "Was there a time when your PCP did not know that you used medical marijuana?" Participants whose PCPs did not know about their use were asked: "Did your PCP ever ask you about medical marijuana?" and "Is there a reason why you did not tell your PCP about your medical marijuana use?", with an open-ended text box to explain. Reported reasons were categorized by reoccurring themes.
Medical cannabis license authorization
Participants were asked whether their PCP authorized their cannabis license, the number of doctors visited to obtain their license, and whether their PCP was in contact with the physician who authorized their license. Those whose PCP did not authorize their license were asked how they found their authorizing physician, with response options: referred by my PCP; referred by a friend or family member; in a newspaper (Metrotimes, etc.); Internet search; and other: with an open-ended text box. These participants were also asked whether the doctor authorizing the license is still involved in their health care, and if they ever saw the authorizing physician again.
Perceptions of PCP knowledge and support for medical cannabis
Participants rated their PCP's knowledge of medical cannabis as poor, fair, good, very good, and excellent; confidence in their PCP's ability to integrate medical cannabis into their treatment as not at all confident, somewhat confident, moderately confident, very confident, and completely confident; and perceptions of their PCP's support of medical cannabis: not at all supportive, somewhat supportive, moderately supportive, very supportive, and completely supportive. All 5-point Likert-type scales were converted to continuous values (1-5) for statistical analyses.
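The conversion of 5-point Likert labels to continuous 1-5 values described above can be sketched as follows (the label strings are assumptions based on the scale anchors listed for the knowledge item):

```python
# Map the 5-point Likert anchors to continuous 1-5 values for analysis
LIKERT_KNOWLEDGE = {"poor": 1, "fair": 2, "good": 3, "very good": 4, "excellent": 5}

def to_numeric(responses, scale=LIKERT_KNOWLEDGE):
    """Convert Likert labels to 1-5 integers for parametric analyses."""
    return [scale[r.strip().lower()] for r in responses]

print(to_numeric(["Poor", "Good", "Excellent"]))  # [1, 3, 5]
```

Analogous mappings would apply to the confidence and support items, each with its own set of anchors.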
Substitution measures
Participants were asked, "Have you ever or are you currently using or taking..." with response options for a wide range of drug classes used in other studies (Kruger and Kruger 2019). Participants who indicated using a drug class were asked, "Have you reduced your use of or stopped using [drug] because of medical marijuana?" (Boehnke et al. 2019b). Those who responded affirmatively selected the reasons why from the following: my own experiences; advice from my primary health/medical care provider; advice from my medical marijuana caregiver/dispensary; advice from other individual(s); and other source of information. These participants were also asked "Did your PCP know that you reduced or stopped your use of [drug] because of medical marijuana?" with response options of "Yes, immediately", "Yes, but not immediately", and "No".
Statistical analyses
Descriptive analyses included frequencies and mean scores, with selections of subgroups as appropriate. Independent samples t-tests compared perceptions of PCP knowledge about medical cannabis, confidence in PCP's ability to integrate medical cannabis into their treatment, and perceived PCP level of support of medical cannabis by whether or not their PCP knew that they used medical cannabis; whether their PCP had delayed knowledge of their medical cannabis use; whether their PCP knew that they substituted medical cannabis for pharmaceutical drugs; whether their PCP had delayed knowledge of their substitution, and whether their PCP had authorized their medical cannabis card. Chi-square analyses tested whether the distribution of participants whose PCP had authorized their medical cannabis license (compared to those who had not) was different with regards to gaps in knowledge of substitution and whether participants substituted cannabis for pharmaceutical drugs. All analyses were conducted in SPSS (IBM, Armonk, NY).
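The chi-square tests on group distributions and the Cohen's d effect sizes reported in the Results were computed in SPSS; an equivalent sketch with SciPy, using an illustrative (hypothetical) 2×2 contingency table rather than the study's actual counts:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical 2x2 contingency table (not the study's actual counts):
# rows = PCP authorized license (yes/no), cols = PCP aware of substitution (yes/no)
table = np.array([[25, 12],
                  [95, 122]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(dof)  # 1 degree of freedom for a 2x2 table
```

Note that `chi2_contingency` applies Yates' continuity correction by default for 2×2 tables; pass `correction=False` to match an uncorrected Pearson chi-square.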
Disclosure of medical cannabis use to PCP
All participants indicated having a PCP, and 81% reported that their PCP knew they used medical cannabis.
Of the latter, 93% informed their PCP, 2% were asked about medical cannabis use by their PCP, 2% reported that their PCP recommended they use cannabis, and 3% said their PCPs found out another way (e.g., urine screen). Of participants whose PCP knew they used medical cannabis, 38% reported that their PCP had not always known. Of those whose PCP did not know they used medical cannabis, 81% (n = 39) did not tell their PCP for reasons including perceived stigma, PCPs having unfavorable attitudes towards medical cannabis, not wanting cannabis on their health records, fear of losing licensure or employment, and fear of being denied insurance or medical care.
Cannabis licensure
Although 78% of participants saw their PCP for the health issue that they used cannabis to treat, only 14% obtained medical cannabis authorization directly from their PCP, compared to physicians identified through internet searches (40%), friends or family members (37%), dispensaries (9%), other medical personnel (5%), newspaper (4%), and other (3%) methods. Older participants were more likely to have their cards authorized by their PCP, r(270) = .180, p < .001. Only four participants (1.7%) whose medical cannabis was authorized by another physician were referred to that physician by their PCP. Most (74%) participants reported that the physician authorizing their medical cannabis license had no further involvement in their current healthcare, only 24% ever saw them again, and 16% reported that the authorizing physician was currently involved in their health care. Only 9% of participants reported that their PCP was in contact with the authorizing physician. Most participants visited one physician to authorize their license, although some visited as many as six ( Table 2).
Perceptions of PCP knowledge of medical cannabis
Participants reported generally low confidence in their PCP's ability to integrate medical cannabis into treatment, perceived their PCP's knowledge about medical cannabis as poor or fair, and thought that PCPs were not at all, somewhat, or moderately supportive of medical cannabis (Table 2). Compared to participants whose PCP did not know about their medical cannabis use, those whose PCP did know rated their PCP as more knowledgeable about medical cannabis t(248) = 3.87, p < .001, d = .62, were more confident in their PCP's ability to integrate medical cannabis into their treatment, t(248) = 4.34, p < .001, d = .70, and perceived their PCP as more supportive of medical cannabis, t(248) = 6.73, p < .001, d = 1.08. Similar trends were found for participants who delayed telling their PCP (d = 0.33-0.60, all p's < .021) compared to those whose PCP always knew about their cannabis use. Non-Whites rated their PCP as more knowledgeable about medical cannabis, t(269) = 2.51, p = .013, d = 0.49, and more able to integrate it into their treatment, t(269) = 2.04, p = .042, d = 0.40, but not more supportive of medical cannabis, t(269) = 1.14, p = .254, d = 0.23. However, the sample size for non-whites was quite small (n = 33).
Substitution for pharmaceuticals-with and without PCP knowledge or authorization
Overall, 86% (n = 235) reported using pharmaceutical drugs, 82% (n = 194) of whom reported reducing or stopping use of a drug because of their medical cannabis use. Substitution rates ranged from 36% for antihistamines to 88% for sedatives (Table 3). Most (87% on average) reported that their substitution decision was based on their own experiences compared with 18% citing PCP advice. Only 31% (n = 60) immediately reported substitution to their PCP, while 69% (n = 134) reported some gap in their PCP's knowledge of their substitution, with 44% reporting that their PCP was not currently aware of this substitution. Compared to participants whose PCP always knew about their substitution, those who reported delayed PCP substitution knowledge rated their PCP as less knowledgeable about medical cannabis t(192) = 3.71, p < .001, d = .57, were less confident in their PCP's ability to integrate medical cannabis into their treatment t(192) = 4.50, p < .001, d = .70, and perceived their PCP as less supportive of medical cannabis t(192) = 3.18, p = .002, d = .49. Identical trends were found for participants whose PCP was not aware of their pharmaceutical substitution (d's = 0.54-0.60, all p's < .001) compared to participants whose PCP currently knew about their substitution.
Conversely, participants whose PCP authorized their medical cannabis card (n = 37) rated their PCP as more knowledgeable about medical cannabis t(269) = 5.14, p < .001, d = .91 were more confident in their PCP's ability to integrate medical cannabis into their treatment t(269) = 5.00, p < .001, d = .88, and perceived their PCP as more supportive of medical cannabis t(269) = 4.33, p < .001, d = .77, than participants whose PCP had not authorized their medical cannabis card (Fig. 1). The distribution of PCP gaps in knowledge around participants substitutions were significantly different, χ 2 (1) = 4.35, p = .037, with more of those whose PCP had authorized their cannabis license reporting current PCP knowledge of this substitution. There was no difference in pharmaceutical substitution rates based on who had authorized participants' medical cannabis cards, χ 2 (1) = 0.09, p = .760.
Discussion
In this study, we show that although many participants reported substituting cannabis for medications, the majority did not have their medical cannabis license authorized by their PCP and reported substituting cannabis for medications based on their personal experiences rather than advice from their PCP. Further, many delayed telling their PCP about this medication substitution. Substitution patterns align with results from ecological (Bradford and Bradford 2016; Bradford and Bradford 2017; Bradford et al. 2018) and individual-level studies (Boehnke et al. 2016; Boehnke et al. 2019b; Lucas et al. 2016; Lucas and Walsh 2017; Lucas et al. 2019; Reiman et al. 2017; Piper et al. 2017; Corroon Jr. et al. 2017; Rod 2019) describing medication-sparing effects of medical cannabis legislation and use. To our knowledge, however, this is the first account of how substitution fits into mainstream medical care, or in this case, how far outside this context it occurs. This finding is consistent with our hypotheses and aligns with participant perceptions of minimal PCP knowledge of medical cannabis and low confidence in PCPs integrating cannabis into treatment. This substitution finding also raises safety concerns: although substituting cannabis for medications may be an appropriate harm reduction strategy in some cases (e.g., cannabis for opioids in the context of chronic pain) (Lucas 2017), doing so without PCP oversight may harm patients, e.g., through disease recurrence if substituting for disease-modifying drugs. Our findings highlight this latter point, as some participants reported substituting cannabis for stimulants, which treat conditions (e.g., weight loss, narcolepsy) for which cannabis has no known therapeutic value. These findings emphasize the need for education on cannabis so PCPs can provide appropriate counseling on safety and harm-reduction for patients who use medical cannabis.
Given potential legal repercussions for cannabis use due to its status as a Schedule I drug, it is unsurprising that medication substitution often occurs without PCP oversight and that many patients go to outside physicians to obtain cannabis licensure. Further, many PCPs may be uncomfortable recommending cannabis to patients given their lack of relevant education regarding medical cannabis use and cannabis legality (Kondrad and Reid 2013; Carlini et al. 2017). Indeed, a recent systematic review of healthcare provider attitudes towards medical cannabis confirmed that although healthcare professionals showed modest support for using medical cannabis in clinical practice, this support was tempered by "a lack of confidence, a lack of self-reported competence, and concerns for associated risks" (Gardiner et al. 2019). These concerns are supported by the lack of physician knowledge around cannabis's legal status, demonstrated by a recent survey of physicians (n = 371) which reported only 34% knew that cannabis was a Schedule I drug, 68% knew it was federally illegal, and 65% could correctly identify the legality of cannabis in their state of residence (Takakuwa et al. 2020). In addition, among a survey of n = 494 family or internal medicine physicians in Washington State, the most common places for physicians to obtain medical cannabis-related information were patients, fellow providers, news media, and medical journals, rather than formal training programs (Carlini et al. 2017).
As shown by our findings and other reports in the scientific literature, however, conversations about medical cannabis between patients and their PCPs appear to result in better exchange of information that could keep patients safe. Indeed, adults ≥ 60 years old reported positive outcomes with medical cannabis and preferred to discuss their cannabis use with their healthcare provider (Bobitt et al. 2019). In a recent nationally representative survey, the 24% of respondents who indicated that healthcare providers were their most influential source of information were less likely to endorse incorrect beliefs about cannabis (e.g., cannabis is not at all addictive) (Ishida et al. 2020). However, cannabis's illicit status may get in the way of building strong patient-PCP relationships, as some adults prefer to use cannabis illicitly because of concerns about their "name going on a list" or that their careers may be negatively impacted by disclosure of cannabis use (Lau et al. 2015).
Implications
Our study highlights the need for better integration between medical cannabis and mainstream healthcare, including enhancing PCP education on cannabis, the endocannabinoid system, and the benefits, risks, and harms of cannabis in relevant therapeutic contexts. As medical cannabis policy allows cannabis to be used for many conditions for which there is no known therapeutic benefit (Boehnke et al. 2019a), education efforts should also focus on harm-reduction strategies aligned with current practical dosing guidance (e.g., slowly titrating doses and using multiple administration routes) (Savage et al. 2016;MacCallum and Russo 2018;Boehnke and Clauw 2019). Applying this guidance in clinical practice would give PCPs better tools to assess safety and effectiveness of currently available cannabinoid products and could quickly feed back into clinical practice. As numerous clinical trials are underway, consistently updating this practical guidance based on the most recent data is critical.
Future directions
Future research should focus on developing actionable strategies to improve patient-PCP relationships and enhance shared decision-making in the context of medical cannabis and medication substitution. Examining and identifying the most challenging barriers to patient-PCP communication about cannabis could inform these studies. In addition, it would be worth investigating patient satisfaction with different medications they are currently taking to gauge which medication classes are the most likely targets for substitution with cannabis, as well as patient interest in and/or rationale for substituting. For example, while there are three currently approved medications for fibromyalgia, consumer surveys typically do not rank them as very helpful for symptom management, and patients frequently discontinue their use (Hauser et al. 2012; Wolfe et al. 2013). We would thus expect there to be higher rates of substitution among people with fibromyalgia, a finding borne out by current observational studies of medical cannabis use (Sagy et al. 2019). Identifying similar clinical situations, especially those in which there is evidence of cannabis's therapeutic value, could thus provide ideal research settings for piloting interventions focused on enhancing joint patient-PCP decision making.
Limitations
Our study was limited in several ways. First, our results on medication substitutions and PCP attitudes are subject to recall bias and we do not have objective measures showing whether the reported substitution actually occurred. Second, our results reflect a mostly White population who obtained medical cannabis licenses, so they may not be generalizable to all people who use cannabis medically. Third, our results may be influenced by selection bias, as we do not know the total number of individuals who had the opportunity to take this survey nor do we know by which methods participants were most likely to be recruited (e.g., email, flyer). Fourth, our sampling was limited to a single dispensary in Michigan, which has had a medical cannabis law since 2008, as well as an adult use cannabis law during the sampling period. Michigan also does not have a strict regulatory process in place that mandates cannabis-specific training for physicians, either for those who write recommendations or broadly. Thus, the experiences of our study population may not translate to individuals in states with different medical cannabis infrastructure. Fifth, the perceptions of PCP knowledge and comfort around cannabis may inaccurately represent the reality of those specific care providers. However, given the widespread discomfort voiced by healthcare providers around cannabis, we believe this study adds an important facet to the scientific literature.
Conclusion
While patients substitute cannabis for other medications, many do not disclose this substitution to their PCPs and perceptions of PCP expertise with cannabis and ability to integrate cannabis into medical care range widely. Similarly, although many medical cannabis patients tell their PCP about their use of medical cannabis, their license was typically authorized by an outside physician who had no current role in the patient's healthcare. Our results show the poor integration between medical cannabis and mainstream healthcare, suggesting a need for improved physician education around appropriate cannabis use.
"year": 2021,
"sha1": "0771df6e345f953b26cf2c8510c688cfa687cd14",
"oa_license": "CCBY",
"oa_url": "https://jcannabisresearch.biomedcentral.com/track/pdf/10.1186/s42238-021-00058-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0771df6e345f953b26cf2c8510c688cfa687cd14",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Theoretical status of epsilon'/epsilon
We review the theory of epsilon'/epsilon and present an updated phenomenological analysis using hadronic matrix elements from lattice QCD. The present status of the computation of epsilon'/epsilon, considering various approaches to the matrix-element evaluation, is critically discussed.
Introduction
The latest-generation experiments, aiming to measure ε′/ε with a 10⁻⁴ accuracy, have so far obtained the results collected in eq. (1). By combining these results with previous measurements, one obtains the latest world average [2], which is definitely in the 10⁻³ range. Given the differences in the results of eq. (1), the quoted error is, however, debatable [3].
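The combination behind a "world average" is an inverse-variance weighted mean; when the individual results disagree, a common (PDG-style) prescription inflates the quoted error by the scale factor √(χ²/(N−1)), which is exactly the sense in which the error of an average over conflicting measurements can be considered debatable. A minimal sketch of this procedure (the two input values below are illustrative placeholders in units of 10⁻⁴, not the official measurements):

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted average with a PDG-style scale factor."""
    weights = [1.0 / e**2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    err = 1.0 / math.sqrt(wsum)
    # chi^2 of the individual results with respect to the average
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    # inflate the error when the inputs are mutually inconsistent (chi^2/dof > 1)
    scale = max(1.0, math.sqrt(chi2 / (len(values) - 1)))
    return mean, err * scale

# Two illustrative placeholder results, in units of 1e-4:
mean, err = weighted_average([28.0, 18.5], [4.1, 7.3])
```

When the inputs are compatible the scale factor is 1 and the usual weighted error is returned; for discrepant inputs the quoted error grows with the mutual inconsistency.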
On the other hand, theoretical estimates in the Standard Model typically correspond to central values in the 10⁻⁴ range although, given the large theoretical uncertainties, values of the order of 10⁻³ are not excluded. The explanation of the difference between SM predictions and experimental values calls either for some missing dynamical effect in the hadronic parameters or for physics beyond the Standard Model. In the last few months, several studies exploring both possibilities have been published.
In this paper, the theoretical status of ε ′ /ε is reviewed and updated results obtained by using (whenever possible) hadronic matrix elements computed with lattice QCD are presented. Other theoretical approaches and recent attempts to "improve" the accuracy in the determination of the hadronic matrix elements (mostly to improve the agreement between theoretical estimates and measurements) are also discussed.
Basic formulae
Direct CP violation, occurring in K^0 decays, is parametrized by ε′. In terms of weak-Hamiltonian matrix elements, this quantity is defined as

$$\varepsilon' = \frac{1}{\sqrt{2}}\left[\frac{\langle\pi\pi(2)|\mathcal{H}_W|K_L\rangle}{\langle\pi\pi(0)|\mathcal{H}_W|K_S\rangle}-\varepsilon\,\frac{\langle\pi\pi(2)|\mathcal{H}_W|K_S\rangle}{\langle\pi\pi(0)|\mathcal{H}_W|K_S\rangle}\right],$$

where ⟨ππ(I)| is the isospin-I two-pion out-state and K_S, K_L are the eigenstates of the CPT-conserving Hamiltonian describing the K^0-K̄^0 system, namely

$$K_{S,L}=\frac{K_{1,2}+\bar\varepsilon\,K_{2,1}}{\sqrt{1+|\bar\varepsilon|^2}}\,,\qquad K_{1,2}=\frac{K^0\mp\bar K^0}{\sqrt{2}}\,.$$

We introduce the isospin amplitudes

$$A\!\left(K^0\to\pi\pi(I)\right)=A_I\,e^{i\delta_I}\,,$$

where, in virtue of Watson's theorem, the δ_I are the strong-interaction phase shifts of ππ scattering. In the approximation Im A_0 ≪ Re A_0, Im A_2 ≪ Re A_2 and ω = Re A_2/Re A_0 ≪ 1 (the latter coming from the ΔI = 1/2 enhancement in kaon decays), one finds

$$\varepsilon\simeq\bar\varepsilon+i\,\frac{\mathrm{Im}\,A_0}{\mathrm{Re}\,A_0}\simeq\frac{e^{i\pi/4}}{\sqrt{2}}\left(\frac{\mathrm{Im}\,M_{12}}{\Delta M}+\frac{\mathrm{Im}\,A_0}{\mathrm{Re}\,A_0}\right),\qquad
\varepsilon'\simeq\frac{i}{\sqrt{2}}\,e^{i(\delta_2-\delta_0)}\,\omega\left[\frac{\mathrm{Im}\,A_2}{\mathrm{Re}\,A_2}-\frac{\mathrm{Im}\,A_0}{\mathrm{Re}\,A_0}\right].$$

Using the experimental value [4] Arg ε′ ≃ π/2 + δ_2 − δ_0 ≃ π/4 ≃ Arg ε, one finally gets

$$\frac{\varepsilon'}{\varepsilon}\simeq\frac{\omega}{\sqrt{2}\,|\varepsilon|}\left[\frac{\mathrm{Im}\,A_2}{\mathrm{Re}\,A_2}-\frac{\mathrm{Im}\,A_0}{\mathrm{Re}\,A_0}\right]=\frac{\omega}{\sqrt{2}\,|\varepsilon|}\left[\frac{\mathrm{Im}\,A'_2}{\mathrm{Re}\,A_2}-\left(1-\Omega_{IB}\right)\frac{\mathrm{Im}\,A_0}{\mathrm{Re}\,A_0}\right],\qquad(9)$$

where the last expression includes isospin-breaking contributions due to π-η mixing, encoded in Ω_IB (A′_2 = A_2 − ωΩ_IB A_0) [5]. In the prediction of ε′/ε, ω and Re A_0 are taken from experiments, whereas Im A_{0,2} are the computed quantities.
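As a numerical illustration of the final expression for ε′/ε in this section (with the π-η mixing correction entering through A′_2 = A_2 − ωΩ_IB A_0), the sketch below evaluates the ratio from Im A_0 and the isospin-corrected Im A′_2. The experimental inputs ω ≈ 0.045, |ε| ≈ 2.28×10⁻³ and Re A_0 ≈ 3.33×10⁻⁷ GeV are standard; the imaginary parts passed in are illustrative placeholders, since they are precisely the quantities a theory calculation must supply:

```python
import math

def eps_prime_over_eps(im_a0, im_a2_prime,
                       re_a0=3.33e-7, omega=0.045,
                       abs_eps=2.28e-3, omega_ib=0.16):
    """eps'/eps = omega/(sqrt(2)|eps|) * [ImA2'/ReA2 - (1 - Omega_IB) ImA0/ReA0].

    re_a0 is in GeV; im_a0 and im_a2_prime are theory inputs (placeholders here).
    """
    re_a2 = omega * re_a0  # by definition omega = ReA2/ReA0
    return omega / (math.sqrt(2.0) * abs_eps) * (
        im_a2_prime / re_a2 - (1.0 - omega_ib) * im_a0 / re_a0
    )
```

Note that a positive Ω_IB reduces the weight of the Im A_0 term, which is one reason the isospin-breaking correction lowers the predicted ε′/ε when the gluonic-penguin contribution dominates.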
The calculation of the real part of the amplitudes, and hence of ω, is one of the longest-standing problems in particle physics: in spite of several decades of efforts, nobody has so far succeeded in explaining the ΔI = 1/2 rule in a convincing and quantitative way. The calculation of Im A_0 and Im A_2 is of comparable difficulty. Since the imaginary parts entering ε′/ε, however, are not directly related to the real ones, and the operators of the effective Hamiltonian contribute with different weights in the two cases, it is conceivable that Im A_0 and Im A_2 can be computed in spite of the difficulties encountered in calculations of the ΔI = 1/2 rule. On the other hand, as discussed below, one cannot exclude some common dynamical enhancement mechanism which produces large values of both Re A_0 and ε′/ε.
ε′/ε in the Standard Model
The natural theoretical framework for dealing with weak hadronic decays is provided by the effective Hamiltonian formalism. Indeed, the operator product expansion allows the separation of short- and long-distance scales and reduces the problem to the computation of Wilson coefficients, performed in perturbation theory, and to the calculation, with non-perturbative techniques, of local-operator matrix elements.
At the next-to-leading order (NLO) in the renormalization-group improved expansion, the 4-active-flavour (m_b > µ > m_c) ΔS = 1 effective Hamiltonian, relevant for ε′/ε, can be written as

$$\mathcal{H}_{\Delta S=1}=\frac{G_F}{\sqrt{2}}\,\lambda_u\sum_i\left[z_i(\mu)+\tau\,y_i(\mu)\right]Q_i(\mu)+\mathrm{h.c.}\,,\qquad(10)$$

where G_F is the Fermi constant, λ_q = V_{qd}V*_{qs} and τ = −λ_t/λ_u (the V_{q_i q_j} being the CKM matrix elements). The CP-conserving and CP-violating contributions are easily separated, the latter being proportional to τ.
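The Wilson coefficients (often denoted z_i and y_i for the CP-conserving and CP-violating parts) are obtained by matching at the weak scale and running down with the renormalization group. At NLO this requires the full anomalous-dimension matrices, but the flavour of the procedure can be shown with the classic leading-order result for the current-current operators Q± = (Q_2 ± Q_1)/2, whose coefficients evolve multiplicatively. The sketch below uses a one-loop coupling with an illustrative Λ_QCD and a fixed number of flavours across thresholds, so the numbers are qualitative only:

```python
import math

def alpha_s_lo(mu, nf=5, lam=0.220):
    """One-loop strong coupling; lam is an illustrative Lambda_QCD in GeV."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (b0 * math.log(mu * mu / (lam * lam)))

def c_plus_minus(mu, mw=80.4, nf=5):
    """LO Wilson coefficients of Q+- evolved from M_W down to mu (GeV)."""
    ratio = alpha_s_lo(mw, nf) / alpha_s_lo(mu, nf)
    d_plus = 6.0 / (33.0 - 2.0 * nf)
    d_minus = -12.0 / (33.0 - 2.0 * nf)
    return ratio ** d_plus, ratio ** d_minus

c_p, c_m = c_plus_minus(2.0)
# C- is enhanced and C+ suppressed at low scales: the (partial) short-distance
# origin of the Delta I = 1/2 enhancement mentioned in the text.
```

At leading order the combination C₊²·C₋ is exactly 1, a useful consistency check of the evolution exponents.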
Neglecting electro- and chromo-magnetic dipole transitions, the operator basis includes eleven independent local four-fermion operators, given in eq. (11); α and β are colour indices, and the sum index q runs over {d, u, s, c}. The operator Q_2 appears in the Fermi Hamiltonian at tree level. The operators Q_3-Q_6 are generated by the insertion of Q_2 into the strong penguin diagrams, whereas Q_7-Q_9 come from the electromagnetic penguin diagrams. Both classes of operators are relevant for ε′/ε. Further details on the NLO ΔS = 1 effective Hamiltonian can be found in refs. [6].
Using eq. (10), one can readily express A_0 and A_2 in terms of matrix elements of the operators in eq. (11). In the resulting expressions, the relevant matrix elements are given in terms of B-parameters defined as

$$B_i=\frac{\langle\pi\pi|Q_i|K\rangle}{\langle\pi\pi|Q_i|K\rangle_{VIA}}\,,$$

where the subscript VIA means that the matrix elements are calculated in the vacuum insertion approximation. VIA matrix elements are given in terms of three quantities, X, Y and Z. Contrary to X and Z, Y does not vanish in the chiral limit, as a consequence of the different chiral properties of the operators Q_7 and Q_8. Moreover, whereas X is expressed in terms of measurable quantities, both Z and Y depend on the quark masses, which must be taken from theoretical estimates. Note that some VIA matrix elements seem to show a quadratic dependence on the strange quark mass m_s through Y and Z. This is true as long as one fixes the kaon mass to its experimental value and neglects the m_s dependence of the B parameters. The apparent quadratic dependence of the matrix elements on m_s has been exploited in ref. [7] to claim that large values of ε′/ε can be obtained with suitably "small" strange quark masses. The actual dependence of the full matrix elements on m_s is, however, very different. Indeed, the ratio M²_K/m_s is essentially independent of m_s (up to small chiral-symmetry-breaking terms), since it corresponds to the value of the quark condensate. This is explicitly verified in lattice calculations, where a strong correlation between the value of the strange quark mass used in the VIA matrix elements and the value of the corresponding B parameters is observed, so that the m_s dependence in the physical matrix element almost cancels out [8]. This is why one should always use B-parameters and m_s consistently computed together (e.g. in the same simulation on the lattice) or, even better, matrix elements given in physical units without any reference to quark masses [8,9].
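The m_s cancellation described here can be made explicit with a toy numerical check: take a VIA matrix element that scales as (M_K²/(m_s+m_d))², and a B parameter extracted in the same simulation, i.e. defined as the ratio of a fixed physical matrix element to that VIA value. Combining B and the VIA expression with the same quark-mass input always returns the physical element, whereas mixing a B parameter obtained at one value of m_s+m_d with a VIA evaluated at a smaller one artificially inflates the result, which is the pitfall behind the "small m_s" claims. All numbers below are placeholders:

```python
def via_element(ms_plus_md, mk2=0.245, norm=1.0):
    """Toy VIA matrix element scaling as (M_K^2/(m_s + m_d))^2 (units arbitrary)."""
    return norm * (mk2 / ms_plus_md) ** 2

def b_parameter(ms_plus_md, physical_element=2.0, mk2=0.245, norm=1.0):
    """B parameter extracted with the SAME quark-mass input as the VIA value."""
    return physical_element / via_element(ms_plus_md, mk2, norm)

b = b_parameter(0.13)                 # B extracted at m_s + m_d = 0.13 GeV
consistent = b * via_element(0.13)    # ~ 2.0: the m_s dependence cancels
inconsistent = b * via_element(0.10)  # ~ 3.38: artificially inflated result
```

The inflation factor is just (0.13/0.10)² here, illustrating how an inconsistent choice of m_s directly rescales the "physical" prediction.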
Calculation of the hadronic matrix elements
Any prediction of ε′/ε must rely on the non-perturbative calculation of the relevant hadronic matrix elements. Theoretically, this calculation has to meet two requirements:
1. to be applicable up to perturbative energy scales;
2. to keep under control the definition of the renormalized operators and their consistent matching to the Wilson coefficients.
Failure to meet these requirements indicates that the method cannot achieve the necessary NLO accuracy, as is often the case with phenomenological models. Presently, however (or maybe for this reason), experimental data are more easily accommodated by such models than by methods based on first principles, such as lattice QCD. With this caveat in mind, we list and briefly comment on the various approaches that have been used in the literature. Various predictions of ε′/ε, obtained using different methods, are shown in the compilation of fig. 1, together with the present experimental world average.
Lattice QCD [10]
In principle, Lattice QCD is the non-perturbative method for computing matrix elements. Being a regularized version of the fundamental theory, it allows complete control over the definition of the renormalized operators, both at the perturbative and at the non-perturbative level. In addition, present simulations use inverse lattice spacings of 2 GeV or larger, and therefore the perturbative matching with the Wilson coefficients can be safely performed. Indeed, by using non-perturbative renormalization techniques, the matching scale in lattice calculations could be pushed to values as large as ∼ 10 GeV. Although, so far, these methods have only been implemented in the calculation of the strong coupling constant [11] and of the quark masses [12], they will certainly be extended to the four-fermion operators of the weak effective Hamiltonian. Once this is done, the error in the matching procedure will become negligible. For many years, a general no-go theorem [13] of Euclidean field theory has prevented the direct extraction, in numerical simulations, of physical matrix elements with more than one particle in the final state. For this reason, present lattice determinations of ⟨ππ|Q_i|K⟩ are obtained from ⟨π|Q_i|K⟩ using lowest-order chiral relations (i.e. soft-pion theorems). This means that final-state interactions are not taken into account and that large chiral corrections may be present [14]. In addition, only some of the matrix elements needed for computing ε′/ε are presently available. In particular, the Q_6 matrix element, which is expected to give the most important contribution to ε′/ε, has not been successfully computed yet.
Several theoretical progresses have opened a window of opportunity this year. In the past, a proposal to circumvent the no-go theorem of ref. [13] was made. The main idea was to extract the relevant matrix elements by studying suitable Euclidean Green functions at small time distances [15]. The weakness of this method, however, was that it relied on model-dependent smoothness assumptions which could lead to uncontrolled systematic errors.
A major step forward was made in ref. [16], where it was rigorously proven how to relate the matrix elements extracted on a finite volume in lattice simulations to the physical ⟨ππ|Q_i|K⟩ amplitudes. Moreover, it has been shown that the smoothness hypothesis of ref. [15] is unnecessary and that the physical K → ππ matrix elements can, at least in principle, be extracted from Euclidean correlation functions at finite time distances [17]. Although it will take some time before these approaches are implemented in practice, they certainly open new perspectives for lattice calculations. More details on the present status of lattice matrix elements, and possible developments in the near future, will be given in the discussion of the phenomenological analysis in the next section.
Phenomenological Approach [18]
In this approach, one basically attempts to extract information on the matrix elements relevant for ε′/ε by combining the measured values of the CP-conserving amplitudes with relations among different operators that can be established below the charm threshold under very mild assumptions (for details see ref. [19]). This procedure can be performed consistently at the NLO, allowing the extraction of matrix elements of well-defined renormalized operators.
Unfortunately, the leading contributions to ε′/ε, namely the matrix elements of Q_6 and Q_8^(3/2), cannot be fixed in this approach. Moreover, the method only works below the charm threshold, where higher-order perturbative and power corrections (in 1/m_c) may be large. In practice, for these matrix elements, Buras and collaborators have always used inputs coming from other theoretical sources, in particular the lowest-order 1/N expansion or lattice calculations.
Chiral+1/N Expansion [20]
This method relies on the non-perturbative technique originally proposed by Bardeen, Buras and Gérard [21]. In principle, the approach can be derived from QCD and allows the computation of all the matrix elements needed for calculating ε′/ε in a consistent theoretical scheme. In this framework, the Dortmund group computed the relevant matrix elements including the subleading corrections in both the chiral and the 1/N expansions.
This approach suffers, however, from the presence of quadratic divergences in the cutoff that must be introduced, beyond the leading order, in the effective chiral Lagrangian. The quadratic cutoff dependence, which appears in non-factorizable contributions, makes a consistent matching between the operator matrix elements and the corresponding Wilson coefficients, which depend only logarithmically on the cutoff, impossible. One may argue that the quadratic divergences would be cured and replaced by some hadronic scale in the full theory, which includes excitations heavier than the pseudoscalar mesons. In practice, since it is impossible to include the effects of higher-mass hadronic states, the cutoff is replaced with a scale of the order of 1 GeV, which is an arbitrary, although reasonable, choice. Since the divergent terms give very large contributions to the matrix elements entering ΔI = 1/2 transitions and ε′/ε, this introduces an uncontrolled numerical uncertainty in the final results.
Chiral Quark Model [22]
The χQM can be derived in the framework of the extended Nambu-Jona-Lasinio model of chiral symmetry breaking [22]. It contains an effective interaction between the u, d, s quarks and the pseudoscalar meson octet with three free parameters, two of which can be fixed using CP-conserving amplitudes. The Trieste group computed the O(p⁴) corrections to the relevant operators and found a correlation between the CP-conserving and CP-violating amplitudes, so that, once the parameters of the model are fixed to provide the required octet enhancement, it is possible to predict ε′/ε. The nice feature of this model is that, to some extent, it accounts for higher-order chiral effects, which are not easily included, for instance, in lattice calculations. The disadvantage is that the model dependence of the results can hardly be evaluated or corrected.
Theoretically, this approach shares some of the problems of the 1/N expansion, mainly those related to the presence of quadratic divergences. These do not appear explicitly in the calculations of the Trieste group simply because dimensional regularization is used. It remains true, however, that the scale and scheme dependence of the renormalized operators is not under control at NLO. In order to deal with this problem, a third parameter of the model is fixed by imposing a sort of numerical "γ_5"-independence on the physical amplitudes. This recipe, while suggesting that some degree of "effective" renormalization-scheme independence can be achieved, unfortunately has no sound theoretical basis. Finally, the correlation between the ΔI = 1/2 amplitude and ε′/ε is subject to potentially large uncertainties for the following reason. The parameters necessary to estimate the matrix element of Q_6 are fixed by fitting the ΔI = 1/2 amplitude. For this quantity, the contribution of Q_6 is rather marginal, whereas Q_1 and Q_2 dominate. Thus any small uncertainty in the dominant terms, due for example to unknown O(p⁶) corrections (O(p⁴) corrections to the ΔI = 1/2 amplitude are of O(100%)), may drastically change the determination of the matrix element of Q_6, which is the dominant term for ε′/ε.
Extended Nambu-Jona-Lasinio Model [23]
An extended Nambu-Jona-Lasinio model has been used in ref. [23] to compute the ΔI = 1/2 K → ππ matrix elements and ε′/ε. The remarkable feature of this computation is the high order in the momentum expansion reached by the Dubna group. All matrix elements have been computed to O(p⁶) and a good stability of the results has been found. In this respect, this approach is safer than the χQM. However, it shares with the χQM all the other theoretical flaws mentioned above, and particularly the problem of matching to the short-distance calculation, since it is unclear to which renormalized operators the amplitudes computed with the Dubna superpropagator regularization method correspond.
Generalized Factorization [24]
Generalized factorization has been introduced in the framework of non-leptonic B decays in order to parametrize the hadronic matrix elements without a priori assumptions [25]. The basic idea is to extract from experimental data as much information as possible on the non-factorizable parameters. When needed, the number of independent parameters can be reduced using flavour symmetries, dynamical assumptions, etc. In ref. [24] the procedure has been applied to K → ππ matrix elements. Unfortunately, in this case, the number of independent channels that one can use to fix the parameters is small (essentially only the two CP-conserving amplitudes). For his predictions, the author of ref. [24] was then forced to reduce the number of parameters by several "simplifying" assumptions, which are, however, questionable. Many parameters related to different operator matrix elements and to different colour structures were arbitrarily assumed to be equal. In such a way, a correlation between CP-conserving and CP-violating amplitudes was obtained, but the final results crucially depend on the assumptions, which are hardly justifiable theoretically and cannot be tested phenomenologically in processes different from ε′/ε.

σ Models [7,26]
A possible mechanism to enhance the ΔI = 1/2 amplitude is the exchange of a scalar I = 0 meson [27]. It also leads to an enhancement of ε′/ε, as recently studied in the framework of the linear [7] and non-linear [26] σ models. While unable to achieve NLO accuracy, these models can produce the required correlation between the ΔI = 1/2 rule and ε′/ε, at least for some choices of the free parameters, such as the σ mass. Also in this case, however, it is not easy to estimate the uncertainties and the model dependence of the theoretical predictions.
Other theoretical developments
The marginal agreement between the SM predictions and the measured value of ε ′ /ε stimulated various attempts to "improve" the determination of the operator matrix elements, by including effects that were not considered previously. In particular, new studies have been devoted to the calculation of isospin-breaking and final-state interaction effects.
In most of the approaches, isospin-breaking corrections are not included because they are beyond the reach of these methods. These effects can, however, be evaluated a posteriori and included in the predictions. The leading effect in the chiral expansion is expected to come from π-η-η′ mixing, which can be computed following ref. [5]. The resulting isospin-breaking effect is accounted for by the parameter Ω_IB, which appears in eq. (9). Recently, the calculation of Ω_IB has been updated by including the effect of π⁰-η mixing at O(p⁴) [28]. In addition, it has been pointed out in ref. [29] that new sources of isospin breaking appear, beyond the leading order, in the chiral Lagrangian. These terms may give large corrections to Ω_IB. Unfortunately, the calculation of these corrections is strongly model dependent and can only be taken as a warning about the potential importance of these effects.
The problem of including final-state interactions is particularly relevant for lattice or lowest-order 1/N calculations, where rescattering effects are missing. It has recently been suggested that these effects could be included by using the measured ππ phase shifts and dispersive techniques [14]. The resulting A_0 amplitude would be enhanced by the inclusion of final-state interactions, giving for ε′/ε a prediction much closer to the experimental value. This approach has been subject to several criticisms [30]. On the one hand, the analytic structure of the considered amplitudes is unclear and the corresponding dispersion relations questionable. On the other, the computation of the dispersive correction factors, as derived in ref. [14], is plagued by an irreducible ambiguity of the same order as the dispersive factors themselves. This uncertainty depends on the choice of the initial conditions which, as shown in [30], were arbitrarily chosen. For this reason, whereas final-state interactions are likely to give a certain qualitative enhancement, as argued in ref. [14], the quantitative estimate of these effects is subject to very large uncertainties. As discussed in [30], lattice calculations could help in this respect by fixing the initial conditions in an unambiguous way.
A short, provocative comment is necessary at this point. If one could implement in the same calculation all the corrections which have been suggested to improve the accuracy in the determination of the matrix elements (low strange-quark mass, isospin-breaking effects, final-state interactions, etc.), one would probably end up with a prediction of ε′/ε much larger than its experimental value. It is also quite astonishing that the effects which were not considered before, or those which have been revised in recent studies, all increase the theoretical prediction for this quantity, and none goes in the opposite direction. Finally, if the ΔI = 1/2 rule and the large value of ε′/ε are a consequence of many effects which are all necessary, in a conspiracy, to give a large enhancement, it seems very unlikely that any of the existing theoretical approaches (including the lattice one) will ever be able to take them into account simultaneously at the necessary level of accuracy.
During the completion of this paper, several new calculations of ε′/ε appeared: i) a new estimate of the Q_6 and Q_8^(3/2) matrix elements using QCD sum rules has been presented [31], with results for ε′/ε close to the experimental average; ii) very large values of ε′/ε, within big uncertainties, have also been obtained using the 1/N and chiral expansions in the context of the extended Nambu-Jona-Lasinio model; iii) in ref. [32], the proposal for controlling the scale and scheme dependence of renormalized operators using an intermediate X-boson has been implemented in the calculation of ε′/ε. We refer the reader to the original publications for more details.
Results for ε′/ε using Lattice QCD
In order to compute ε′/ε, besides the hadronic matrix elements one needs the value of the relevant combination of CKM matrix elements, namely Im V*_{ts}V_{td}. This is constrained by using the experimental information on |V_cb|, |V_ub|, B_{d,s}-B̄_{d,s} mixing and ε, combined with lattice results. Nowadays this has become a standard way of determining the CKM-matrix parameters within the Standard Model, described for instance in refs. [10,33]. It is worth noting that the linear dependence of ε′/ε on Im V*_{ts}V_{td} is strongly reduced by the constraint on the CKM parameters enforced by the measured value of ε. In the analysis reported in this paper, the same input parameters as in ref. [10] have been used, with the exception of Ω_IB = 0.16 ± 0.03, which is now taken from [28].
The discussion in ref. [10] about the current status of the lattice computation of the main matrix elements ⟨ππ|Q_6|K⟩ and ⟨ππ|Q_8^(3/2)|K⟩ can be summarized as follows: • At present, the matrix element ⟨ππ|Q_6|K⟩ is not reliably known from lattice QCD. The results with staggered fermions are plagued by huge corrections appearing in the operator renormalized using lattice perturbation theory [34]. Other attempts using Wilson fermions or domain-wall fermions have been unsuccessful so far.
• The matrix element ⟨ππ|Q_8^(3/2)|K⟩ has been computed by several groups using different formulations of the lattice fermion action and different lattice spacings. Substantial agreement among the different determinations was found, within a 20% uncertainty. We use the value given in eq. (15). Using some reasonable assumptions for the less important contributions due to other operators, and given that the largest uncertainty stems from our ignorance of ⟨ππ|Q_6|K⟩, a useful way of presenting the results is eq. (16), in which the matrix element of Q_6 is considered as a free parameter. Notice that the two terms in this equation are correlated and should not be varied independently. In order to compare eq. (16) with experiments, we have to make some assumption on the value of ⟨ππ|Q_6|K⟩. We take the central value suggested by the VIA (or equivalently by the lowest-order 1/N expansion), namely B_6 = 1, with an uncertainty of 100%. This introduces a renormalization-scheme ambiguity, since the VIA does not allow a proper definition of the renormalized operators. For this reason, results obtained by taking two different central values for ⟨ππ|Q_6|K⟩ (corresponding either to B_6^HV(2 GeV) = 1 or to B_6^NDR(2 GeV) = 1) are presented in eq. (17). In the two cases, we obtain the values of eq. (18). The difference between the two results, contrary to what is often stated in the literature, is not the uncertainty associated with the renormalization-scheme dependence, but reflects a different choice of the value of the matrix element in a given scheme (the HV scheme in the example of eq. (17)). At the NLO, the scheme dependence comes from higher-order corrections only, and its effect is estimated by the second error given in eq. (18). The two figures of eq. (18) really correspond to two different choices of the unknown value of ⟨ππ|Q_6|K⟩ at a given scale (µ = 2 GeV) and in a well-defined scheme (MS-HV). On the contrary, the two distributions of values for ε′/ε in fig. 2 include, for the same choice of ⟨ππ|Q_6|K⟩, two different ways to match Wilson coefficients and matrix elements, estimating the real scheme dependence due to higher-order terms. Both distributions refer to the case B_6(2 GeV, MS-HV) = 1 ± 1. The large error on the matrix element of Q_6 obviously dominates the final uncertainty on ε′/ε and flattens these distributions. In ref. [18] a more optimistic error for the B parameter (B_6(2 GeV, MS-HV) = 1 ± 0.2) was assumed.
Comparison with data
Many of the Standard Model predictions shown in fig. 1 are below the present experimental world average. What does this imply? There are three legitimate answers:
1. There is nothing wrong! For some specific choice of the input parameters, all the different approaches are able to reproduce the experimental data to some extent. In some cases the agreement seems to arise naturally from the calculation [31,32]. In other cases, it requires the adjustment of a few parameters [22] or a wise choice of several of them (often at the edge of the allowed range of values) [18,10]. It is puzzling that most of the approaches, which suffer from intrinsic and irreducible uncertainties coming from the model dependence of the results, are in good agreement with the data. In the case of refs. [10,18], instead, agreement requires that all the quantities over which we have poor control conspire in the direction of increasing the theoretical value of ε′/ε. Thus, although unlikely in our opinion, the possibility that there is nothing wrong is not excluded. It may also well be that some of the models are indeed able to describe the underlying strong dynamics.
2. There is something missing in the computation of the matrix elements.
The long-standing problem of explaining the ΔI = 1/2 rule suggests that some important dynamical effect is at work in K → ππ, I = 0 decays. Unfortunately, contrary to some old claims, there is no simple relation between the CP-conserving and CP-violating decays which could explain the large value of ε′/ε on the basis of the enhancement of the ΔI = 1/2 amplitude. Indeed, it would be very interesting if a common dynamical mechanism could explain both of them. In terms of Wick contractions in the matrix elements, such a mechanism could possibly be provided by a large contribution from eye diagrams (a.k.a. penguin contractions) [35]. From the lattice estimates, taking B_6 as a free parameter, we can reproduce the experimental ε′/ε with B_6^HV(2 GeV) ∼ 2.4. As we have seen, all non-perturbative methods are affected by theoretical and/or computational problems which limit their accuracy. Among them, the models based on the chiral expansion also support the existence of some correlation between the ΔI = 1/2 rule and ε′/ε, which is at least in qualitative agreement with the observations. A possible exception is that of ref. [23]. Thus we conclude that a real quantitative explanation is still to come.
3. Hadronic matrix elements are fine. New physics is at work. If the theoretical calculations which give low values for ε′/ε are correct, there is room for new physics effects. It is not difficult to imagine new sources of CP violation; in supersymmetry, for example, there are even too many. The problem is that we must find a model of new physics that obtains a sizeable contribution to ε′/ε while remaining within the stringent constraints imposed by ε and by other measured quantities. This problem can be circumvented, so that for instance supersymmetry is potentially able (with some special assumptions) to produce the required effect on ε′/ε while still fulfilling the other phenomenological constraints [36].
At present, our preferred answer is the second one. Hopefully, improvements in non-perturbative techniques and further insight into kaon phenomenology will clarify the mechanism responsible for the "large" value of ε′/ε and its connections with the ∆I = 1/2 rule.
Origins of the baryon spectrum
I begin with a key problem of light and strange baryon spectroscopy which suggests a clue for our understanding of the underlying dynamics. Then I discuss spontaneous breaking of chiral symmetry in QCD, which implies that at low momenta there must be quasiparticles: constituent quarks with dynamical mass, which should be coupled to other quasiparticles, the Goldstone bosons. It is then natural to assume that in the low-energy regime the underlying dynamics in baryons is due to Goldstone boson exchange (GBE) between constituent quarks. Using as a prototype of the microscopic quark-gluon degrees of freedom the instanton-induced 't Hooft interaction, I show why the GBE is so important. When the 't Hooft interaction is iterated in the q̄q t-channel it inevitably leads to a pole which corresponds to GBE. This is a typical antiscreening behavior: the interaction is represented by a bare vertex at large momenta, but it blows up at small momenta in the channel with GBE quantum numbers, thus explaining the distinguished role of the latter interaction in the low-energy regime. I show how the explicitly flavour-dependent short-range part of the GBE interaction between quarks, perhaps in combination with the vector-meson exchange interaction, solves a key problem of baryon spectroscopy, and present spectra obtained in a simple analytical calculation as well as in an exact semirelativistic three-body approach.
1 Where is a key problem?
If one considers a model with an effective confining interaction between quarks in light and strange baryons which is flavour- and spin-independent¹, and assumes that there are no residual interactions, then the spectrum of the lowest baryons should be arranged into successive bands of positive and negative parity, see Fig. 1. In Nature, however, the lowest levels in the spectra of the nucleon, the ∆-resonance and the Λ-hyperon, which are shown in Fig. 2, look rather different. One can immediately conclude that a picture where all other possible interactions are treated as only residual and weak is certainly wrong. ¹ The Thomas precession, which is a kinematical effect and which produces a strong spin-orbit force, is certainly present in heavy quark systems, where the heavy quark constantly sits on the end of the string; a relativistic rotation of the string implies the Thomas precession. In light quark systems, where it costs no energy to break a string and the light quark permanently fluctuates into another quark and a quark-antiquark pair, this kinematical effect should be strongly suppressed. That is why there are no strong spin-orbit splittings in light baryon and meson spectra. Typically models pay attention to the octet-decuplet splittings. Within a quark picture one needs a spin-spin force between valence quarks with a proper sign. Then, by adjusting the strength of this spin-spin force, one can explain why ∆ is heavier than the nucleon, or why Σ is heavier than Λ [1]. When QCD appeared, it was immediately suggested that such a spin-spin force is supplied by the colour-magnetic component of the one gluon exchange (OGE) [2,3,4], in analogy with the magnetic hyperfine interaction from one photon exchange in quantum electrodynamics. At the price of a very large strong coupling constant, α_s ∼ 1, one can then fit the ∆ − N mass difference. Clearly such a picture is self-contradictory, because a big value of α_s is not compatible with a perturbative treatment of QCD.
The crucial point, however, is that the perturbative gluon exchange (no matter whether one gluon exchange or one thousand gluon exchanges) is sensitive only to the spin (and colour) degrees of freedom of quarks; there is no sensitivity at the operator level to the flavour of quarks (in the u, d, s quark sector there is only a very weak sensitivity via the different quark masses, which completely vanishes in the chiral limit). The spin structure of all baryons in the N and Λ spectra depicted in Fig. 2 is the same: it is described by the mixed permutational symmetry. This means that the contribution of the colour-magnetic interaction to leading order is the same in all these baryons (up to some small difference in the baryon orbital wave functions), which is in apparent conflict with the opposite orderings of the lowest levels in the N and Λ spectra. The only difference between the N and Λ systems is that one light quark is substituted by a strange one. This immediately hints that the physics responsible for Fig. 2 should be explicitly flavour dependent. In addition, a colour-magnetic interaction cannot shift the N = 2 states N(1440) and Λ(1600) below the N = 1 states N(1535) − N(1520) and Λ(1670) − Λ(1690), respectively, because to leading order its contribution is the same in all these states. In the ∆ spectrum the situation is even more dramatic, as the colour-magnetic interaction shifts the N = 2 state ∆(1600) up, but not down, with respect to the N = 1 pair ∆(1620) − ∆(1700).
These facts rule out the perturbative gluon exchange picture as the source of the hyperfine interactions in light and strange baryons.
The other possible source of the hyperfine interactions, the 't Hooft instanton-induced interaction [5] between valence quarks, could, generally speaking, generate the octet-decuplet splittings [6,7,8] when its strength is adjusted. However, it is easy to see from its operator structure that it also fails to explain the pattern of Fig. 2. The most convincing evidence comes from the ∆ spectrum, where the 't Hooft interaction between valence quarks is identically zero (it is absent in flavour-symmetric states). According to this scenario, the ∆ spectrum should therefore be exclusively due to the confining interaction, which is ruled out by a comparison of Figs. 1 and 2.
Thus a key problem is to explain at the same time both the octet-decuplet splittings and the pattern of the lowest excitations shown in Fig. 2.

2 Spontaneous breaking of chiral symmetry

There are two important generic consequences of the spontaneous breaking of chiral symmetry (SBCS). The first is the appearance of the octet of pseudoscalar mesons of low mass, π, K, η, which represent the associated approximate Goldstone bosons (in the large N_c limit the flavour singlet state η′ should be added). The second is that valence (practically massless) quarks acquire a dynamical mass, which has historically been called the constituent mass. Indeed, the nonzero value of the quark condensate, ⟨q̄q⟩ ∼ −(250 MeV)³, itself implies at the formal level that there must be at low momenta a rather big dynamical mass, which should be a momentum-dependent quantity. Such a dynamical mass is now directly observed on the lattice [9]. Thus the constituent quarks should be considered as quasiparticles whose dynamical mass at low momenta comes from nonperturbative gluon and quark-antiquark dressing. Conservation of the flavour-octet axial current in the chiral limit implies that the constituent quarks and Goldstone bosons should be coupled with the strength g = g_A M/f_π [10], which is a quark analog of the famous Goldberger-Treiman relation.
We have recently suggested that in the low-energy regime, below the chiral symmetry breaking scale, Λ_χ ∼ 1 GeV, the low-lying light and strange baryons should be predominantly viewed as systems of three constituent quarks with an effective confining interaction.

3 Why is the Goldstone boson exchange so important?
Consider as an example of a microscopic nonperturbative QCD interaction the instanton-induced 't Hooft interaction for two light flavours (I consider for simplicity the chiral limit). This interaction is known to lead to chiral symmetry breaking, i.e. to the creation of the quark condensate and a dynamical (constituent) mass m of quarks. This happens because of the first term in (1), which represents the scalar part of the interaction. The interquark interaction in the pseudoscalar-isovector q̄q systems is driven by the second term, which is attractive and so strong that when it is iterated it exactly compensates the 2m energy supplied by the first term, so that there appear T = 1, J^P = 0^− mesons with zero mass: the Nambu-Goldstone bosons. The first two terms in (1) form the classical Nambu and Jona-Lasinio model [14]. The fourth term in (1), which is repulsive, contributes only in the flavour-singlet q̄q pair (η′), making this meson heavy, contrary to π, and thus solving the U(1)_A problem (note that the perturbative gluon exchange force cannot solve it). There is no interaction term which can contribute in vector mesons. This means that the masses of the vector mesons, ρ and ω, should be approximately 2m, which is well satisfied empirically. The interaction (1), extended to the u, d, s sector, also naturally explains the completely different mixing between the octet and singlet components in the pseudoscalar and vector mesons [15].
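The display equation (1) referred to above may be written, in the standard local two-flavour form of the 't Hooft determinant interaction (a reconstruction consistent with the four-term structure described in the text; the overall normalization is an assumption):

```latex
\mathcal{L}_{\text{'t Hooft}} = G\Big[(\bar\psi\psi)^2
  + (\bar\psi\, i\gamma_5\vec\tau\,\psi)^2
  - (\bar\psi\,\vec\tau\,\psi)^2
  - (\bar\psi\, i\gamma_5\psi)^2\Big] \tag{1}
```

With this sign convention the first term drives the chiral symmetry breaking, the second is attractive in the pseudoscalar-isovector (pion) channel, and the fourth is repulsive in the flavour-singlet pseudoscalar (η′) channel, as the surrounding discussion requires.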
Having mentioned all the positive features of the Hamiltonian (1) in mesons, I shall now discuss its implications in baryons [16]. As I said, a direct application of this instanton-induced interaction between valence quarks in baryons does not solve the problems. But what happens when this interaction is iterated in the q̄q t-channel, see Fig. 3? Specifically, the second term in (1) implies the amplitude (2), where J_P(q²) is a bubble with a pseudoscalar vertex (the vacuum polarization in the pseudoscalar channel). The denominator in (2) has a pole in the chiral limit at q² = 0, which can be identified as a pion exchange (beyond the chiral limit it is shifted to the physical pion mass, q² = µ_π²). The coupling constant of the pion to the constituent quark can be obtained as the residue of (2) at the pole. Eq. (2) defines a "running amplitude" and the negative sign in the denominator implies its antiscreening behavior. In essence this antiscreening is some kind of asymptotic freedom: at sufficiently large space-like momenta the interaction is represented by a pure 't Hooft vertex (i.e. it has a strength 2G), but at q² → 0 it becomes infinitely enhanced in the channel with GBE quantum numbers. So, if a typical momentum transfer is not large, which is the case in baryons in the low-energy regime, the pole contribution dominates. This explains why the GBE is so crucially important both in baryons and in baryon-baryon systems. Thus the GBE interaction between constituent quarks is an effective representation of the pole contribution in (2), which is provided by the original quark-gluon degrees of freedom.
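The iterated ("bubble-summed") amplitude (2) can be sketched, assuming the standard chain-summation form implied by the text (a reconstruction; the bare strength 2G and the pseudoscalar bubble J_P(q²) are as described above):

```latex
T(q^2) = \frac{2G}{1 - 2G\,J_P(q^2)} \tag{2}
```

At large space-like q² the bubble contribution is small and T → 2G, while in the chiral limit the denominator vanishes at q² = 0, producing the Goldstone pole and the antiscreening behavior described in the text.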
In fact any pairwise gluonic interaction between quarks in the local approximation will necessarily contain the first and second terms of (1) with fixed relative strength. This is because of chiral invariance. Thus all our conclusions on π − ρ mass splitting and Goldstone boson exchange interaction in baryons are rather general and do not rely necessarily on 't Hooft interaction.
4 The Goldstone boson exchange interaction
The coupling of the constituent quarks and the pseudoscalar Goldstone bosons will (in the SU(3)_F symmetric approximation) have the form (g/2m) ψ̄γ_µγ_5 λ^F ψ · ∂^µ φ within the nonlinear realization of chiral symmetry (it would be i g ψ̄γ_5 λ^F · φ ψ within the linear chiral symmetry representation). A coupling of this form, in a nonrelativistic reduction for the constituent quark spinors, will to lowest order give rise to the σ · q λ^F structure of the meson-quark vertex, where q is the meson momentum. This type of vertex implies spin-spin and tensor interactions between constituent quarks, mediated by Goldstone bosons. The spin-spin force has a traditional long-range Yukawa part, which is important for the nuclear force. But at short range the spin-spin force is much stronger and its sign is opposite. This short-range interaction has the form (3) [13], where the radial behavior of the short-range interaction is unknown. It is this short-range part of the GBE interaction between the constituent quarks that is of crucial importance for baryons: it has the sign appropriate to reproduce the level splittings and strongly dominates over the Yukawa tail towards short distances. Note that this spin-spin force is explicitly flavour-dependent, which reflects the fact that the GBE interaction is a flavour-exchange one. It is also significant that this short-range part of the interaction appears at leading order within chiral perturbation theory (i.e. in the chiral limit) [17], while the Yukawa part of the interaction vanishes in this limit. This simple observation has far-reaching consequences: while the physics of baryons does not change much in the chiral limit (e.g. the ∆ − N mass splitting persists), the long-range nuclear spin-spin force vanishes. This means that in some sense the short-range part of the pion exchange interaction is "more fundamental" than its Yukawa part.
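The short-range flavour-spin form (3) referred to here may be written (a reconstruction in the Glozman-Riska form; the sign convention and the unknown radial function V(r) follow the description above):

```latex
H_{\chi}^{\text{short}} \sim -\sum_{i<j} V(r_{ij})\,
  \vec\lambda_i^{F}\!\cdot\!\vec\lambda_j^{F}\;
  \vec\sigma_i\!\cdot\!\vec\sigma_j \tag{3}
```

with V(r) > 0 at short range, so that the sign of the level shift is controlled by the flavour-spin matrix element.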
5 The vector- and scalar-exchange interactions
Already in ref. [13] it was pointed out that vector-like meson exchange interactions could also be important. This possibility is taken seriously in refs. [18,19]. Both the vector- and scalar-meson exchange interactions can also be considered as representations of the correlated two-GBE interaction [20], as it has a vector meson pole in the t-channel. A phenomenological motivation to include these interactions in addition to the one-GBE is as follows. The spin-spin component of the vector-meson exchange interaction at short range has exactly the same flavour-spin structure (3) as the one-GBE, but their tensor force components are of opposite sign and cancel each other to a large extent. This could explain the empirical fact that the tensor force component of the interaction between quarks in baryons should not be large; otherwise it would cause small, but empirically counter-indicated, spin-orbit splittings in L=1 baryons. The small net tensor force component should, however, be important for the mixing in baryon wave functions, while the baryon mass is weakly sensitive to this small residual tensor force. The present uncertainties in the coupling constants and the unknown short-range behavior of these effective interactions make it very difficult to determine the precise amount (and even the sign) of this weak net tensor force from low-lying baryon spectroscopy. Other data, e.g. mixing angles extracted from strong and electromagnetic decays, should be used to determine the precise relative contributions of the effective ps- and vector-exchanges. The scalar- and vector-meson exchanges have spin-orbit force components. These spin-orbit forces are known to be very important in the NN system, where both ρ- and ω-exchange provide a spin-orbit force with the same sign in the P-wave. In baryons the relative sign of these spin-orbit components becomes opposite in the P-wave (because of the additional colour degree of freedom) and the ρ-exchange spin-orbit force becomes strongly enhanced [18].
This explains a weak net spin-orbit force in baryons, while it is big and empirically very important in baryon-baryon systems.
6 The flavour-spin hyperfine interaction and the structure of the baryon spectrum

Summarizing the previous sections, one concludes that the pseudoscalar- and vector-meson exchange interactions produce a strong flavour-spin interaction (3) at short range, while the net tensor and spin-orbit forces are rather weak. That the net spin-orbit and tensor interactions between constituent quarks in baryons should be weak also follows from the typically small splittings in LS-multiplets, which are of the order of 10-30 MeV. These small splittings should be compared with the hyperfine splittings produced by the spin-spin force, which are of the order of the ∆ − N splitting. Thus, indeed, in baryons it is the spin-spin interaction (3) between constituent quarks that is of crucial importance. Consider first, for the purposes of illustration, a schematic model which neglects the radial dependence of the potential function V(r) in (3), and assume a harmonic confinement among quarks as well as m_u = m_d = m_s. In this model the interaction reduces to the form (4). The Hamiltonian (4) reduces the SU(6)_FS symmetry down to SU(3)_F × SU(2)_S. Let us now see how the pure confinement spectrum of Fig. 1 becomes modified when the Hamiltonian (4) is switched on. The leading SU(6) wave functions are known for all low-lying baryons and we can thus evaluate analytically the expectation values of the operator (4) [13]. For the first negative parity excitations in the N spectrum (N = 1 shell), N(1535) and N(1520), the matrix element of (4) is −2C_χ. The first negative parity excitations in the ∆ spectrum, ∆(1620) and ∆(1700) (N = 1), produce the matrix element 4C_χ. The first negative parity excitation in the Λ spectrum (N = 1 shell), Λ(1405) - Λ(1520), is flavour singlet and, in this case, the corresponding matrix element is −8C_χ. The latter state is unique and is absent in other spectra due to its flavour-singlet nature.
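In the schematic model, the interaction (4) is the radial-independent limit of the short-range flavour-spin force; assuming the conventional constant C_χ > 0, it may be written as (a reconstruction consistent with the matrix elements quoted in the text):

```latex
H_{\chi} = -\,C_{\chi} \sum_{i<j}
  \vec\lambda_i^{F}\!\cdot\!\vec\lambda_j^{F}\;
  \vec\sigma_i\!\cdot\!\vec\sigma_j \tag{4}
```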
These matrix elements alone suffice to prove that the ordering of the lowest positive and negative parity states in the baryon spectrum will be correctly predicted by the chiral boson exchange interaction (4). The constant C_χ may be extracted from the N − ∆ splitting to be 29.3 MeV. The oscillator parameter ħω, which characterizes the effective confining interaction in this schematic model, may be determined as one half of the mass difference between the first excited positive parity states and the ground states; in the Λ system these are the lowest 1/2⁺ excitation, Λ(1600), and the lowest negative parity states. One recovers precisely the spectrum shown in Fig. 2. It is astonishing that such a crude model predicts not only the general structure of the low-lying spectrum, but also the absolute values of the splittings.
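The level ordering produced by the schematic model can be sketched numerically. The value C_χ = 29.3 MeV and the −2C_χ matrix element are quoted in the text; the oscillator parameter ħω ≈ 250 MeV and the −14C_χ matrix element for the nucleon ground state and its radial excitation are illustrative assumptions of mine, not values stated in this excerpt.

```python
# Schematic flavour-spin model: band energy N*hbar_omega plus <H_chi> shift.
# C_CHI is quoted in the text; HBAR_OMEGA and the -14 C_chi matrix elements
# are illustrative assumptions, not values from this excerpt.
C_CHI = 29.3        # MeV, extracted from the N-Delta splitting (text)
HBAR_OMEGA = 250.0  # MeV, assumed oscillator parameter

def level(n_band, chi_units):
    """Energy relative to the ground band: n*hbar_omega + <H_chi>."""
    return n_band * HBAR_OMEGA + chi_units * C_CHI

N_ground = level(0, -14)   # assumed octet ground-state matrix element
N_1440   = level(2, -14)   # radial excitation, same flavour-spin m.e. (assumed)
N_1535   = level(1, -2)    # first negative-parity pair (quoted: -2 C_chi)

# The Roper-like radial excitation drops below the negative-parity pair,
# reproducing the reversed ordering of Fig. 2.
print(N_1440 < N_1535)
```

The point of the sketch is that the strongly attractive flavour-spin shift in the N = 2 radial excitation overcomes one unit of oscillator energy, inverting the naive band ordering.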
This simple example shows how the chiral interaction provides the different orderings of the lowest positive and negative parity excited states in the spectra of the nucleon and the Λ-hyperon. This is a direct consequence of the symmetry properties of the boson-exchange interaction [13]. If one goes beyond the schematic, but analytical, calculation above, one should parameterize the short-range parts of the interaction (the long-range parts are fixed by the meson masses), extract approximate meson-quark coupling constants from the known meson-baryon ones, and solve the three-body equations numerically. Such a program, with a semirelativistic Hamiltonian (i.e. the kinetic energy taken in relativistic form) and with linear confinement, has been realized in refs. [21,19]. In the former case [21] only the spin-spin force of the GBE interaction is included, while in the latter [19] ps-, vector- and scalar-exchanges are considered with spin-spin, tensor and central force components. The spectra in both cases look very much the same, which is achieved by a slight readjustment of the cut-off parameters in the latter case, see Fig. 4.
It is clear that the higher Fock components QQQπ, QQQK, ... (including the meson continuum) cannot be completely integrated out in favor of meson-exchange Q-Q potentials for states above or near the corresponding meson thresholds. Such components, in addition to the main QQQ one, could explain e.g. the exceptionally large splitting of the flavour-singlet states Λ(1405) − Λ(1520), since the Λ(1405) lies below the K̄N threshold and can be represented as a K̄N bound system [22]. Note that in the present approach this old idea is completely natural and does not contradict the flavour-singlet QQQ nature of Λ(1405) (it simply means that both the QQQ and QQQK components are significant), while it would be in conflict with the naive constituent quark model, where there is no room for mesons in baryons. An alternative explanation of this extraordinarily big LS splitting would be that there is some rather large spin-orbit force specific to the flavour-singlet state only, which is also not ruled out.
An admixture of higher Fock components will be important for understanding the strong decays of some excited states, especially when the threshold in the decay channel is close to the resonance energy. While technically the inclusion of such components in addition to the main QQQ one in a coupled-channel approach is a rather difficult task, it should be considered one of the most important future directions.
7 Instead of a conclusion
Similar conclusions, that it is the GBE force which is responsible for the ∆ − N splitting, have been obtained in a recent lattice study [23]. A phenomenological analysis of the L=1 negative parity spectra [24], as well as 1/N_c expansion studies of the L=1 nonstrange spectra and of the mixing angles obtained in strong and electromagnetic decays [25], also lend credibility to the interaction (3).
Finally, it is worth mentioning that this quark-quark interaction in baryon-baryon systems provides a strong short-range repulsive core [26,27].
Potential antioxidant activity of Lactobacillus fermentum KF7 from virgin coconut oil products in Padang, West Sumatra, Indonesia
Syukur S, Safrizayanti, Zulaiha S, Supriwardi E, Fahmi A, Nurfadilah KK, Purwati E. 2022. Potential antioxidant activity of Lactobacillus fermentum KF7 from virgin coconut oil products in Padang, West Sumatra, Indonesia. Biodiversitas 23: 1628-1634. The isolation and characterization of lactic acid bacteria from five commercial Virgin Coconut Oil (VCO) products circulating in the local market of Padang city were conducted. In this study, eighteen isolates of lactic acid bacteria (LAB) were isolated and characterized microscopically and biochemically. The VCO-LAB isolates were Gram-positive bacilli and cocci, catalase-negative, and homofermentative (except VD, which was heterofermentative). Three VCO-LAB isolates (VB.3, VD and VE.4) could produce gamma-aminobutyric acid (GABA). The highest concentration of GABA (19.5 mg/mL) was produced by isolate VB.3 after 72 hours of incubation and the addition of 7% MSG. Isolate VB.3 showed the highest antioxidant activity with the addition of monosodium glutamate (MSG): 68.13% against 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and 88.02%. In addition, GABA-producing VCO-LAB could grow in media containing 0.2-1.0 mM hydrogen peroxide, with the lowest optical density value being 0.82. The antioxidant activity of VCO-LAB was affected by the GABA produced and by secondary metabolites such as peptides and antioxidant enzymes (NADH oxidase, superoxide dismutase, NADH peroxidase and non-heme catalase). The highest antioxidant production was obtained in isolate VB.3, Lactobacillus fermentum KF7.
INTRODUCTION
Virgin Coconut Oil (VCO) can be made using several procedures, such as fermentation or cold pressing, without added chemicals (Syukur et al. 2018). VCO has several advantages as a supplement, such as increasing High-Density Lipoprotein (HDL), acting as an anti-cholesterol agent and supporting energy metabolism (Harini and Astirin 2009; Syukur et al. 2017). Several LAB found in VCO fermentation possess antibacterial activity against Staphylococcus aureus and Listeria monocytogenes (Syukur et al. 2018). Reports concerning lactic acid bacteria as potential probiotics in buffalo milk (dadih) (Syukur et al. 2015), cacao fermentation (Syukur et al. 2013), tempoyak (Hendry et al. 2021) and palm sugar (Rahmadhanti et al. 2021) indicate that probiotic research is growing. In this paper, the antioxidant activity of GABA from the probiotic Lactobacillus fermentum KF7 is linked to activation of the Nrf2 (nuclear factor-E2 related factor 2) molecule, which can increase gene transcription of antioxidant enzymes such as glutathione peroxidase (GPx) and superoxide dismutase (SOD) (Chen et al. 2013). GABA can inhibit lipid peroxidation, reduce the content of malondialdehyde (MDA) and increase the activity of antioxidant enzymes such as GPx, SOD and catalase (Chen et al. 2013; Di Lorenzo et al. 2016).
GABA is a non-protein amino acid resulting from glutamate decarboxylation catalyzed by the enzyme glutamate decarboxylase (GAD). GABA is also a major inhibitory neurotransmitter in the central nervous system. GABA deficiency can cause several diseases such as Huntington's, Parkinson's, Alzheimer's, schizophrenia and depression (Diana et al. 2014).
One compound that has antioxidant activity and the potential to be developed is gamma-aminobutyric acid (GABA). GABA is formed by an irreversible decarboxylation reaction of L-glutamic acid or its salt, catalyzed by the enzyme glutamate decarboxylase (GAD; EC 4.1.1.15). Several studies used Fenton's solution to form hydroxyl radicals and then tested the samples' antioxidant activity against these hydroxyl radicals spectrophotometrically (Arasu et al. 2013; Das and Goyal 2014).
The principle of the DPPH method is based on the ability of antioxidant compounds to donate hydrogen to DPPH free radicals (Purwaningsih 2012). Compounds that are active as antioxidants reduce the DPPH free radical to DPPH-H (the non-radical diphenylpicrylhydrazine compound), as shown in Figure 1. The antioxidant activity is indicated by a color change from purple to yellow, and the absorbance is measured using a UV-Vis spectrophotometer at a wavelength of 517 nm.
In this study, we evaluated the antioxidant activity of L. fermentum KF7 against DPPH. Fenton's reagent is a solution of hydrogen peroxide with iron as a catalyst, used to oxidize contaminants. Iron(II) is oxidized by hydrogen peroxide to iron(III), forming hydroxyl radicals and hydroxide ions. The antioxidant activity against ABTS free radicals is tested with ABTS•+ produced by oxidation of ABTS with potassium persulfate before the addition of antioxidant compounds, as shown in Figure 2. The ABTS•+ method is based on ABTS•+ decolorization, measured spectrophotometrically at a wavelength of 734 nm (Han et al. 2017). The percentage of antioxidant activity was measured with the equation [(A0 − A1)/A0] × 100, where A0 and A1 were the absorbance of the control (ABTS•+ solution) and the sample, respectively (Han et al. 2017).
Isolation and purification of LAB from 5 VCO commercial products
Isolation of LAB was carried out by the serial dilution-agar plate method. The VCO sample was diluted in de Man, Rogosa and Sharpe (MRS) Broth (Merck) (1:9, v/v) and incubated anaerobically at 37°C for 24 hours. Serial dilution was carried out up to 10⁻⁸, spread on MRS Agar, and then incubated anaerobically at 37°C for 48 hours. Single colonies that were round, convex, shiny and yellowish-white in color, growing separately with different diameters, were re-inoculated on MRS Agar by the streak method to obtain pure VCO-LAB isolates (LAB isolated from VCO). After purification, the isolates were stored at -80°C in a mixture of MRS Broth and glycerol (4:6, v/v). The culture stock was re-grown in MRS Broth (1:9, v/v) for 18-24 hours before being used for the next stage of the study (Syukur et al. 2013).
Macroscopic, microscopic and biochemical characterization of LAB-VCO
Macroscopic characterization was carried out by inoculating LAB cultures on de Man, Rogosa and Sharpe (MRS) Agar (Merck) to observe the isolates' shape, color and convexity. Next, microscopic characterization was carried out through Gram staining to observe cell shape and color. Then, biochemical characterization was carried out by means of a catalase test and a fermentation-type test (Syukur et al. 2018).
Measurement of growth of VCO-LAB
VCO-LAB was cultivated in MRS Broth at 37°C for 24 hours. LAB growth was determined by measuring optical density at a wavelength of 600 nm at 2-hour intervals until the stationary phase was reached (Li et al. 2010).
Analysis of VCO-LAB GABA producers by Semi-Quantitative method
VCO-LAB isolates that showed GABA spots in the qualitative analysis were analyzed semi-quantitatively with the paper chromatography (KK) pre-staining method according to the procedure of Li et al. (2009) with modifications. VCO-LAB cultures were grown in MRS Broth with various MSG (Merck) concentrations (1, 3, 5 and 7%) and incubation times (24, 48 and 72 hours) anaerobically at 37°C. A 2 µL aliquot of the LAB culture supernatant was spotted on the chromatography paper. The separation was carried out for 3 hours in a mobile phase of n-butanol:acetic acid:aquadest (5:3:2) containing 1.2% ninhydrin. The chromatography paper was then dried at 70°C for 80 minutes for spot color visualization. The GABA spot was cut from the paper and extracted with 5 mL of 75% alcohol (v/v):0.4% copper sulfate (w/v) (38:2, v/v) at a speed of 50 rpm and a temperature of 40°C for 1 hour. The absorbance of the sample was measured with a UV-Vis spectrophotometer (Perkin Elmer) at a wavelength of 512 nm. Before working on the samples, the absorbance of standard GABA solutions was measured using the same KK pre-staining method, and a calibration curve was made to obtain the equation y = ax + b. The GABA concentration in a sample could then be determined by entering the sample's absorbance into the regression equation of the GABA standard solutions.
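The calibration step (fitting y = ax + b to the GABA standards and inverting it for samples) can be sketched with a least-squares fit. The standard concentrations and absorbances below are made-up illustrative numbers, not data from the paper.

```python
import numpy as np

# Hypothetical GABA standards: concentration (mg/mL) vs absorbance at 512 nm.
# These numbers are illustrative only, not taken from the paper.
conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
absorb = np.array([0.02, 0.21, 0.40, 0.59, 0.78])

# Fit the calibration line y = a*x + b.
a, b = np.polyfit(conc, absorb, 1)

def gaba_concentration(sample_abs):
    """GABA concentration (mg/mL) from a measured absorbance via the curve."""
    return (sample_abs - b) / a

print(round(gaba_concentration(0.40), 2))
```

Once the slope a and intercept b are fixed by the standards, each sample absorbance is converted to a concentration by the same inversion, exactly as described in the text.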
Antioxidant activity of GABA-producing VCO-LAB against DPPH
The antioxidant activity assay was conducted according to Lee et al. (2010) with a few modifications. A 100 µL aliquot of 0.4 mM DPPH solution was mixed with 100 µL of LAB culture (with or without the addition of MSG). The mixture was incubated at 37°C in the dark for 30 minutes and the absorbance was then measured at a wavelength of 517 nm using a microplate reader. Antioxidant activity of the sample (%) = [(A0 − A1)/A0] × 100, where A0 and A1 are the absorbance of the control (DPPH solution) and the sample, respectively.
Antioxidant activity of VCO-LAB, GABA producer, against ABTS•
The antioxidant activity assay was conducted according to Lee et al. (2010) with a few modifications. ABTS•+ was diluted in aquabidest until the absorbance value was 0.70 ± 0.01 at a wavelength of 734 nm. Next, 900 µL of the ABTS•+ solution was mixed with 100 µL of LAB culture (with or without the addition of MSG). The mixture was incubated at room temperature in the dark for 6 minutes. The absorbance was measured at a wavelength of 734 nm and the antioxidant activity of the sample (%) = [(A0 − A1)/A0] × 100 was calculated, where A0 and A1 were the absorbance of the control (ABTS•+ solution) and the sample, respectively (Arasu et al. 2013).
Antioxidant activity of VCO-LAB, GABA-producer, against hydroxyl free radical
The antioxidant activity procedure was carried out according to Arasu et al. (2013) with slight modifications. Fenton's reaction mixture, consisting of 1 mL brilliant green (0.435 mM), 2 mL FeSO4·7H2O (0.5 mM) and 1.5 mL H2O2 (3%, w/v), was added to 1 mL of LAB culture (with or without the addition of MSG). The mixture was incubated at room temperature for 15 minutes and the absorbance was read at a wavelength of 624 nm. The antioxidant activity of the sample (%) = [(As − A0)/(A − A0)] × 100, where A0, A and As are the absorbance of the control (Fenton's solution without sample), the blank and the mixture of Fenton's solution and sample, respectively.
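The two activity formulas used in these assays can be collected into small helpers; the function names and the example absorbances are mine, chosen only for illustration.

```python
def scavenging_pct(a_control, a_sample):
    """DPPH/ABTS radical scavenging: [(A0 - A1)/A0] * 100."""
    return (a_control - a_sample) / a_control * 100.0

def hydroxyl_pct(a_control, a_blank, a_sample):
    """Hydroxyl radical assay: [(As - A0)/(A - A0)] * 100 (Arasu et al. 2013)."""
    return (a_sample - a_control) / (a_blank - a_control) * 100.0

# Example with made-up absorbance readings:
print(round(scavenging_pct(0.70, 0.22), 2))      # 68.57
print(round(hydroxyl_pct(0.10, 0.60, 0.45), 2))  # 70.0
```

Note the two formulas run in opposite directions: in the DPPH/ABTS assays a lower sample absorbance means more radical quenched, while in the Fenton assay the sample absorbance is compared between the blank and the fully reacted control.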
Resistance of VCO-LAB, GABA producer, to hydrogen peroxide
LAB cultures were grown in MRS Broth (1:9, v/v) and in MRS Broth containing 0.2, 0.4, 0.6, 0.8 and 1.0 mM hydrogen peroxide. The mixtures were incubated at 37°C for 8 hours. The growth of LAB cells was measured spectrophotometrically at a wavelength of 600 nm (Arasu et al. 2013).
Data analysis
The data were obtained from three replicate analyses and are shown as mean ± standard deviation (SD). Statistical significance was determined by one-way Analysis of Variance (ANOVA). The Tukey method was used to determine significant differences between means, with p<0.05 taken as the level of significance.
RESULTS AND DISCUSSION
The data displayed are the respondents' average answers in the organoleptic testing of 5 commercial VCO products circulating in Padang City (VA, VB, VC, VD and VE).
Based on Table 1, it can be seen that VA and VB have better color, aroma and taste characteristics than the other 3 products. The diverse organoleptic characteristics of these 5 commercial VCO products may be due to differences in processing methods (Anwar and Irmayanti 2020). VCO can be made by various methods such as controlled heating, fermentation (using microbes or enzymes), centrifugation, the "fishing" (inducement) method, and the addition of acetic acid. Each manufacturing process yields a different VCO quality (Setiaji and Prayugo 2006). In this study, 18 colonies were isolated; 17 VCO-LAB isolates were homofermentative and the other one was heterofermentative, as shown in Figure 3.
Qualitative analysis of GABA-producing VCO-LAB
VCO-LAB was qualitatively analyzed for its ability to produce GABA by the thin-layer chromatography method, with the stationary and mobile phases being silica gel 60 F254 TLC plates and a mixture of n-butanol:acetic acid:water (5:3:2), respectively. This method is widely used because it requires no expensive chemicals and no special pretreatment of samples. Some other qualitative analysis methods, such as the pH indicator method and the enzyme-based microtiter plate assay (EBMPA), are known to require long working times and expensive materials, such as GABase enzymes (Li and Cao 2010).
GABA, a non-protein amino acid resulting from the decarboxylation of glutamate catalyzed by the enzyme glutamate decarboxylase (GAD), can be produced by LAB because LAB has GAD enzyme activity in its cells. GAD, an intracellular enzyme of LAB, can form GABA from its substrate, glutamate or its salt. This study used monosodium glutamate (MSG) as the GAD enzyme substrate for producing GABA. Generally, LAB produces maximum GABA at the end of the stationary phase, when acidity and nutrient deficiencies affect metabolic activity. This acidic condition activates the enzyme GAD, which catalyzes the formation of GABA in the cytoplasm; GABA is then secreted into the culture medium as extracellular GABA (Diana et al. 2014). The GABA content of VCO-LAB in this study was therefore determined from the extracellular GABA in the VCO-LAB culture supernatant.
In this qualitative analysis, 18 isolates were cultivated in MRS Broth containing 1% MSG and GABA was determined in the culture supernatant using paper chromatography. Three LAB isolates (VB.3, VD and VE.4) were selected; the paper chromatogram can be seen in Figure 4. The chromatogram showed that the culture supernatants of the 3 selected isolates gave a more pronounced GABA spot than the other isolates' culture supernatants. A comparable result was obtained by Lee et al. (2010), who selected 22 LAB isolates, 6 of which showed the ability to produce GABA based on the clarity of the spot on the paper plate. Furthermore, the ability of the VB.3, VD and VE.4 isolates to produce GABA was determined semiquantitatively across variations in incubation time (24, 48 and 72 hours) and MSG concentration (1, 3, 5 and 7%) in MRS Broth by the paper chromatography pre-staining method.
Analysis of VCO-LAB GABA producer
GABA-producing VCO-LAB was quantitatively analyzed by the pre-staining paper chromatography method according to Li and Cao (2010). GABA concentrations in VCO-LAB cultures were calculated by substituting the measured absorbance into the regression equation of the GABA standard solution, y = 0.0464x + 0.0292 (R2 = 0.9974). The GABA produced by LAB isolates VB.3, VD and VE.4 under variations in incubation time and MSG concentration can be seen in Table 2. GABA concentrations continued to increase up to 72 hours of incubation and 7% MSG. Based on the growth curve in Figure 4, the isolates were still in the stationary phase until 72 hours, except the VE.4 isolate, which had already begun to enter the death phase. As mentioned earlier, GABA production is maximal at the end of the stationary phase, which explains why the highest GABA concentration was obtained at 72 hours. In addition to incubation time, MSG concentration also affected GABA production by LAB, due to the increase in substrate available to the cytoplasmic enzyme GAD.
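Concentration follows from inverting the standard-curve regression above; a minimal sketch (slope and intercept are taken from the text, while the absorbance value is a back-calculated illustration):

```python
# GABA standard curve from the paper: absorbance y = 0.0464*x + 0.0292
# (R^2 = 0.9974). Concentration x is recovered by inverting the regression.

SLOPE, INTERCEPT = 0.0464, 0.0292

def gaba_concentration(absorbance: float) -> float:
    """Invert the standard-curve regression to get GABA concentration (mg/mL)."""
    return (absorbance - INTERCEPT) / SLOPE

# On this curve, an absorbance of 0.9340 back-calculates to 19.5 mg/mL
# (the VB.3 maximum reported in the text):
print(round(gaba_concentration(0.9340), 2))  # → 19.5
```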
Based on Table 2, almost all VCO-LAB isolates showed a similar tendency in producing GABA, with the highest concentrations obtained after 72 hours of incubation. However, there was a decrease in GABA concentration in the VE.4 isolate after 72 hours of incubation. This may be due to the VE.4 isolate entering the death phase, in which cells quickly lose the ability to divide and die in a matter of hours.
The highest concentration of GABA was produced by the VB.3 isolate at 72 hours of incubation and 7% monosodium glutamate (MSG) in MRS Broth (19.5 mg/mL), which was significantly different from the GABA concentrations produced by the other isolates. This result is higher than those of L. brevis OPY-1 and L. brevis OPK-3 isolated from kimchi, with GABA concentrations of 0.825 mg/mL and 2.023 mg/mL; L. brevis BJ20 from seaweed fermentation, with a GABA concentration of 2.465 mg/mL (Lee et al. 2010); L. brevis IFO, with a GABA concentration of 1.049 mg/mL; and L. brevis CECT8183, isolated from Spanish-made cheeses, with a GABA concentration of 0.1 mg/mL. The high GABA concentration of VB.3 is due to the high activity of GAD enzymes in its cells. The VB.3 isolate cultured in MRS Broth with 7% MSG was carried forward for testing of its antioxidant activity.
GABA-producing VCO-LAB antioxidant activity against ABTS•+
The antioxidant activity of GABA-producing VCO-LAB in this method is based on its ability to donate electrons or hydrogen atoms to ABTS•+, which accepts them and becomes stable. The antioxidant activity of VCO-LAB cultures containing MSG in the growth medium is significantly higher than that of cultures without MSG (P<0.05, Figure 5). The 3 highest antioxidant activities of VCO-LAB were shown by VB.3+MSG (68.13%), VD+MSG (61.66%) and VD (52.89%), which are higher than those of Pediococcus pentosaceus R1 (Han et al. 2017) and LAB from kimchi, whose antioxidant activities against ABTS•+ were 42.4% and above 50%, respectively. Figure 5 shows that antioxidant activity in cultures with added MSG is higher than in cultures without MSG, indicating that the GABA produced in culture affects the antioxidant activity of VCO-LAB. However, cultures without MSG also showed high antioxidant activity (52.89-43.7%). The addition of MSG serves as the substrate for the GAD enzyme to produce GABA, and the highest antioxidant activity was produced by VCO-LAB with MSG added to the growth medium. Since the growth medium of VCO-LAB without MSG also showed antioxidant activity, it can be concluded that the antioxidant activity of VCO-LAB is derived not only from GABA but also from other metabolites such as peptides (Komatsuzaki et al. 2005) and antioxidant enzymes produced by LAB such as NADH-oxidase, superoxide dismutase, NADH peroxidase and non-heme catalase (Arasu et al. 2013).
Table 2 (data fragment). GABA concentrations (mg/mL) produced by the selected VCO-LAB isolates:
1.40 ± 1.14 a | 4.41 ± 2.05 abc | 9.01 ± 7.25 bc | 19.5 ± 2.80 d
0.58 ± 0.65 a | 1.37 ± 0.57 a | 4.75 ± 0.49 abc | 10.79 ± 1.47 c
0.44 ± 0.45 a | 0.58 ± 0.50 a | 1.66 ± 0.12 a | 3.39 ± 0.87 ab
Note: Data are shown as mean ± standard deviation of 3 repeated measurements of GABA concentration by the paper chromatography pre-staining method. Superscripts with different letters within a column indicate significantly different results (P<0.05).
Resistance of VCO-LAB, GABA producer, to hydrogen peroxide
The effect of hydrogen peroxide on the growth and viability of the 3 GABA-producing VCO-LAB isolates is shown in Figure 6. All VCO-LAB cultures showed moderate to strong resistance to the various concentrations of hydrogen peroxide compared to the control (medium without hydrogen peroxide). Figure 6 shows that the three isolates could survive in media containing 0.2-1.0 mM hydrogen peroxide, with the lowest OD of 0.82 after incubation for 8 hours. These results are similar to those of Lee et al. (2010), who reported that isolated Lactobacillus could survive in 1.0 mM hydrogen peroxide. Arasu et al. (2013) stated that Lactobacillus plantarum K49 could survive in a medium containing 0.2-0.8 mM hydrogen peroxide. Hydrogen peroxide is a weak radical, but it can produce hydroxyl radicals when it reacts with transition metal ions. Organisms can produce catalase to convert hydrogen peroxide into water and O2 gas. Although VCO-LAB is catalase-negative, some types of LAB can produce NADH-peroxidase enzymes that can convert H2O2 (Komatsuzaki et al. 2005).
To conclude, VB.3 possessed antioxidant activity against ABTS and hydroxyl radicals. The highest antioxidant activity was found with the addition of MSG (68.13% for ABTS and 88.02% for hydroxyl radicals). The VB.3 isolate was identified as L. fermentum KF7.
"year": 2022,
"sha1": "fbcd3db5b211e4cca674dfe1c06652f13516794b",
"oa_license": "CCBYNCSA",
"oa_url": "https://smujo.id/biodiv/article/download/10155/5623",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "57b2260b4d2bafe19245eea3a8f36d8d1829c71c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
The Influence of Income and Education on Saudi Dissatisfied Consumers Behaviour
The purpose of the study is to explore the differences and similarities in post-dissatisfaction behaviours between different levels of income and education among Saudi consumers. Few articles are currently available on post-dissatisfaction behaviour in growing markets such as the Arab countries, despite the fact that GCC economies like the United Arab Emirates and Saudi Arabia are rising rapidly as their shopping landscape incorporates Western forms of retail. The sample was drawn from Saudi individuals who had encountered a dissatisfying experience within a single type of retail outlet selling electrical goods. The survey instrument's constructs considered different consumer complaint behaviour variables. Data were analysed using descriptive statistics, t-tests, ANOVA, and chi-square analysis. The research addresses a literature gap and reveals a specific aspect of complaint behaviour related to income and educational levels. The quantitative study finds no differences in consumer complaint behaviour in terms of participants' education and income levels, with a single exception in the assessment of the chances of success with a complaint, where lower-educated participants were associated with a higher complaining disposition.
I. INTRODUCTION
Customer dissatisfaction is considered as an indicator of product or service failure. Consumers evaluate a product after purchase to determine if the product performance meets their expectations (Pride and Ferrell, 2000). The resulting outcome of satisfaction or dissatisfaction influences whether consumers complain, switch to a different supplier, or repurchase the product or brand from the same supplier. Customer dissatisfaction results in different responses, such as complaining or not (Day, 1984). Complaints involve the reaction of a consumer to service failure or to a product subsequently perceived to be unsatisfying (Butelli, 2007).
The loss of customers is a possible consequence of consumer dissatisfaction, as the consumer may change brand or supplier and tell their relatives and friends about their bad experience (Day and Ash, 1979). However, companies can retain customers by managing and handling the complaints they receive (Crie and Ladwein, 2002), highlighting the significance of service-failure recovery. The focus of this paper is to obtain a better understanding of the relation between demographic variables, such as income and education, and forms of consumer complaint behaviour (CCB) in Saudi Arabia. The significance of this paper lies in its contribution to the literature, adding to the limited knowledge that exists concerning differences in perceptions.
Identifying the different circumstances under which customers consider post-dissatisfaction action and, if they proceed, how they choose to complain will help marketers devise more effective complaint-management strategies. Identifying demographic differences and similarities in complaint behaviour will additionally benefit marketers in determining the degree of standardisation versus customisation required in their service-failure recovery.
The most common consumer complaints studied in this survey involved service failure in the Saudi consumer electronic retail sector. The bad experiences associated with this service failure resulted in consumer dissatisfaction. This paper is written to provide a better understanding of the impact of the income and education demographics of Saudi consumers on their post dissatisfaction behaviour in a market which ethnically differs from the western market.
The research context
What makes the Saudi Arabian market different from others is that the Islamic religion strongly affects Arab markets. In fact, its influence is much stronger in the Arab world than in nations with large Muslim populations such as Indonesia, India, and Bangladesh (Mahajan, 2013). Saudi Arabia is a conventional society which recognizes gender roles dissimilar to those of other countries. When females buy products, some are reluctant to ask for service or make a complaint to a man. The separation between males and females that was widespread in past decades has put the society and financial system of the country under pressure (AlMunajjed, 2010).
The Saudi retail market is growing strongly. Worldwide, Saudi Arabia is 14th among the top-ranked promising markets for global retail development according to A.T. Kearney, a global management consultancy (Mesbah, 2012). Alruwaigh (2010) argued that the consumer electrical goods markets in Saudi Arabia are marked by competition.
Although very little research has focused on consumer protection in the Saudi Arabian context (Habib, 1988), consumer-protection and complaint-handling issues are beginning to emerge as significant matters in Saudi Arabia.
II. LITERATURE REVIEW
Consumer complaint behaviour in this study refers to "an action taken by an individual that involves communicating something negative regarding a product or service to either the firm manufacturing or marketing that product or service, or to some third-party organizational entity" (Jacoby and Jaccard, 1981, p. 6). How this action occurs has been the subject of considerable research, with outcomes pointing to a variety of drivers and forms of complaints that restrict the ability to generalise conclusions. Day (1984, p. 496) argues that while complaining behaviour is a consequence of dissatisfaction, the diverse set of actions and the different personal or situational indicators seem to be unrelated to the degree of dissatisfaction. Bearden and Teel (1983) argue similarly, considering dissatisfaction an insufficient predictor of complaints. In common with Landon (1980), who suggested that dissatisfaction is a precursor to complaining, and Prakash (1991), who noted that the intensity of dissatisfaction plays a major role, this literature supports the approach chosen in this study of concentrating on post-purchase dissatisfaction resulting in the complaining behaviour of consumers.
Although a growing body of research has addressed the causes of CCB, part of the reasons for this growth is the failure to reach generalizable conclusions regarding CCB across product categories (Singh and Wilkes, 1996), across countries (Blodgett et al., 2006) as well as cultures (Hernandez et al., 1991;Ngai et al., 2007). The need to develop a country and product category-specific complaining behaviour model for Saudi Arabia is justified in terms of differences found by the previously cited researchers in addition to the dearth of prior research on consumer behaviour in Saudi Arabia, both in general (Morris and Al Dabbagh, 2004) and within this field. The research supports a lack of understanding of CCB determinants in this emerging Arab economy.
Research has reported that complainers tend to be younger, with better income and education than non-complainers (e.g. Barnes and Kelloway, 1980; Bearden and Mason, 1984; Singh, 1990). Previous research has taken the education and income variables together in studying complaining behaviour (Liefeld, Edgecombe and Wolfe, 1975). However, little is known about whether these education and income variables are related to each other in determining complaining actions in the context of the GCC (Gulf Cooperation Council) countries; for example, whether complainers are not only younger but also have higher levels of education. In summary, the research will advance current knowledge of the post-dissatisfaction behaviour and complaining actions of Saudi consumers.
The customer's decision to make a complaint varies according to the complexity of service, the perceived cost of the complaint, and the level of dissatisfaction (Bolfing 1989;Day and Landon 1977). Singh (1990a) found that rates of certain complaint behaviours, such as voicing complaints to a seller/manufacturer, choosing not to complain, and spreading dissatisfaction via word-of-mouth, vary significantly according to the nature of the service. In general, the importance of a product for everyday life significantly influences the action taken positively (Day and Landon 1977).
i. Complaint Type:
Research regarding CCB types in business disciplines covers several dimensions, including the business response and the interactions between customers and businesses (Butelli, 2007). Based on a large random-sample survey of American households, Singh (1990a) established a typology of customer dissatisfaction response styles, essentially grouping the primary characteristics of complainer types. Passives corresponded to non-complainers and were less likely to take action; voicers were consumers likely to voice their dissatisfaction and complain, seeking redress from the seller; irates engaged in negative word of mouth or switched providers in addition to complaining directly to the offending provider; and activists were inclined to complain to a third-party agency, not only to obtain redress but also to achieve social ends. Goetzinger (2007) provided the classification of complaints below, following Hirschman (1970) and Singh (1990a, 1991).
· Voicing a complaint to the retailer, for example verbally stating complaints regarding the product, service or performance disappointment.
· Private complaining, in which the complaint is spoken to family members, friends as well as acquaintances.
· Third-party complaining, in which the complaint is shared with other parties and spread by them.
· Combined complaining, in which complaints are expressed openly to the public through offline or online networks.
Other work provides alternative viewpoints. Day and Landon (1977) characterized complaining action as private, public, or no action taken. Broadbridge and Marshall (1995) examined this conceptual model and supported its categorizations. Public action entails either direct communication to a seller or indirect public action such as complaining through the media; private action includes boycotting or advising friends and family; the third category is taking no action. Broadbridge and Marshall argued that this approach has been widely accepted in the literature.
In this research, post-dissatisfaction behaviour was separated into the post-dissatisfaction actions of choosing not to complain or to complain, the latter including direct public complaint, indirect public complaint, or private complaint.
ii. Education and Income
Demographic factors such as education and socioeconomic level are particularly relevant to differences in the likelihood of complaint (Andreasen and Manning, 1990). A high level of income, education, or social involvement can increase a consumer's likelihood of complaining (Andreasen and Manning, 1990). In previous Western literature, both the income and education variables have been found to be associated with the likelihood of complaining (Liefeld, Edgecombe, and Wolfe, 1975; Miller, 1970; Pfaff and Blivice, 1977). At the same time, studies focusing on education have found a direct relationship between the level of education and complaining (Bearden and Mason, 1984; Day and Landon, 1977; Jacoby and Jaccard, 1981; Mayer and Morganosky, 1987). Davidow and Dacin (1997) also supported investigating several variables to explain complaint behaviour, including sex, age, education, income, and level of perceived dissatisfaction. Bearden (1983) argued that income, but not education, strongly affected complaining activities. In contrast, Warland, Hermann, and Moore (1984) found that education, and income to a lesser extent, are strong predictors of complaining. Bearden and Oliver (1985) also noted the significance of income for public complaint, but found no relation between income and complaining to a third party.
Differences in complaint behaviour may result due to consumers' socioeconomic status.
Vulnerable consumers, those who may be socially marginalised because of low income, discriminatory legislation, or belonging to a disadvantaged group, have been found to have different complaining behaviours compared to mainstream consumers. Andreasen and Manning (1990) found these marginalised populations were less prone to voice complaints, which the authors attributed to a lower level of perceived dissatisfaction and problems than other consumers. Mayer and Morganosky (1987) have suggested that successfully dealing with the complaints of higher-income, better-educated consumers takes on greater significance for the complaint decision. According to Mayer and Morganosky (1987), higher-income and better-educated consumers were considerably more likely than lower-income and less-educated customers to agree with the statement: "If I buy clothes I am not satisfied with, I take them back to the store and complain." This study therefore gathers information regarding the hypothesis that there is a relationship between an individual's likelihood of complaining and their educational attainment and income level.
This leads to the following null hypothesis: H1₀: There is no difference in type of complaint action according to education or income among Saudi consumers.
iii. Attitude
Consumers' attitude toward the act of complaining has been treated as a mediating variable (Day, 1984); that is, considerations about an unsatisfactory experience affect (strengthen or weaken) the consumer's attitude towards a specific action (to complain or not to complain). Attitude is an affect, or a feeling for or against. In high-involvement situations, beliefs predict attitudes (Mowen and Minor, 2000). The customer's attitude regarding whether a problem is their fault will affect their behaviour: if the problem is related to the product or service and has been caused by an outside or external force, they are more likely to express a complaint (Swartz and Iacobucci, 2000). Consumers who voice complaints are likely to have a positive attitude towards complaining (Richins, 1982; Singh, 1990b). Other work shows that consumers' attitudes towards complaining have been linked to demographic factors such as age, income, gender and educational level. Given this research, we propose the following null hypothesis: H2₀: There is no relationship between attitude/willingness to complain and the demographic variables of educational level and income level of Saudi consumers.
iv. Consumer Complaint Behaviour
Comprehensive models that seek to depict the decision-making process leading to CCB are few (Blodgett and Granbois, 1992; Day, 1984; Huppertz, 2003; Stephens and Gwinner, 1998). Day's (1984) model was chosen in this study as a vehicle to explain complaint behaviour differences among Saudi consumers. Day (1984) focused on the considerations of a dissatisfied customer leading to a decision either to complain or not to complain. Day's (1984) conceptual model offers a broad umbrella under which to examine the role of status differences in the complaining process. In the model, four antecedent variables (perceived significance of the consumption event; consumer knowledge and experience of the product and complaint process; perceived cost of complaint; and assessment of the likelihood that complaining will be successful) influence attitudes toward the act of complaining, which serve as the mediating variable that can lead to either complaining (in its various forms) or not complaining.
If the supplier fails to deliver on time, for instance, each consumer may perceive this failure differently, as different groups have different perceptions of time and may value the delay differently (Graham, 1981). Differences in post-dissatisfaction responses that may arise at each stage of the Day (1984) model may be attributable to personal demographic variables, cultural or environmental context, or the specific consumption situation. Personal issues can be broadly grouped in terms of demographics such as education and income level (Butelli, 2007; Singh and Howell, 1985). Previous literature suggests the significant influence of sex, age, income, and education on the consumer response to dissatisfaction (Singh and Howell, 1985). Building from the preceding discussion, the following four subhypotheses were formed to compare the CCB variables.
H3₀:There is no difference between Saudi participants according to education and/ or income level in terms of CCB variables such as (H3a) assessment of chances of success of complaining; (H3b) experience with, or knowledge with regard to, the product, consumer rights, or complaining;
III. RESEARCH METHODOLOGY
Non-probability sampling was used to select participants to represent the population of Saudi consumers. Although probability sampling is theoretically superior, it is far more difficult to use in a country like Saudi Arabia due to its conservative society and culture. This sampling method is also very challenging to apply to a female population sample, which makes non-probability sampling appealing (Onkvisit and Shaw, 2009).
A web-based survey was utilized. Online surveys offer a variety of advantages and benefits. One major advantage is that they are easily accessible anytime and anywhere. Additionally, online survey research takes advantage of the capacity of the Internet to grant access to different groups and individuals who may be very difficult to reach through other channels (Garton, Haythornthwaite and Wellman, 1999; Wellman, 1997); in this instance, Saudi women are more likely to respond to online surveys than to male data collectors. An online panel was utilized to recruit and select respondents to complete an online survey available in both English and Arabic. Subsequently, the final draft of the questionnaire, developed after the qualitative research by Badghish and Stanton (2015), was emailed to Saudi participants. The panel is part of a larger consumer panel organised by a Saudi market research agency, the Saudi Mandoob Agency (SMA), based in Riyadh. SMA developed its email list from community bulletin boards and web site recruitment.
The components of the instrument are explained and justified below. Section one of the questionnaire collected data on respondents' characteristics, such as age, income, gender, educational achievement, and geographic location within Saudi Arabia. The second section was based on the multi-item scale proposed in Day's (1984) model, which was also used by Blodgett and Granbois (1992), Davidow and Dacin (1997), Fernandes and Santos (2007), Hernandez et al. (1991), Huppertz (2003), Liu and Zhang (2008), Oh (2003), and Stephens and Gwinner (1998). The list of complaint actions was adopted from Hirschman (1970) and Singh (1991). The respondents' actions were used to categorize them as complainers or non-complainers. Those who made a direct public complaint, indirect public complaint, or private complaint were considered complainers; those who took no action at all were considered non-complainers.
Validity of the instrument is determined through reporting of prior determined validity from previous literature along with the use of exploratory factor analysis, which is used to support evidence of validity of the scaled construct items (Survey component questions 21-28). Reliability through assessment of the internal consistency of the scaled items is determined through use of Cronbach's alpha.
The computed values for each construct revealed satisfactory levels of reliability (> 0.6) (Nunnally and Bernstein, 1994). For this study, a sample of Saudi consumers participated in a survey on complaint behaviours. The majority of the sample (63.4%) are younger than 35 years, which was expected as they are the major customers of electronics retailers. The sample for the study was described according to (a) demographic characteristics (Table 1), (b) purchasing history (Table 2), and (c) complaint action (Table 3). The data demonstrated the sample to be diverse in terms of socioeconomic status, with a wide range of incomes.
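The Cronbach's alpha statistic used for the reliability assessment above can be sketched from item responses as follows; the Likert responses below are illustrative, not data from the survey:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per scale item (same respondents in each)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# Three Likert items answered by five hypothetical respondents:
items = [[3, 4, 5, 4, 3], [2, 4, 5, 5, 3], [3, 5, 4, 4, 2]]
print(cronbach_alpha(items) > 0.6)  # → True (would meet the study's 0.6 threshold)
```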
The purchasing history reported by participants was obtained in terms of purchase types and a rating of the overall experience. Through a filtering question, all respondents reported that they had experienced a dissatisfying situation with a purchase of the products, representing a complaining experience in the Saudi retail market.
As part of the survey responses, participants rated their overall experience with previous purchases of the products. Table 2 provides the descriptive statistics in terms of frequency data, mean, and standard deviation related to participants' overall experience with the purchases.
IV. RESULTS
To determine the difference, if any, in type of complaint action (complaint or noncomplaint) according to education or income among a sample of Saudi consumers (n = 254), a chi square analysis was conducted. Participant responses regarding complaint action were separated into post dissatisfaction action of choosing not to complain (noncomplainers) and choosing to complain (complainers), which included direct public complaint (to the seller or manufacturer), indirect public complaint (to a third party agency or media), or private complaint (to family or friends). This constructed dichotomous variable was then compared using a chi square crosstabulation with both the income and education levels of the participants. The crosstabulations revealed a non-significant chi square for both education level (chi square = 3.01, df = 3, p = .390) and income level (chi square = .704, df = 4, p = .951). Thus, the null hypothesis was retained, supporting no difference in complaint type according to various income and education levels.
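The chi-square crosstabulation described above can be sketched as follows; the contingency counts are illustrative, not the study's data (only the sample size n = 254 and the degrees of freedom are matched to the text):

```python
# Pearson chi-square test of independence on an r x c contingency table.

def chi_square_statistic(table):
    """Return the Pearson chi-square statistic for a table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand  # under independence
            stat += (observed - expected) ** 2 / expected
    return stat

observed = [[40, 55, 60, 45],   # complainers across four education levels
            [12, 15, 16, 11]]   # non-complainers (all counts sum to n = 254)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # df = 3, as reported
print(chi_square_statistic(observed), df)
```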
To address the second hypothesis and determine whether a relationship exists between attitude/willingness to complain and the demographic variables of educational level and income level of Saudi consumers, a one-way ANOVA was conducted. Normal distribution of the sample (n = 254) was assumed due to the large sample size and the assumptions of the central limit theorem (Ott and Longnecker, 2010; Robertson, 2002). Looking more closely at the results for Hypothesis 3a (table 4), the analysis revealed the significant difference to be between those who had some high school education and those who had gained at least a diploma (p = .033).
To further examine this hypothesis, a dichotomous variable was constructed to differentiate groups defined by an education level of some high school or less and education of a diploma or more. Using this newly constructed variable as the group defining variable in an independent samples t-test, the results indicated a significant difference (p = .009) between groups in terms of their assessment of the chances of success in their complaint. The mean score for the lower educated group was 24.42 (SD = 12.08) compared to the higher education group (M = 19.44, SD = 10.21). Thus, the lower education group rated their chances of success higher than the more educated group.
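The independent-samples comparison can be illustrated as follows. The two samples are synthetic draws centred on the reported group means and standard deviations, not the study's data, so the resulting statistics will not reproduce the reported p = .009.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
# Synthetic samples centred on the reported group statistics:
# low education: M = 24.42, SD = 12.08; high education: M = 19.44, SD = 10.21.
# Group sizes are hypothetical (the paper does not report the split).
low_edu = rng.normal(24.42, 12.08, size=80)
high_edu = rng.normal(19.44, 10.21, size=174)

t_stat, p_value = ttest_ind(low_edu, high_edu, equal_var=False)  # Welch's t
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```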
Testing the Influence of Education and Income Variables on Post Dissatisfaction Action
Using the scaled score results (Likert-scaled responses) for the likelihood of complaint versus non complaint, the participants' scaled scores were compared using ANOVA to the categorical variables of income and education level. Results again failed to demonstrate a relationship between the two variables of income and education level and complaint action (p = .214 complainer score and p = .874 noncomplainer score for education; p = .564 complainers and p = .608 noncomplainers for income). Based on the results of the previous sections, classifying the education variable into low education (having some high school education or less) and high education (diploma and above) groups of participants, an independent samples t-test was used to assess the mean differences in complaint action scores by education level.
The mean difference in likelihood to complain between the low education level (high school) and high education level (diploma or higher) was not significant (mean difference = 0.05, p = .583) and the mean difference in likelihood of noncomplaint was only 0.129 (p = .581).
Similarly, for income level, the variable was classified as lower income (< SR 6000) and higher income (> SR 8000). Those of the lower income bracket demonstrated a lower likelihood to complain mean score than the higher income bracket by 0.11 (p = .123) and a higher mean score of likelihood of noncomplaint than the higher income group with a difference of 0.19 (p = .331). However, these results failed to demonstrate any statistical significance between group differences for income and education level groups in terms of complaint action.
V. DISCUSSION AND CONCLUSION
This study examined complaint and noncomplaint behaviours among a sample of Saudi consumers as related to the participants' education and income levels. The results of the analysis supported the null hypothesis for all hypotheses with the exception of hypothesis 3a under Research Question 3. Therefore, as a whole, no differences in consumer complaint behaviour were noted in terms of the education and income levels of participants, with a single exception.
The only significant finding related to complaint behaviour differences by education and income levels was a statistically significant difference in the assessment of the chances of success with a complaint. For this variable, participants without a diploma scored significantly higher in their assessment of the chances for success of complaining compared to the more educated group (with at least a diploma). Consumers with low education thus showed a tendency to complain, and managers need to pay attention to this when dealing with that segment. Companies should acknowledge that serving this segment may involve higher costs compared to other segments, as a higher number of employees may be needed to handle their complaints. Most of them complain to sellers, and to family and friends.
In contrast to previous research that has reported those who complain to have higher income and education compared to non-complainers (Barnes and Kelloway, 1980; Bearden and Mason, 1984; Singh, 1990; Warland et al., 1985), as well as research indicating that a high level of income, education, or social involvement can increase a consumer's likelihood of complaining (Andreasen and Manning, 1990), these variables were not shown to be related to complaint actions. The results pointing to major differences in the assessment of the chances of success by educational level may demonstrate an impact on the decision to complain, particularly among consumers with less than a diploma, who may overestimate their success in complaining, which may contribute to an increased frequency of complaining. Studies focusing on education have found a direct relationship between the level of education and complaining (Bearden and Mason, 1984; Day and Landon, 1977; Jacoby and Jaccard, 1981; Mayer and Morganosky, 1987). Therefore, there is a need for additional research on this variable.

This study was limited by self-report data obtained from a sample of Saudi consumers in a single type of retail electronics store. In addition, the number of noncomplainers in this study (n = 36) was significantly lower than the number of complainers (n = 218). For this reason, the scale-scored data on likelihood of complaining or not complaining was used to provide scored data on the likelihood of complaint and noncomplaint for all participants (n = 254). The results of this study point to the need for further research. Future studies should be considered in order to compare how education levels of consumers may affect their perceptions and assessments of the benefits and risks of complaining and, ultimately, their decision to complain or not to complain. As pointed out in the literature (Liu and McClure, 2001), cultural research may also help to uncover further complaint behaviour differences.
Different ethnicities or nationalities would provide useful demonstrations in a variety of settings, suggesting a more universal CCB model for use in analysing cross-cultural consumer complaint behaviours.
"year": 2016,
"sha1": "b354ded97e996d03beaa76a8544203d51ad0eb92",
"oa_license": null,
"oa_url": "https://journals.qu.edu.qa/index.php/SBE/article/download/621/296",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "12f33b021435e090c1773f9122b30eee39fdad49",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
On Quantum Detection and the Square-Root Measurement
In this paper we consider the problem of constructing measurements optimized to distinguish between a collection of possibly non-orthogonal quantum states. We consider a collection of pure states and seek a positive operator-valued measure (POVM) consisting of rank-one operators with measurement vectors closest in squared norm to the given states. We compare our results to previous measurements suggested by Peres and Wootters [Phys. Rev. Lett. 66, 1119 (1991)] and Hausladen et al. [Phys. Rev. A 54, 1869 (1996)], where we refer to the latter as the square-root measurement (SRM). We obtain a new characterization of the SRM, and prove that it is optimal in a least-squares sense. In addition, we show that for a geometrically uniform state set the SRM minimizes the probability of a detection error. This generalizes a similar result of Ban et al. [Int. J. Theor. Phys. 36, 1269 (1997)].
Introduction
Suppose that a transmitter, Alice, wants to convey classical information to a receiver, Bob, using a quantum-mechanical channel. Alice represents messages by preparing the quantum channel in a pure quantum state drawn from a collection of known states. Bob detects the information by subjecting the channel to a measurement in order to determine the state prepared. If the quantum states are mutually orthogonal, then Bob can perform an optimal orthogonal (von Neumann) measurement that will determine the state correctly with probability one [1]. The optimal measurement consists of projections onto the given states. However, if the given states are not orthogonal, then no measurement will allow Bob to distinguish perfectly between them. Bob's problem is therefore to construct a measurement optimized to distinguish between non-orthogonal pure quantum states.
We may formulate this problem as a quantum detection problem, and seek a measurement that minimizes the probability of a detection error, or more generally, minimizes the Bayes cost.
Necessary and sufficient conditions for an optimum measurement minimizing the Bayes cost have been derived [2,3,4]. However, except in some particular cases [4,5,6,7], obtaining a closed-form analytical expression for the optimal measurement directly from these conditions is a difficult and unsolved problem. Thus in practice, iterative procedures minimizing the Bayes cost [8] or ad-hoc suboptimal measurements are used.
In this paper we take an alternative approach of choosing a different optimality criterion, namely a squared-error criterion, and seeking a measurement that minimizes this criterion. It turns out that the optimal measurement for this criterion is the "square-root measurement" (SRM), which has previously been proposed as a "pretty good" ad-hoc measurement [9,10].
This work was originally motivated by the problems studied by Peres and Wootters in [11] and by Hausladen et al. in [10]. Peres and Wootters [11] consider a source that emits three two-qubit states with equal probability. In order to distinguish between these states, they propose an orthogonal measurement consisting of projections onto measurement vectors "close" to the given states. Their choice of measurement results in a high probability of correctly determining the state emitted by the source, and a large mutual information between the state and the measurement outcome.
However, they do not explain how they construct their measurement, and do not prove that it is optimal in any sense. Moreover, the measurement they propose is specific to the problem that they pose; they do not describe a general procedure for constructing an orthogonal measurement with measurement vectors close to given states. They also remark that improved probabilities might be obtained by considering a general positive operator-valued measure (POVM) [12] consisting of positive Hermitian operators Π_i satisfying Σ_i Π_i = I, where the operators Π_i are not required to be orthogonal projection operators as in an orthogonal measurement.
Hausladen et al. [10] consider the general problem of distinguishing between an arbitrary set of pure states, where the number of states is no larger than the dimension of the space U they span. They describe a procedure for constructing a general "decoding observable", corresponding to a POVM consisting of rank-one operators that distinguishes between the states "pretty well"; this measurement has subsequently been called the square-root measurement (SRM) (see e.g., [13,14,15]). However, they make no assertion of (non-asymptotic) optimality. Although they mention the problem studied by Peres and Wootters in [11], they make no connection between their measurement and the Peres-Wootters measurement.
The SRM [7,9,10,13,14,15] has many desirable properties. Its construction is relatively simple; it can be determined directly from the given collection of states; it minimizes the probability of a detection error when the states exhibit certain symmetries [7]; it is "pretty good" when the states to be distinguished are equally likely and almost orthogonal [9]; and it is asymptotically optimal [10]. Because of these properties, the SRM has been employed as a detection measurement in many applications (see e.g., [13,14,15]). However, apart from some particular cases mentioned above [7], no assertion of (non-asymptotic) optimality is known for the SRM.
In this paper we systematically construct detection measurements optimized to distinguish between a collection of quantum states. Motivated by the example studied by Peres and Wootters [11], we consider pure-state ensembles and seek a POVM consisting of rank-one positive operators with measurement vectors that minimize the sum of the squared norms of the error vectors, where the ith error vector is defined as the difference between the ith state vector and the ith measurement vector. We refer to the optimizing measurement as the least-squares measurement (LSM). We then generalize this approach to allow for unequal weighting of the squared norms of the error vectors.
This weighted criterion may be of interest when the given states have unequal prior probabilities.
We refer to the resulting measurement as the weighted least-squares measurement (WLSM). We show that the SRM coincides with the LSM when the prior probabilities are equal, and with the WLSM otherwise (if the weights are proportional to the square roots of the prior probabilities).
We then consider the case in which the collection of states has a strong symmetry property called geometric uniformity [16]. We show that for such a state set the SRM minimizes the probability of a detection error. This generalizes a similar result of Ban et al. [7].
The organization of this paper is as follows. In Section 2 we formulate our problem and present our main results. In Section 3 we construct a measurement consisting of rank-one operators with measurement vectors closest to a given collection of states in the least-squares sense. In Section 4 we construct the optimal orthogonal LSM. Section 5 generalizes these results to allow for weighting of the squared norms of the error vectors, and Section 6 gives an example illustrating the LSM and the WLSM. In Section 7 we discuss the relationships between our results and the previous results of Peres and Wootters [11] and Hausladen et al. [10]. We obtain a new characterization of the SRM, and summarize the properties of the SRM that follow from this characterization. In Section 8 we discuss connections between the SRM and the measurement minimizing the probability of a detection error (MPEM). We show that for a geometrically uniform state set the SRM is equivalent to the MPEM. We will consistently use [10] as our principal reference on the SRM.
Problem Statement and Main Results
In this section, we formulate our problem and describe our main results.
Problem Formulation
Assume that Alice conveys classical information to Bob by preparing a quantum channel in a pure quantum state drawn from a collection of given states {|φ i }. Bob's problem is to construct a measurement that will correctly determine the state of the channel with high probability. Therefore, let {|φ i } be a collection of m ≤ n normalized vectors |φ i in an n-dimensional complex Hilbert space H. In general these vectors are non-orthogonal and span an r-dimensional subspace U ⊆ H. The vectors are linearly independent if r = m.
For our measurement, we restrict our attention to POVMs consisting of m rank-one operators of the form Π_i = |µ_i⟩⟨µ_i| with measurement vectors |µ_i⟩ ∈ U. We do not require the vectors |µ_i⟩ to be orthogonal or normalized. However, to constitute a POVM the measurement vectors must satisfy

Σ_{i=1}^m |µ_i⟩⟨µ_i| = P_U,    (1)

where P_U is the projection operator onto U; i.e., the operators Π_i must be a resolution of the identity on U.¹ We seek the measurement vectors |µ_i⟩ such that one of the following quantities is minimized:

1. Squared error E = Σ_{i=1}^m ⟨e_i|e_i⟩, where |e_i⟩ = |φ_i⟩ − |µ_i⟩ is the ith error vector;

2. Weighted squared error E_w = Σ_{i=1}^m w_i ⟨e_i|e_i⟩ for a given set of positive weights w_i.

¹ Often these operators are supplemented by a projection Π_0 = P_{U⊥} = I_H − P_U onto the orthogonal subspace U⊥ ⊆ H, so that Σ_{i=0}^m Π_i = I_H; i.e., the augmented POVM is a resolution of the identity on H. However, if the state vectors are confined to U, then the probability of this additional outcome is 0, so we omit it.
Main Results
If the states |φ i are linearly independent (i.e., if r = m), then the optimal solutions to problems (1) and (2) are of the same general form. We express this optimal solution in different ways.
In particular, we find that the optimal solution is an orthogonal measurement and not a general POVM.
If r < m, then the solution to problem (1) still has the same general form. We show how it can be realized as an orthogonal measurement in an m-dimensional space. This orthogonal measurement is just a realization of the optimal POVM in a larger space than U , along the lines suggested by Neumark's theorem [12], and it furnishes a physical interpretation of the optimal POVM.
We define a geometrically uniform (GU) state set as a collection of vectors S = {|φ i = U i |φ , U i ∈ G}, where G is a finite abelian (commutative) group of m unitary matrices U i , and |φ is an arbitrary state. We show that for such a state set the SRM minimizes the probability of a detection error.
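As a concrete (hypothetical) illustration of geometric uniformity, the sketch below builds a GU state set from a cyclic group of planar rotations and checks the characteristic symmetry of its Gram matrix; the group, the seed state, and the dimension are illustrative choices, not taken from the paper.

```python
import numpy as np

# A small geometrically uniform (GU) state set: |phi_i> = Q^i |phi> for the
# cyclic (hence abelian) group {Q^0, ..., Q^{m-1}} with Q^m = I. Both the
# rotation Q and the seed state |phi> are illustrative choices.
m = 4
theta = 2.0 * np.pi / m
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
phi = np.array([1.0, 0.3])
phi /= np.linalg.norm(phi)

Phi = np.column_stack([np.linalg.matrix_power(Q, i) @ phi for i in range(m)])
S = Phi.T @ Phi  # Gram matrix (real states, so no conjugation needed)

# For a GU set, <phi_i|phi_j> = <phi|Q^(j-i)|phi> depends only on (j - i) mod m,
# so the Gram matrix is circulant.
for i in range(m):
    for j in range(m):
        assert np.isclose(S[i, j], S[0, (j - i) % m])
print("Gram matrix is circulant:\n", np.round(S, 4))
```

The circulant Gram matrix is the symmetry that later makes the SRM optimal for GU state sets.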
Using these results, we can make the following remarks about [11] and the SRM [10]: 1. The Peres-Wootters measurement is optimal in the least-squares sense and is equal to the SRM (strangely, this was not noticed in [10]); it also minimizes the probability of a detection error.
2. The SRM proposed by Hausladen et al. [10] minimizes the squared error. It may always be chosen as an orthogonal measurement equivalent to the optimal measurement in the linearly independent case. Further properties of the SRM are summarized in Theorem 3 (Section 7).
Least-Squares Measurement
Our objective is to construct a POVM with measurement vectors |µ_i⟩, optimized to distinguish between a collection of m pure states |φ_i⟩ that span a space U ⊆ H. A reasonable approach is to find a set of vectors |µ_i⟩ ∈ U that are "closest" to the states |φ_i⟩ in the least-squares sense. Thus our measurement consists of m rank-one positive operators of the form Π_i = |µ_i⟩⟨µ_i|. The measurement vectors |µ_i⟩ are chosen to minimize the squared error E, defined by

E = Σ_{i=1}^m ⟨e_i|e_i⟩,    (2)

where |e_i⟩ denotes the ith error vector

|e_i⟩ = |φ_i⟩ − |µ_i⟩,    (3)

subject to the constraint (1); i.e., the operators Π_i must be a resolution of the identity on U.
If the vectors |φ_i⟩ are mutually orthonormal, then the solution to (2) satisfying the constraint (1) is simply |µ_i⟩ = |φ_i⟩. To derive the solution in the general case where the vectors |φ_i⟩ are not orthonormal, denote by M and Φ the n × m matrices whose columns are the vectors |µ_i⟩ and |φ_i⟩, respectively. The squared error E of (2)-(3) may then be expressed in terms of these matrices as

E = Tr((M − Φ)*(M − Φ)) = Tr((M − Φ)(M − Φ)*),    (4)

where Tr(·) and (·)* denote the trace and the Hermitian conjugate respectively, and the second equality follows from the identity Tr(AB) = Tr(BA) for all matrices A, B. The constraint (1) may then be restated as

M M* = P_U.    (5)
The Singular Value Decomposition
The least-squares problem of (4) seeks a measurement matrix M that is "close" to the matrix Φ. If the two matrices are close, then we expect that the underlying linear transformations they represent will share similar properties. We therefore begin by decomposing the matrix Φ into elementary matrices that reveal these properties via the singular value decomposition (SVD) [17].
The SVD is known in quantum mechanics, but possibly not very well known. It has sometimes been presented as a corollary of the polar decomposition (e.g., in Appendix A of [18]). We present here a brief derivation based on the properties of eigendecompositions, since the SVD can be interpreted as a sort of "square root" of an eigendecomposition.
Theorem 1 (Singular value decomposition (SVD)) Let Φ be an arbitrary n × m complex matrix of rank r. Then Φ may be factored as Φ = U Σ V*, where:

(a) the σ_i > 0, 1 ≤ i ≤ r, are the singular values of Φ, with σ_i² the nonzero eigenvalues of both the m × m Hermitian matrix S = Φ*Φ and the n × n Hermitian matrix T = ΦΦ*;

(b) the |v_i⟩ and |u_i⟩ are corresponding orthonormal eigenvectors of S and T, respectively;

(c) Σ is a diagonal n × m matrix whose first r diagonal elements are σ_i and whose remaining diagonal elements are 0, so Σ*Σ is a diagonal m × m matrix with diagonal elements σ_i² for 1 ≤ i ≤ r and 0 otherwise, and ΣΣ* is a diagonal n × n matrix with diagonal elements σ_i² for 1 ≤ i ≤ r and 0 otherwise;

(d) V is an m × m unitary matrix whose first r columns are the eigenvectors |v_i⟩, which span a subspace V ⊆ C^m, and whose remaining m − r columns span the orthogonal complement V⊥ ⊆ C^m;

(e) U is an n × n unitary matrix whose first r columns are the eigenvectors |u_i⟩, which span the subspace U ⊆ H, and whose remaining n − r columns span the orthogonal complement U⊥ ⊆ H.
Since U is unitary, we have not only U * U = I H , which implies that the vectors |u k ∈ H are orthonormal, u k |u j = δ kj , but also that U U * = I H , which implies that the rank-one projection operators |u k u k | are a resolution of the identity, k |u k u k | = I H . Similarly the vectors |v k ∈ C m are orthonormal and k |v k v k | = I m . These orthonormal bases for H and C m will be called the U -basis and the V -basis, respectively. The first r vectors of the U -basis and the V -basis span the subspaces U and V, respectively. Thus we refer to the set of vectors {|u k , 1 ≤ k ≤ r} as the U -basis, and to the set {|v k , 1 ≤ k ≤ r} as the V-basis.
The matrix Φ may be viewed as defining a linear transformation Φ : C^m → H according to |v⟩ → Φ|v⟩. The SVD allows us to interpret this map as follows. A vector |v⟩ ∈ C^m is first resolved into its components ⟨v_i|v⟩ along the V-basis; each component is then scaled by the corresponding singular value σ_i and attached to the basis vector |u_i⟩ of H, so that

Φ = Σ_{i=1}^r σ_i |u_i⟩⟨v_i|.

Similarly, the Hermitian conjugate matrix Φ* defines the adjoint linear transformation Φ* : H → C^m, with

Φ* = Σ_{i=1}^r σ_i |v_i⟩⟨u_i|.

The key element in these maps is the "transjector" (partial isometry) |u_i⟩⟨v_i|, which maps the rank-one eigenspace of S generated by |v_i⟩ into the corresponding eigenspace of T generated by |u_i⟩, and the adjoint transjector |v_i⟩⟨u_i|, which performs the inverse map.
The Least-Squares POVM
The SVD of Φ specifies orthonormal bases for V and U such that the linear transformations Φ and Φ* map one basis to the other with appropriate scale factors. Thus, to find an M close to Φ we need to find a linear transformation M that performs a map similar to Φ.

Employing the SVD Φ = U Σ V*, we rewrite the squared error E of (4) as

E = Σ_{i=1}^n ⟨d_i|d_i⟩,    (8)

where |d_i⟩ = (M − Φ)*|u_i⟩.

The vectors {|u_i⟩, 1 ≤ i ≤ r} form an orthonormal basis for U. Therefore, the projection operator onto U is given by

P_U = Σ_{i=1}^r |u_i⟩⟨u_i|.

Essentially, we want to construct a map M* such that the images of the maps defined by Φ* and M* are as close as possible in the squared norm sense, subject to the constraint

M M* = P_U.    (9)

The SVD of Φ* is given by Φ* = V Σ* U*, so that Φ*|u_i⟩ = σ_i|v_i⟩ for 1 ≤ i ≤ r and Φ*|u_i⟩ = |0⟩ for r + 1 ≤ i ≤ n, where |0⟩ denotes the zero vector. Denoting the image of |u_i⟩ under M* by |a_i⟩ = M*|u_i⟩, for any choice of M satisfying the constraint (9) we have

⟨a_i|a_j⟩ = ⟨u_i|M M*|u_j⟩ = ⟨u_i|P_U|u_j⟩ = δ_ij, 1 ≤ i, j ≤ r,    (10)

and

|a_i⟩ = M* P_U |u_i⟩ = |0⟩, r + 1 ≤ i ≤ n.    (11)

Thus the vectors |a_i⟩, 1 ≤ i ≤ r, are mutually orthonormal and |a_i⟩ = |0⟩, r + 1 ≤ i ≤ n.

Combining (10) and (11), we may express |d_i⟩ as

|d_i⟩ = |a_i⟩ − σ_i|v_i⟩, 1 ≤ i ≤ r, and |d_i⟩ = |0⟩, r + 1 ≤ i ≤ n.    (12)

Our problem therefore reduces to finding a set of r orthonormal vectors |a_i⟩ that minimize

E = Σ_{i=1}^r ⟨d_i|d_i⟩ = Σ_{i=1}^r (1 + σ_i² − 2σ_i Re⟨v_i|a_i⟩).

Since Re⟨v_i|a_i⟩ ≤ 1, with equality if and only if |a_i⟩ = |v_i⟩, and the vectors |v_i⟩ are orthonormal, the minimizing vectors are |a_i⟩ = |v_i⟩, 1 ≤ i ≤ r. Thus the optimal measurement matrix M, denoted by M̂, satisfies M̂*|u_i⟩ = |v_i⟩ for 1 ≤ i ≤ r, so that

M̂ = Σ_{i=1}^r |u_i⟩⟨v_i|.    (15)

In other words, the optimal M̂ is just the sum of the r transjectors of the map Φ.

We may express M̂ in matrix form as

M̂ = U Z_r V*,    (16)

where Z_r, 1 ≤ r ≤ m, is the n × m matrix defined by

(Z_r)_{ii} = 1 for 1 ≤ i ≤ r, and (Z_r)_{ij} = 0 otherwise.    (17)

The residual squared error is then

E_min = Σ_{i=1}^r (σ_i − 1)² = r − 2 Σ_{i=1}^r σ_i + Tr(S).    (18)

If the vectors |φ_i⟩ are normalized, then the diagonal elements of S are all equal to 1, so Tr(S) = m. Therefore,

E_min = m + r − 2 Σ_{i=1}^r σ_i.

Note that Σ_j |u_j⟩⟨v_j| = Σ_j σ_j^{-1} |u_j⟩⟨u_j| Φ; since the projections onto the eigenspaces of T = ΦΦ* are independent of the particular choice of the eigenvectors {|u_j⟩}, the optimal measurement is unique.
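A minimal numerical sketch of this construction, assuming a small illustrative state set (not from the paper): the LSM matrix is assembled as the sum of transjectors from the SVD, and the resolution-of-the-identity constraint and residual error are checked.

```python
import numpy as np

# Sketch of the LSM: M_hat = sum_i |u_i><v_i| = U Z_r V*, built from the
# SVD Phi = U Sigma V*. The three normalized states (columns of Phi) are an
# illustrative, linearly dependent choice (r = 2 < m = 3).
Phi = np.array([[1.0, 0.8, 0.0],
                [0.0, 0.6, 1.0],
                [0.0, 0.0, 0.0]])

U, s, Vh = np.linalg.svd(Phi)
r = int(np.sum(s > 1e-12))  # rank of Phi

# Sum of the r transjectors |u_i><v_i| (rows of Vh are the <v_i|).
M_hat = sum(np.outer(U[:, i], Vh[i, :]) for i in range(r))

# Constraint check: M_hat M_hat* equals the projector onto U = range(Phi).
P_U = U[:, :r] @ U[:, :r].conj().T
assert np.allclose(M_hat @ M_hat.conj().T, P_U)

# Residual squared error E_min = sum_i (sigma_i - 1)^2 for normalized states.
E_min = float(np.sum((s[:r] - 1.0) ** 2))
print("r =", r, " E_min =", round(E_min, 4))
```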
In Appendix A we discuss some of the properties of the residual squared error E min .
Orthogonal Least-Squares Measurement
In the previous section we sought the POVM consisting of rank-one operators that minimizes the least-squares error. We may similarly seek the optimal orthogonal measurement of the same form.
We will explore the connection between the resulting optimal measurements both in the case of linearly independent states |φ i (r = m), and in the case of linearly dependent states (r < m).
Linearly independent states: If the states |φ_i⟩ are linearly independent and consequently Φ has full column rank (i.e., r = m), then (16) reduces to

M̂ = Σ_{i=1}^m |u_i⟩⟨v_i|.

The optimal measurement vectors |μ̂_i⟩ are mutually orthonormal, since their Gram matrix is

M̂* M̂ = Σ_{i=1}^m |v_i⟩⟨v_i| = I_m.

Thus, the optimal POVM is in fact an orthogonal measurement corresponding to projections onto a set of mutually orthonormal measurement vectors, which must of course be the optimal orthogonal measurement as well.
Linearly dependent states: If the vectors |φ_i⟩ are linearly dependent, so that the matrix Φ does not have full column rank (i.e., r < m), then the m measurement vectors |μ̂_i⟩ cannot be mutually orthonormal since they span an r-dimensional subspace. We therefore seek the orthogonal measurement M that minimizes the squared error E given by (4), subject to the orthonormality constraint

M* M = I_m.

In the previous section the constraint was on M M*. Here the constraint is on M* M, so we now write the squared error E as

E = Σ_{i=1}^m ⟨d_i|d_i⟩, where |d_i⟩ = (M − Φ)|v_i⟩,

and where the columns |v_i⟩ of V form the V-basis in the SVD of Φ. Essentially, we now want the images of the maps defined by Φ and M to be as close as possible in the squared norm sense.
The SVD of Φ is given by Φ = U Σ V*. Thus Φ|v_i⟩ = σ_i|u_i⟩ for 1 ≤ i ≤ r and Φ|v_i⟩ = |0⟩ for r + 1 ≤ i ≤ m. Denoting the image of |v_i⟩ under M by |b_i⟩ = M|v_i⟩, our problem therefore reduces to finding a set of orthonormal vectors |b_i⟩ that minimize

E = Σ_{i=1}^r ⟨d_i|d_i⟩ + Σ_{i=r+1}^m ⟨b_i|b_i⟩, with |d_i⟩ = |b_i⟩ − σ_i|u_i⟩.

Since the vectors |u_i⟩ are orthonormal, the minimizing vectors are |b_i⟩ = |u_i⟩, 1 ≤ i ≤ r. We may choose the remaining vectors |b_i⟩, r + 1 ≤ i ≤ m, arbitrarily, as long as the resulting m vectors |b_i⟩ are mutually orthonormal. This choice will not affect the residual squared error. A convenient choice is |b_i⟩ = |u_i⟩, r + 1 ≤ i ≤ m. This results in an optimal measurement matrix denoted by M̃, namely

M̃ = Σ_{i=1}^m |u_i⟩⟨v_i|.

We may express M̃ in matrix form as

M̃ = U Z_m V*,

where Z_m is given by (17) with r = m.
The residual squared error is then

Ẽ_min = E_min + (m − r),

where E_min is given by (18).
Evidently, the optimal orthogonal measurement is not strictly unique. However, its action in the subspace U spanned by the vectors |φ i and the resultingẼ min are unique.
The Optimal Measurement and Neumark's Theorem
We now try to gain some insight into the orthogonal measurement. Our problem is to find a set of measurement vectors that are as close as possible to the states |φ i , where the states lie in an r-dimensional subspace U . When r = m we showed that the optimal measurement vectors |μ i are mutually orthonormal. However, when r < m, there are at most r orthonormal vectors in U .
Therefore, imposing an orthogonality constraint forces the optimal orthonormal measurement vectors |μ̃_i⟩ to lie partly in the orthogonal complement U⊥. The corresponding measurement consists of projections onto m orthonormal measurement vectors, where each vector has a component in U and a component in U⊥. We may express M̃ in terms of these components as

M̃ = M̃_U + M̃_{U⊥},    (27)

where the columns |μ̃_i^U⟩ and |μ̃_i^{U⊥}⟩ of M̃_U and M̃_{U⊥} are the components of the |μ̃_i⟩ in U and U⊥, respectively. From (27) it then follows that

M̃_U = P_U M̃ = Σ_{i=1}^r |u_i⟩⟨v_i|.    (31)

Comparing (31) with (15), we conclude that M̃_U = M̂ and therefore |μ̃_i^U⟩ = |μ̂_i⟩. Thus, although the orthonormal vectors |μ̃_i⟩ are not themselves the measurement vectors of the optimal POVM, their projections onto U are. Essentially, the optimal orthogonal measurement seeks m orthonormal measurement vectors |μ̃_i⟩ whose projections onto U are as close as possible to the m states |φ_i⟩. We now see that these projections are the measurement vectors |μ̂_i⟩ of the optimal POVM. If we consider only the components of the measurement vectors that lie in U, then Ẽ_min reduces to E_min.

Indeed, Neumark's theorem [12] shows that our optimal orthogonal measurement is just a realization of the optimal POVM. This theorem guarantees that any POVM with measurement operators of the form Π_i = |µ_i⟩⟨µ_i| may be realized by a set of orthogonal projection operators Π'_i in an extended space such that Π_i = P Π'_i P, where P is the projection operator onto the original smaller space. Denoting by Π̂_i and Π̃_i the optimal rank-one operators |μ̂_i⟩⟨μ̂_i| and |μ̃_i⟩⟨μ̃_i| respectively, (31) asserts that Π̂_i = P_U Π̃_i P_U. Thus the optimal orthogonal measurement is a set of m projection operators in H that realizes the optimal POVM in the r-dimensional space U ⊆ H. This furnishes a physical interpretation of the optimal POVM. The two measurements are equivalent on the subspace U.
We summarize our results regarding the LSM in the following theorem:

Theorem 2 (Least-squares measurement (LSM)) Let {|φ_i⟩} be a set of m vectors that span an r-dimensional subspace U of an n-dimensional complex Hilbert space H, and let {|μ̂_i⟩} be the optimal m measurement vectors that minimize the least-squares error defined by (2)-(3), subject to the constraint (1). Let Φ = U Σ V* be the rank-r n × m matrix whose columns are the vectors |φ_i⟩, and let M̂ be the n × m measurement matrix whose columns are the vectors |μ̂_i⟩. Then the unique optimal M̂ is given by

M̂ = Σ_{i=1}^r |u_i⟩⟨v_i| = U Z_r V*,

where |u_i⟩ and |v_i⟩ denote the columns of U and V respectively, and Z_r is defined in (17). The residual squared error is given by (18). In addition, (a) if r = m, the optimal POVM is itself an orthogonal measurement, and if r < m, it may be realized by the optimal orthogonal measurement of Section 4; and (b) the action of the two optimal measurements in the subspace U is the same.
Weighted Least-Squares Measurement
In the previous section we sought a set of vectors |µ_i⟩ to minimize the sum of the squared errors, with each error given equal weight. However, if, for example, the jth state |φ_j⟩ is prepared with high probability, then we might wish to assign a large weight to ⟨e_j|e_j⟩.
Thus we consider the more general problem of minimizing the weighted squared error E_w given by

E_w = Σ_{i=1}^m w_i ⟨e_i|e_i⟩,

subject to the constraint M M* = P_U, where w_i > 0 is the weight given to the ith squared norm error. Throughout this section we will assume that the vectors |φ_i⟩ are linearly independent and normalized.
The derivation of the solution to this minimization problem is analogous to the derivation of the LSM with a slight modification. In addition to the matrices M and Φ, we define an m × m diagonal matrix W with diagonal elements w_i. We further define Φ_w = ΦW. We may then express E_w in terms of M, Φ_w and W; from (8) and (9), minimization of E_w is equivalent to minimization of a modified error E'_w, and this minimization problem is in turn equivalent to the least-squares minimization given by (4) if we substitute Φ_w for Φ.
Therefore we now employ the SVD of Φ_w, namely Φ_w = U_w Σ_w V_w*. Since W is assumed to be invertible, the space spanned by the columns of Φ_w = ΦW is equal to the space spanned by the columns of Φ, namely U. Thus the first m columns of U_w, denoted by |u_i^w⟩, constitute an orthonormal basis for U, and M M* = P_U. We now follow the derivation of the previous section, substituting Φ_w for Φ and U_w, V_w and σ_i^w for U, V and σ_i, respectively. The minimizing M̂_w follows from Theorem 2, where the |v_i^w⟩ are the columns of V_w. In addition, S_w = W Φ*Φ W = W S W. Assuming the vectors |φ_i⟩ are normalized, the diagonal elements of S are all equal to 1, so Tr(S_w) = Σ_{i=1}^m w_i², from which the residual squared error E_w^min follows. Recall that (σ_i^w)² and σ_i² are the eigenvalues of S_w = W S W and S, respectively. We may therefore use Ostrowski's theorem (see Appendix A) to obtain the bounds

(min_j w_j²) σ_i² ≤ (σ_i^w)² ≤ (max_j w_j²) σ_i².

Since max_i w_i ≥ 1 and min_i w_i ≤ 1, E_w^min can be greater or smaller than E_min, depending on the weights w_i.
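The WLSM construction can be sketched numerically as follows. The states and weights are illustrative choices; the check at the end verifies that, on the weighted criterion, the WLSM does at least as well as the unweighted LSM.

```python
import numpy as np

# Illustrative normalized, linearly independent states (columns) and weights.
Phi = np.array([[1.0, 0.6],
                [0.0, 0.8]])
w = np.array([0.9, 0.45])  # positive weights, e.g. square roots of priors

def lsm(A):
    """Sum of transjectors |u_i><v_i| from the SVD of A."""
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > 1e-12))
    return sum(np.outer(U[:, i], Vh[i, :]) for i in range(r))

M_lsm = lsm(Phi)                 # unweighted LSM
M_wlsm = lsm(Phi @ np.diag(w))   # WLSM: apply the LSM construction to Phi W

def weighted_error(M):
    # E_w = sum_i w_i ||mu_i - phi_i||^2
    return sum(wi * np.linalg.norm(M[:, i] - Phi[:, i]) ** 2
               for i, wi in enumerate(w))

# The WLSM can do no worse than the LSM on the weighted criterion.
assert weighted_error(M_wlsm) <= weighted_error(M_lsm) + 1e-12
print("E_w(LSM) =", round(weighted_error(M_lsm), 4),
      " E_w(WLSM) =", round(weighted_error(M_wlsm), 4))
```

In this full-rank case the feasible matrices have orthonormal columns, so minimizing E_w reduces to an orthogonal Procrustes problem with target ΦW, whose solution is exactly U_w V_w*.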
Example of the LSM and the WLSM
We now give an example illustrating the LSM and the WLSM.
Consider the two states |φ_1⟩ and |φ_2⟩. We wish to construct the optimal LSM for distinguishing between these two states. We begin by forming the matrix Φ whose columns are |φ_1⟩ and |φ_2⟩. The vectors |φ_1⟩ and |φ_2⟩ are linearly independent, so Φ is a full-rank matrix (r = 2). Using Theorem 1 we may determine the SVD Φ = U Σ V*. From (16) and (17), we then have M̂ = U Z_2 V*, whose columns |μ̂_1⟩ and |μ̂_2⟩ are the optimal measurement vectors that minimize the least-squares error defined by (2)-(3). As expected from Theorem 2, ⟨μ̂_1|μ̂_2⟩ = 0; the vectors |φ_1⟩ and |φ_2⟩ are linearly independent, so the optimal measurement vectors must be orthonormal. The LSM then consists of the orthogonal projection operators Π_1 = |μ̂_1⟩⟨μ̂_1| and Π_2 = |μ̂_2⟩⟨μ̂_2|. Figure 1 depicts the vectors |φ_1⟩ and |φ_2⟩ together with the optimal measurement vectors |μ̂_1⟩ and |μ̂_2⟩. As is evident from (52) and from Fig. 1, the optimal measurement vectors are as close as possible to the corresponding states, given that they must be orthogonal.
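The sketch below illustrates this construction with a hypothetical stand-in pair of states (two real unit vectors 60° apart), not the example's actual states; it verifies the orthonormality and symmetry properties noted above.

```python
import numpy as np

# Hypothetical stand-in for the example: two real unit states 60 degrees apart.
theta = np.pi / 3
phi1 = np.array([1.0, 0.0])
phi2 = np.array([np.cos(theta), np.sin(theta)])
Phi = np.column_stack([phi1, phi2])

# Full rank (r = m = 2), so M_hat = U Z_2 V* = U V*.
U, s, Vh = np.linalg.svd(Phi)
M_hat = U @ Vh
mu1, mu2 = M_hat[:, 0], M_hat[:, 1]

# Linearly independent states give an orthogonal measurement ...
assert np.isclose(mu1 @ mu2, 0.0)
# ... whose vectors sit symmetrically, equally close to their states.
assert np.isclose(mu1 @ phi1, mu2 @ phi2)
print("E_min =", round(float(np.sum((s - 1.0) ** 2)), 4))
```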
Suppose now we are given the additional information p_1 = p and p_2 = 1 − p, where p_1 and p_2 denote the prior probabilities of |φ_1⟩ and |φ_2⟩ respectively, and p ∈ (0, 1). We may still employ the LSM to distinguish between the two states. However, we expect that a smaller residual squared error may be achieved by employing a WLSM. In Fig. 2 we plot the residual squared error E_w^min given by (43) as a function of p, when using a WLSM with weights w_1 = √p and w_2 = √(1 − p) (we will justify this choice of weights in Section 7). When p = 1/2, w_1 = w_2 and the resulting WLSM is equivalent to the LSM. For p ≠ 1/2, the WLSM does indeed yield a smaller residual squared error than the LSM (for which the residual squared error is approximately 0.095).
Comparison With Other Proposed Measurements
We now compare our results with the SRM proposed by Hausladen et al. in [10], and with the measurement proposed by Peres and Wootters in [11].
Hausladen et al. construct a POVM consisting of rank-one operators Π_i = |μ_i⟩⟨μ_i| to distinguish between an arbitrary set of vectors |φ_i⟩. We refer to this POVM as the SRM. They give two alternative definitions of their measurement. Explicitly, M = (ΦΦ*)^{-1/2} Φ, where M denotes the matrix of columns |μ_i⟩. Implicitly, the optimal measurement vectors |μ_i⟩ are those that satisfy M*Φ = S^{1/2}, i.e., ⟨μ_j|φ_k⟩ is equal to the jk-th element of S^{1/2}, where S = Φ*Φ.
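As a sketch of these two definitions (assuming the standard explicit form M = (ΦΦ*)^{-1/2}Φ for linearly independent states; the example states are random and purely illustrative), one can verify numerically that both definitions agree and yield orthonormal measurement vectors:

```python
import numpy as np

# Illustrative set of m = 3 linearly independent states in R^3; the numeric
# values are assumptions for this sketch, not the paper's example.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(3, 3))
Phi /= np.linalg.norm(Phi, axis=0)           # normalize each column |phi_i>

# Explicit definition (assumed standard form): M = (Phi Phi*)^(-1/2) Phi.
T = Phi @ Phi.T
lam, Q = np.linalg.eigh(T)
M_srm = (Q @ np.diag(lam ** -0.5) @ Q.T) @ Phi

# Via the SVD: for full-rank Phi = U Sigma V*, the same matrix is M = U V*.
U, s, Vh = np.linalg.svd(Phi)
M_lsm = U @ Vh
assert np.allclose(M_srm, M_lsm)

# Implicit definition: <mu_j|phi_k> equals the jk-th element of S^(1/2).
S = Phi.T @ Phi
lamS, QS = np.linalg.eigh(S)
S_half = QS @ np.diag(np.sqrt(lamS)) @ QS.T
assert np.allclose(M_srm.T @ Phi, S_half)

# For linearly independent states the SRM is an orthogonal measurement.
assert np.allclose(M_srm.T @ M_srm, np.eye(3))
```

The agreement between the two constructions also previews the statement of Theorem 3 below: for linearly independent states the SRM coincides with the optimal LSM.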
Comparing these two definitions with our results of Section 3, we summarize our results regarding the SRM in the following theorem:

Theorem 3 (Square-root measurement (SRM)) Let {|φ_i⟩} be a set of m vectors in an n-dimensional complex Hilbert space H that span an r-dimensional subspace U ⊆ H. Let Φ = UΣV* be the rank-r n × m matrix whose columns are the vectors |φ_i⟩. Let |u_i⟩ and |v_i⟩ denote the columns of the unitary matrices U and V respectively, and let Z_r be defined as in (17). Let {|μ_i⟩} be m vectors satisfying ⟨μ_j|φ_k⟩ = (S^{1/2})_{jk}, where S = Φ*Φ. Then, if r = m: (a) the SRM is unique; (b) M*M = I_m and the corresponding SRM is an orthogonal measurement; (c) the SRM is equal to the optimal LSM.
If r < m:

(a) the SRM is not unique;
ii. M_U is an SRM matrix; the corresponding SRM is equal to the optimal LSM;
iii. M_U may be realized by the optimal orthogonal measurement of the same form as in the linearly independent case.

The SRM defined in [10] does not take the prior probabilities of the states |φ_i⟩ into account.
In [9], a more general definition of the SRM that accounts for the prior probabilities is given in terms of the weighted matrix Φ_w whose columns are √p_i |φ_i⟩.

We next apply our results to a problem considered by Peres and Wootters in [11]. The problem is to distinguish between three two-qubit states, where |a⟩, |b⟩ and |c⟩ correspond to polarizations of a photon at 0°, 60° and 120°, and the states have equal prior probabilities. Since the vectors |φ_i⟩ are linearly independent, the optimal measurement vectors are the columns of M̂ given by (20). Substituting (55) in (56) results in the same measurement vectors |μ̂_i⟩ as those proposed by Peres and Wootters. Thus their measurement is optimal in the least-squares sense. Furthermore, the measurement that they propose coincides with the SRM for this case. In the next section we will show that this measurement also minimizes the probability of a detection error.
The SRM for Geometrically Uniform State Sets
In this section we will consider the case in which the collection of states has a strong symmetry property, called geometric uniformity [16]. Under these conditions we show that the SRM is equivalent to the measurement minimizing the probability of a detection error, which we refer to as the MPEM. This result generalizes a similar result of Ban et al. [7].
Geometrically Uniform State Sets
Let G be a finite abelian (commutative) group of m unitary matrices U i . That is, G contains the identity matrix I; if G contains U i , then it also contains its inverse U −1 i = U * i ; the product U i U j of any two elements of G is in G; and U i U j = U j U i for any two elements in G [19].
Consider the state set S = {|φ_i⟩ = U_i|φ⟩, U_i ∈ G}, where |φ⟩ is an arbitrary state. The group G will be called the generating group of S. Such a state set has strong symmetry properties, and will be called geometrically uniform (GU). For consistency with the symmetry of S, we will assume equiprobable prior probabilities on S.
If the group G contains a rotation R such that R^k = I for some integer k > 1, then the GU state set S is linearly dependent, because Σ_{j=1}^k R^j|φ⟩ is a fixed point under R, and the only fixed point of a rotation is the zero vector |0⟩.
Since U_i* = U_i^{-1}, the inner product of two vectors in S is ⟨φ_i|φ_j⟩ = ⟨φ|U_i^{-1}U_j|φ⟩ = s(U_i^{-1}U_j), where s is the function on G defined by s(U) = ⟨φ|U|φ⟩. For fixed i, the set {U_i^{-1}U_j, 1 ≤ j ≤ m} is a permutation of the elements of G [19]. Therefore the m numbers {s(U_i^{-1}U_j), 1 ≤ j ≤ m} are a permutation of the numbers {s(U_j), 1 ≤ j ≤ m}. The same is true for fixed j. Consequently, every row and column of the m × m Gram matrix S is a permutation of the same m numbers.

It will be convenient to replace the multiplicative group G by an additive group G to which G is isomorphic. 2 Every finite abelian group G is isomorphic to a direct product G of a finite number of cyclic groups: G ≅ G = Z_{m_1} × ··· × Z_{m_p}, where Z_{m_k} is the cyclic additive group of integers modulo m_k, and m = Π_k m_k [19]. Thus every element U_i ∈ G can be associated with an element g ∈ G of the form g = (g_1, g_2, ..., g_p), where g_k ∈ Z_{m_k}. We denote this one-to-one correspondence by U_i ↔ g; the group operation in G is performed by componentwise addition modulo the corresponding m_k.
Each state vector |φ_i⟩ = U_i|φ⟩ will henceforth be denoted as |φ(g)⟩, where g ∈ G is the group element corresponding to U_i ∈ G. The zero element 0 = (0, 0, ..., 0) ∈ G corresponds to the identity matrix I ∈ G, and an additive inverse −g ∈ G corresponds to a multiplicative inverse U_i^{-1} = U_i* ∈ G. The Gram matrix is then the m × m matrix S = {s(g − g'), g', g ∈ G} with row and column indices g', g ∈ G, where s is now the function on G defined by s(g) = ⟨φ|φ(g)⟩.
The SRM
We now obtain the SRM for a GU state set. We begin by determining the SVD of Φ. To this end we introduce the following definition. The Fourier transform (FT) of a complex-valued function ϕ(g) on G is the function ϕ̂(h) = (1/√m) Σ_{g∈G} ⟨h, g⟩ ϕ(g), h ∈ G, where the Fourier kernel ⟨h, g⟩ is ⟨h, g⟩ = Π_{k=1}^p e^{2πi h_k g_k / m_k}. Here h_k and g_k are the kth components of h and g respectively, and the product h_k g_k is taken as an ordinary integer modulo m_k. The Fourier kernel evidently satisfies ⟨h, g⟩ = ⟨g, h⟩ and ⟨h, g + g'⟩ = ⟨h, g⟩⟨h, g'⟩. We define the FT matrix over G as the m × m matrix F = {(1/√m)⟨h, g⟩, h, g ∈ G}. The FT of a column vector |ϕ⟩ = {ϕ(g), g ∈ G} is then the column vector |ϕ̂⟩ = {ϕ̂(h), h ∈ G} given by |ϕ̂⟩ = F|ϕ⟩. It is easy to show that the rows and columns of F are orthonormal; i.e., F is unitary: FF* = F*F = I_m. Consequently we obtain the inverse FT formula |ϕ⟩ = F*|ϕ̂⟩. We now show that the eigenvectors of the Gram matrix S of (59) are the columns |f_h⟩ of F, with S|f_h⟩ = σ²(h)|f_h⟩ and σ²(h) = √m ŝ(h), where the last equality follows from (66), and {ŝ(h), h ∈ G} is the FT of {s(g), g ∈ G}. Thus S has the eigendecomposition S = FΣ²F*, where Σ is an m × m diagonal matrix with diagonal elements {σ(h) = m^{1/4} ŝ^{1/2}(h), h ∈ G} (the eigenvalues σ²(h) are real and nonnegative because S is Hermitian). Consequently, the V-basis of the SVD of Φ is V = F, and the singular values of Φ are σ(h).
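A minimal numerical illustration for the simplest case G = Z_m (cyclic): the Gram matrix of a GU set is then circulant, and the DFT matrix diagonalizes it. The function s below is an assumed example, not taken from the paper.

```python
import numpy as np

# For a cyclic generating group G = Z_m, the Gram matrix has entries
# S[g', g] = s(g - g' mod m), i.e. S is circulant.  The function s is an
# illustrative assumption (Hermitian: s(-g) = s(g)*, s(0) = 1).
m = 4
s = np.array([1.0, 0.3, -0.2, 0.3])          # s(g), with s(-g) = s(g)
S = np.array([[s[(g - gp) % m] for g in range(m)] for gp in range(m)])

# Fourier (DFT) matrix over Z_m, normalized to be unitary.
g = np.arange(m)
F = np.exp(-2j * np.pi * np.outer(g, g) / m) / np.sqrt(m)

# F diagonalizes S; the diagonal entries are the eigenvalues sigma^2(h).
D = F @ S @ F.conj().T
assert np.allclose(D - np.diag(np.diag(D)), 0, atol=1e-12)
eigvals = np.real(np.diag(D))
assert np.allclose(np.sort(eigvals), np.sort(np.linalg.eigvalsh(S)))
```

The diagonal of D matches the eigenvalue spectrum of S, which is the circulant/GU analog of the eigendecomposition S = FΣ²F* above.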
We now write the SVD of Φ in the following form: Φ = ΥΣF*, where Υ is the n × m matrix whose columns |u(h)⟩ are the columns of the U-basis of the SVD of Φ for values of h ∈ G such that σ(h) ≠ 0, and are zero columns otherwise. It then follows that σ(h)|u(h)⟩ = |φ̂(h)⟩, where |φ̂(h)⟩ is the hth element of the FT of Φ regarded as a row vector of column vectors, Φ = {|φ(g)⟩, g ∈ G}.
Finally, the SRM is given by the measurement matrix M = ΥF*. The measurement vectors |μ(g)⟩ (the columns of M) are thus the inverse FT of the columns of Υ: |μ(g)⟩ = (1/√m) Σ_{h∈G} ⟨h, g⟩* |u(h)⟩. Note that if |φ(g)⟩ = U_i|φ⟩ where U_i ↔ g, and U_j ↔ g', then U_j|φ(g)⟩ = U_jU_i|φ⟩ = |φ(g + g')⟩.
This shows that the measurement vectors |µ(g) have the same symmetries as the state vectors; i.e., they also form a GU set with generating group G. Explicitly, if U i ↔ g, then |µ(g) = U i |µ , where |µ denotes |µ(0) .
The SRM and the MPEM
We now show that for GU state sets the SRM is equivalent to the MPEM. In the process, we derive a sufficient condition for the SRM to minimize the probability of a detection error for a general state set (not necessarily GU) comprised of linearly independent states.
Holevo [2,4] and Yuen et al. [3] showed that a set of measurement operators Π_i comprises the MPEM for a set of weighted density operators W_i = p_i ρ_i if they satisfy Π_i (W_i − W_j) Π_j = 0 for all i, j, (78) and Γ − W_i ≥ 0 for all i, (79) where Γ = Σ_j Π_j W_j and Γ is required to be Hermitian. Note that if (78) is satisfied, then Γ is Hermitian.
In our case the measurement operators Π_i are the operators |μ(g)⟩⟨μ(g)|, and the weighted density operators may be taken simply as the projectors |φ(g)⟩⟨φ(g)|, since their prior probabilities are equal. The conditions (78)-(79) then become the conditions (81)-(82). We first verify that the conditions (78) (or equivalently (81)) are satisfied. The key quantity is w(g) = ⟨μ|φ(g)⟩, a complex-valued function that satisfies w(−g) = w*(g); substituting these relations back into (81), we obtain equalities which verify that the conditions (78) are satisfied.
Next, we show that conditions (79) are satisfied. Since M*Φ = FΣF*, we may express Γ in terms of the rows of F, where ⟨F(g)| denotes the row of F corresponding to g. From (71) and (74) we obtain the expressions (87)-(89); substituting them back into (82), the conditions of (82) reduce to an inequality in which w(0) is given by (86). It is therefore sufficient to show that the matrix T appearing in this inequality is positive semidefinite, or equivalently that ⟨u|T|u⟩ ≥ 0 for any |u⟩ ∈ C^m. Using the Cauchy-Schwarz inequality we verify that this holds, so the conditions (79) are satisfied. We conclude that when the state set S is GU, the SRM is also the MPEM.
An alternative way of deriving this result for the case of linearly independent states |φ_i⟩ is by use of the following criterion of Sasaki et al. [13]. Denote by Φ_w the matrix whose columns are the weighted vectors |φ_i^w⟩ = √p_i |φ_i⟩; the criterion states that the SRM is the MPEM if the diagonal elements of S^{1/2}, where S = Φ_w*Φ_w, are all equal. This condition is hard to verify directly from the vectors |φ_i^w⟩. The difficulty arises from the fact that generally there is no simple relation between the diagonal elements of S^{1/2} and the elements of S. Thus given an ensemble of pure states |φ_i⟩ with prior probabilities p_i, we typically need to calculate S^{1/2} (which in itself is not simple to do analytically) in order to verify the condition above.
However, as we now show, in some cases this condition may be verified directly from the elements of S using the SVD.
Employing the SVD Φ_w = UΣV*, we may express S^{1/2} as S^{1/2} = VΣV*, where Σ is a diagonal matrix with the first r diagonal elements equal to σ_i and the remaining elements all equal to zero, and the σ_i are the singular values of Φ_w. Thus, the WSRM is equal to M = UV*. We summarize our results regarding the SRM for GU state sets in Theorem 4: the SRM has the following properties:

1. The measurement matrix M has the same symmetries as Φ;
2. The SRM is the least-squares measurement (LSM);
3. The SRM is the minimum-probability-of-error measurement (MPEM).
Example of a GU State Set
We now consider an example demonstrating the ideas of the previous section. Consider a group G of m = 4 commuting unitary matrices U_i, and let the state set be S = {|φ_i⟩ = U_i|φ⟩, 1 ≤ i ≤ 4}, with Gram matrix S. Note that the sum of the states |φ_i⟩ is |0⟩, so the state set is linearly dependent.
In this case G is isomorphic to G = Z_2 × Z_2, i.e., G = {(0, 0), (0, 1), (1, 0), (1, 1)}, with the group operation given by the multiplication table of G and an appropriate correspondence U_i ↔ g. Over G = Z_2 × Z_2, the Fourier matrix F is the 4 × 4 Hadamard matrix (normalized by 1/2). Using (72) and (74), we may find the measurement matrix M of the SRM. We verify that the columns |μ_i⟩ of M may be expressed as |μ_i⟩ = U_i|μ_1⟩, 1 ≤ i ≤ 4. Thus the measurement vectors |μ_i⟩ also form a GU set generated by G.
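The role of the Hadamard matrix as the Fourier matrix over Z_2 × Z_2 can be checked numerically. The function s(g) below is an assumed illustration (chosen so that the all-ones character has eigenvalue zero, mirroring the linear dependence noted above); it is not the paper's Gram matrix.

```python
import numpy as np

# Fourier matrix over G = Z_2 x Z_2: the normalized 4x4 Hadamard matrix.
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]]) / 2.0
assert np.allclose(H @ H.T, np.eye(4))       # H is unitary (real orthogonal)

# Group elements ordered (0,0), (0,1), (1,0), (1,1); addition in G is the
# bitwise XOR of the index.  s(g) is an assumed illustration, chosen so the
# all-ones character gets eigenvalue 0 (a linearly dependent state set).
s = np.array([1.0, -0.5, -0.5, 0.0])
S = np.array([[s[g ^ gp] for g in range(4)] for gp in range(4)])

# H diagonalizes every group-structured Gram matrix of this form.
D = H @ S @ H.T
assert np.allclose(D - np.diag(np.diag(D)), 0)
assert np.isclose(np.min(np.diag(D)), 0.0)   # zero eigenvalue: dependence
```

The zero eigenvalue corresponds to the character that is constant on G, the analog of the sum of the states vanishing.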
Applications of GU State Sets
We now discuss some applications of Theorem 4.
A. Binary state set: Any binary state set S = {|φ_1⟩, |φ_2⟩} is GU, because it can be generated by the binary group G = {I, R}, where I is the identity and R is the reflection about the hyperplane halfway between the two states. Specifically, if the two states |φ_1⟩ and |φ_2⟩ are real, then R = I − 2|w⟩⟨w|/⟨w|w⟩, where |w⟩ = |φ_2⟩ − |φ_1⟩. We may immediately verify that R² = I, so that R^{-1} = R, and that R|φ_1⟩ = |φ_2⟩. If the states are complex with ⟨φ_1|φ_2⟩ = ae^{jθ}, then define |φ'_2⟩ = e^{-jθ}|φ_2⟩. The states |φ_2⟩ and |φ'_2⟩ differ by a phase factor and therefore correspond to the same physical state. We may therefore replace our state set S = {|φ_1⟩, |φ_2⟩} by the equivalent state set S = {|φ_1⟩, |φ'_2⟩}. Now the generating group is G = {I, R}, where R is defined by (102), with |w⟩ = |φ'_2⟩ − |φ_1⟩.
The generating group G = {I, R} is isomorphic to G = Z_2. The Fourier matrix F therefore reduces to the 2 × 2 discrete FT (DFT) matrix F = (1/√2)[[1, 1], [1, −1]]. The squares of the singular values of Φ are the DFT values of {s(g), g ∈ G}, with s(0) = 1 and s(1) = a; thus σ²(0) = 1 + a and σ²(1) = 1 − a. From Theorem 4 we then obtain the measurement matrix (105). We may now apply (105) to the example of Section 6. In that example a = ⟨φ_1|φ_2⟩ = −1/2.
which is equivalent to the optimal measurement matrix obtained in Section 6.
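As a numerical sketch of the binary case (the explicit realization of the two states is an assumption; only the inner product a matters):

```python
import numpy as np

# Two real normalized states with inner product a; a = -0.5 echoes the
# example above, and the concrete vectors are one assumed realization.
a = -0.5
Phi = np.array([[1.0, a],
                [0.0, np.sqrt(1.0 - a * a)]])

# The squares of the singular values of Phi are the DFT values 1 + a, 1 - a.
sigma = np.linalg.svd(Phi, compute_uv=False)
assert np.allclose(np.sort(sigma ** 2), np.sort([1.0 + a, 1.0 - a]))

# The SRM/LSM measurement vectors are orthonormal (states are independent).
U, s, Vh = np.linalg.svd(Phi)
M = U @ Vh
assert np.allclose(M.T @ M, np.eye(2))
```

With a = −1/2, the squared singular values are 1/2 and 3/2, as the 2-point DFT of {1, a} predicts.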
We could have obtained the measurement vectors directly from the symmetry property of the state set: the states are invariant under the reflection R about the hyperplane halfway between the two states, as illustrated in Fig. 3. The measurement vectors must also be invariant under the same reflection. In addition, since the states are linearly independent, the measurement vectors must be orthonormal. This completely determines the measurement vectors shown in Fig. 3.
(The only other possibility, namely the negatives of these two vectors, is physically equivalent.)
Conclusion
In this paper we constructed optimal measurements in the least-squares sense for distinguishing between a collection of quantum states. We considered POVMs consisting of rank-one operators, where the vectors were chosen to minimize a possibly weighted sum of squared errors. We saw that for linearly independent states the optimal least-squares measurement is an orthogonal measurement, which coincides with the SRM proposed by Hausladen et al. [10]. If the states are linearly dependent, then the optimal POVM still has the same general form. We showed that it may be realized by an orthogonal measurement of the same form as in the linearly independent case. We also noted that the SRM, which was constructed by Hausladen et al. [10] and used to achieve the classical channel capacity of a quantum channel, may always be chosen as an orthogonal measurement.
We showed that for a GU state set the SRM minimizes the probability of a detection error. We also derived a sufficient condition for the SRM to minimize the probability of a detection error in the case of linearly independent states based on the properties of the SVD.
Appendix A. Properties of the Residual Squared Error
We noted at the beginning of Section 3 that if the vectors |φ i are mutually orthonormal, then the optimal measurement is a set of projections onto the states |φ i , and the resulting squared error is zero. In this case S = Φ * Φ = I m , and σ i = 1, 1 ≤ i ≤ m.
If the vectors |φ_i⟩ are normalized but not orthogonal, then we may decompose S as S = I_m + D, where D is the matrix of inner products ⟨φ_i|φ_j⟩ for i ≠ j and has diagonal elements all equal to 0. We expect that if the inner products are relatively small, i.e., if the states |φ_i⟩ are nearly orthonormal, then we will be able to distinguish between them fairly well; equivalently, we would expect the singular values to be close to 1. Indeed, from [20] we have the bound |σ_i² − 1| ≤ ρ(D) on the eigenvalues σ_i² of S = I + D, where ρ(D) is the spectral radius of D.

We now point out some properties of the minimal achievable squared error E_min given by (19).
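As an illustrative check of this near-orthonormality bound (using Weyl's perturbation inequality |λ_i(I + D) − 1| ≤ ‖D‖₂, a standard result; the random small inner products below are assumed values):

```python
import numpy as np

# Nearly orthonormal states: S = I + D with small assumed off-diagonal
# inner products D.  Weyl's bound gives |lambda_i(S) - 1| <= ||D||_2,
# so the sigma_i^2 stay close to 1 when the inner products are small.
rng = np.random.default_rng(1)
G = 0.05 * rng.normal(size=(4, 4))
D = (G + G.T) / 2.0                          # Hermitian perturbation
np.fill_diagonal(D, 0.0)                     # diagonal of S stays exactly 1
S = np.eye(4) + D
lam = np.linalg.eigvalsh(S)
assert np.all(np.abs(lam - 1.0) <= np.linalg.norm(D, 2) + 1e-12)
```

For Hermitian D the spectral norm ‖D‖₂ equals the spectral radius ρ(D), so this check matches the bound stated above.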
For a given m, E min depends only on the singular values of the matrix Φ. Consequently, any linear operation on the vectors |φ i that does not affect the singular values of Φ will not affect E min .
For example, if we obtain a new set of states |φ ′ i by unitary mixing of the states |φ i , i.e., Φ ′ = ΦQ * where Q is an m × m unitary matrix, then the new optimal measurement vectors |µ ′ i will typically differ from the measurement vectors |μ i ; however the minimal achievable squared error is the same. Indeed, defining S ′ = Φ ′ * Φ ′ = QSQ * , where S = Φ * Φ, we see that the matrices S ′ and S are related through a similarity transformation and consequently have equal eigenvalues [20].
Next, suppose we obtain a new set of states |φ'_i⟩ by a general nonsingular linear mixing of the states |φ_i⟩, i.e., Φ' = ΦA*, where A is an arbitrary m × m nonsingular matrix. In this case the eigenvalues of S' = ASA* will in general differ from the eigenvalues of S. Nevertheless, we have the following theorem: Theorem 5: Let A = {a_ij} denote the matrix whose ij-th element is a_ij. Let λ_1(AA*) and λ_m(AA*) denote the largest and smallest eigenvalues of AA* respectively, and let {σ_i, 1 ≤ i ≤ r} denote the singular values of the matrix Φ of columns |φ_i⟩. Then, 2(1 − √λ_1(AA*)) Σ_{i=1}^r σ_i ≤ E'_min − E_min ≤ 2(1 − √λ_m(AA*)) Σ_{i=1}^r σ_i. Thus, E'_min ≤ E_min if λ_m(AA*) ≥ 1 and E'_min ≥ E_min if λ_1(AA*) ≤ 1.
In particular, if A is unitary then E min = E ′ min .
Proof: We rely on the following theorem due to Ostrowski (see e.g., [20], p. 224): Ostrowski Theorem: Let A and S denote m × m matrices with S Hermitian and A nonsingular, and let S ′ = ASA * . Let λ k (·) denote the kth eigenvalue of the corresponding matrix, where the eigenvalues are arranged in decreasing order. For every 1 ≤ i ≤ m, there exists a positive real number a i such that λ m (AA * ) ≤ a i ≤ λ 1 (AA * ) and λ i (S ′ ) = a i λ i (S).
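Ostrowski's theorem can be checked numerically; the matrices below are random illustrative choices, not from the paper:

```python
import numpy as np

# Numerical check of Ostrowski's theorem: for Hermitian S and nonsingular A,
# lambda_i(A S A*) = a_i * lambda_i(S) with
# lambda_min(A A*) <= a_i <= lambda_max(A A*), both spectra sorted alike.
rng = np.random.default_rng(2)
G = rng.normal(size=(4, 4))
S = G @ G.T + np.eye(4)                      # Hermitian positive definite
A = rng.normal(size=(4, 4)) + 2 * np.eye(4)  # nonsingular (generic choice)

lam_S = np.sort(np.linalg.eigvalsh(S))
lam_Sp = np.sort(np.linalg.eigvalsh(A @ S @ A.T))
lam_AA = np.linalg.eigvalsh(A @ A.T)

ratios = lam_Sp / lam_S                      # the a_i of the theorem
assert np.all(ratios >= lam_AA.min() - 1e-9)
assert np.all(ratios <= lam_AA.max() + 1e-9)
```

Each ratio of matched eigenvalues falls inside [λ_m(AA*), λ_1(AA*)], as the theorem guarantees.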
Combining this theorem with the expression (19) for the residual squared error results in E'_min − E_min = 2 Σ_{i=1}^r (1 − √a_i) σ_i. Substituting λ_m(AA*) ≤ a_i ≤ λ_1(AA*) results in Theorem 5. If A is unitary, then AA* = I, and λ_i(AA*) = 1 for all i.
Benzo[a]pyrene (BaP) is a human carcinogen that covalently binds to DNA after activation by cytochrome P450 (P450). Here, we investigated whether NADH:cytochrome b5 reductase (CBR) in the presence of cytochrome b5 can act as sole electron donor to human P450 1A1 during BaP oxidation and replace the canonical NADPH:cytochrome P450 reductase (POR) system. We also studied the efficiencies of the coenzymes of these reductases, NADPH as a coenzyme of POR, and NADH as a coenzyme of CBR, to mediate BaP oxidation. Two systems containing human P450 1A1 were utilized: human recombinant P450 1A1 expressed with POR, CBR, epoxide hydrolase, and cytochrome b5 in Supersomes and human recombinant P450 1A1 reconstituted with POR and/or with CBR and cytochrome b5 in liposomes. BaP-9,10-dihydrodiol, BaP-7,8-dihydrodiol, BaP-1,6-dione, BaP-3,6-dione, BaP-9-ol, BaP-3-ol, a metabolite of unknown structure, and two BaP-DNA adducts were generated by the P450 1A1-Supersomes system, both in the presence of NADPH and in the presence of NADH. The major BaP-DNA adduct detected by 32P-postlabeling was characterized as 10-(deoxyguanosin-N2-yl)-7,8,9-trihydroxy-7,8,9,10-tetrahydro-BaP (assigned adduct 1), while the minor adduct is probably a guanine adduct derived from 9-hydroxy-BaP-4,5-epoxide (assigned adduct 2). BaP-3-ol as the major metabolite, BaP-9-ol, BaP-1,6-dione, BaP-3,6-dione, an unknown metabolite, and adduct 2 were observed in the system using P450 1A1 reconstituted with POR plus NADPH. When P450 1A1 was reconstituted with CBR and cytochrome b5 plus NADH, BaP-3-ol was the predominant metabolite too, and an adduct 2 was also generated. Our results demonstrate that the NADH/cytochrome b5/CBR system can act as the sole electron donor both for the first and second reduction of P450 1A1 during the oxidation of BaP in vitro. They suggest that NADH-dependent CBR can replace NADPH-dependent POR in the P450 1A1-catalyzed metabolism of BaP.
Benzo[a]pyrene (BaP) is a polycyclic aromatic hydrocarbon (PAH) that has been classified as a human carcinogen (Group 1) by the International Agency for Research on Cancer. 1 It is a pro-carcinogen requiring metabolic activation catalyzed by cytochrome P450 (P450) enzymes prior to reaction with DNA. 2 Among the P450s, P450 1A1 is the most important enzyme, in combination with microsomal epoxide hydrolase (mEH), involved in the metabolic activation of BaP to species forming DNA adducts. 2,3 First, P450 1A1 oxidizes BaP to an epoxide (i.e. BaP-7,8-epoxide) that is then converted to a dihydrodiol by mEH (i.e. BaP-7,8-dihydrodiol). Further bioactivation by P450 1A1 leads to the ultimate reactive species, BaP-7,8-dihydrodiol-9,10-epoxide (BPDE), which can react with DNA, forming adducts preferentially at guanine residues ( Figure 1). The 10-(deoxyguanosin-N 2 -yl)-7,8,9-trihydroxy-7,8,9,10-tetrahydrobenzo[a]pyrene (dG-N 2 -BPDE) adduct is the major product of the reaction of BPDE with DNA in vitro and in vivo. [4][5][6][7][8][9][10]

However, BaP is also oxidized to other metabolites such as other dihydrodiols, BaP-diones and further hydroxylated metabolites. 2,6,[11][12][13][14][15][16] Although most of these metabolites are detoxification products, BaP-9-ol is the precursor of 9-hydroxy-BaP-4,5-epoxide that can form another adduct with deoxyguanosine in DNA ( Figure 1). 7,8,10,15,[17][18][19]

The P450 enzymes, including P450 1A1, are components of a mixed-function oxidase (MFO) system located in the membrane of the endoplasmic reticulum (microsomes). This enzymatic system also contains other enzymes, the multidomain flavoprotein NADPH:cytochrome P450 oxidoreductase (POR) and cytochrome b 5 accompanied by its NADH:cytochrome b 5 reductase (CBR). Mammalian microsomal P450s function by catalyzing the insertion of one atom of molecular oxygen into a variety of xenobiotics, including BaP, while reducing the other atom to water, a reaction that requires two electrons.
20 The oxygen is activated in the active center of P450s by two electrons. It is generally accepted that POR with NADPH serves as donor of electrons for both reductions of P450 in the MFO reaction cycle. 20 However, the second electron may also be provided by CBR with cytochrome b 5 and NADH, but cytochrome b 5 seems to have also additional roles in the monooxygenase system. [20][21][22][23][24][25][26][27][28] Although POR is considered an essential constituent of the electron transport chain towards P450, 20 its exact role in the P450-mediated reaction cycle is still not clearly established. Recently we used two mouse models, in which the expression of POR has been permanently (the Hepatic P450 Reductase Null (HRN) line) or conditionally (the Reductase Conditional Null (RCN) line) deleted in hepatocytes leading to a lack of almost all hepatic POR activity. Despite this lack of POR the levels of the P450-mediated dG-N 2 -BPDE adducts in the livers of mice of both lines exposed to BaP were higher than in BaP-treated wild-type mice. 7,8,19 These findings suggested BaP activation in other liver cells other than hepatocytes (e.g. Kupffer or endothelial cells), 29 or bioactivation of BaP by non-P450 enzymes (e.g. prostaglandin H synthase and lipoxygenases), 30,31 or combinations of these mechanisms were operative. However, these phenomena might also indicate that another reductase such as microsomal CBR can contribute to the P450-mediated BaP oxidation in these animal models.
The latter possibility is supported by experiments with rat P450s indicating the involvement of cytochrome b 5 in the NADH-dependent hydroxylation of BaP in a reconstituted P450containing system. 32,33 Studies using cytochrome b 5 -knockout mouse lines, namely, HBN mice (Hepatic cytochrome b 5 Null) with a conditional hepatic deletion of cytochrome b 5 , and HBRN mice (Hepatic cytochrome b 5 /P450 Reductase Null), in which POR and cytochrome b 5 are deleted specifically in the liver, also indicate the involvement of CBR in the P450mediated metabolism of some other P450 substrates. 27,34,35 We have recently demonstrated that the NADH/cytochrome b 5 /CBR system is indeed able to function as the sole electron donor for both reduction steps of rat P450 1A1 during 6 oxidation of BaP in vitro. 15 Although this function of the NADH/cytochrome b 5
Preparation of Human Recombinant P450 1A1 and Rat Recombinant POR.
Human recombinant P450 1A1 (EC 1.14.14.1) and rat POR (EC 1.6.2.4) were prepared by heterologous expression in E. coli and purified to apparent homogeneity (i.e. as single bands on sodium dodecyl sulfate-polyacrylamide gel electrophoresis) as described recently. 36 The specific content of human recombinant CYP1A1 was 11.5 nmol/mg protein. Cytochrome b 5 reductase (CBR) (E.C. 1.6.2.2) was isolated from rat liver microsomes by a procedure described by Perkins and Duncan. 38 The specific activity of rat CBR measured as NADH-ferricyanide reductase was 49.2 µmol ferricyanide/min/mg protein. Cytochrome b 5 was isolated from rabbit liver microsomes as described. 39 Both proteins purified to apparent homogeneity 38,39 were utilized in the reconstitution experiments. For reconstitution, purified P450 1A1 and purified rat POR or CBR (200 pmol each, in a ratio of 1:1), without or with cytochrome b 5 (in a ratio of P450 1A1 and reductase to cytochrome b 5 of 1:5), were added to the prepared dispersion and incubated at 20°C for 10 min. As shown in previous studies, 19,25,37,42,43 the enzymatic activity of human P450 1A1 reconstituted with POR and cytochrome b 5 from several animal models was the same as that of the enzyme reconstituted with the human orthologs of these enzymes; the same has been found previously with other substrates. 7,8,19,25,28,44 The reaction was initiated by adding the NADPH or NADH.
Incubations to Study the Metabolism of BaP by Human P450 1A1.
In separate experiments 100 nM pure human recombinant P450 1A1 reconstituted individually with other components of the MFO system was used instead of P450 1A1 in Supersomes™. Negative control reactions lacked either P450 1A1, reductases or BaP. After incubation (37°C, 20 min), 5 µl of 1 mM phenacetin in methanol was added as an internal standard. BaP metabolism by the P450 1A1 systems has been shown to be linear up to 30 min of incubation. 15,16,19 BaP metabolites were extracted twice with ethyl acetate (2 × 1 ml), the solvent was evaporated to dryness, the residues were dissolved in 25 µl methanol and subsequently BaP metabolites were separated by HPLC as reported. 19,45 BaP metabolite peaks were identified by comparison with metabolite standards whose structures were determined previously by NMR and/or mass spectrometry. 19

Determination of BaP-DNA Adduct Formation by 32 P-postlabeling.

Incubation mixtures used to assess DNA adduct formation by BaP activated with all enzymatic systems containing human P450 1A1 consisted of 50 mM potassium phosphate buffer (pH 7.4), 1 mM NADPH or NADH, 100 nM human recombinant P450 1A1 plus other enzymes as indicated in the figures (or as described above), 0.1 mM BaP (dissolved in 7.5 µl DMSO), and 0.5 mg of calf thymus DNA in a final volume of 0.75 ml as described previously. 19 The reaction was initiated by adding 0.1 mM BaP and incubations were carried out at 37°C for 60 min. BaP-DNA adduct formation has been shown to be linear up to 90 min. 7,19 Control incubations were carried out without P450 1A1, or without reductases, or without cytochrome b 5 , or without NADPH (or NADH), or without DNA, or without BaP. After the incubation, DNA was isolated from the residual water phase by standard phenol/chloroform extraction. DNA adduct formation was analyzed using the nuclease P1 version of the 32 P-postlabeling technique.
7,19 Resolution of the adducts by thin-layer chromatography (TLC) using polyethylenimine-cellulose plates (Macherey and Nagel, Düren, Germany) was carried out as described. 7,19,46 DNA adduct levels (RAL, relative adduct labelling) were calculated as described. 47

Statistical Analyses.

Statistical analyses were carried out on the means ± standard deviations of three parallel experiments with Student's t-test (UNISTAT Statistics Software v6, Unistat Ltd., Highgate, London N6 5UQ, UK) and p < 0.05 was considered significant.
Statistical association between amounts of BaP metabolites formed by oxidation of BaP by the P450 reconstituted systems containing POR or CBR and levels of BaP-DNA adduct 2 formed by the same systems were determined by the Spearman correlation coefficients using version 6.12 Statistical Analysis System software. Spearman correlation coefficients were based on a sample size of 6. All Ps are two-tailed and considered significant at the 0.05 level.
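For concreteness, a minimal sketch of the Spearman rank correlation used here (n = 6 pairs as above); the data values are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Minimal sketch of the Spearman rank correlation for n = 6 pairs.
# The data values are invented for illustration only.
def spearman_rho(x, y):
    def ranks(v):                            # simple ranking; toy data has no ties
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    return np.corrcoef(rx, ry)[0, 1]         # Pearson correlation of the ranks

metabolite = [1.2, 2.5, 3.1, 4.0, 4.8, 6.0]  # hypothetical BaP-9-ol levels
adduct = [0.3, 0.7, 0.9, 1.4, 1.6, 2.1]      # hypothetical adduct 2 levels
rho = spearman_rho(metabolite, adduct)
assert np.isclose(rho, 1.0)                  # perfectly monotone toy data
```

Spearman's coefficient is simply the Pearson correlation of the rank-transformed data, which is why it captures the monotone metabolite-adduct associations reported below.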
Oxidation of BaP by Human P450 1A1 Expressed in Supersomes™ and Pure Human Recombinant P450 1A1 Reconstituted with POR or CBR.

In order to evaluate the role of POR and CBR in the reduction of human P450 1A1 during BaP oxidation, two enzymatic systems containing this human P450 were utilized. The first system used Supersomes™ containing human recombinant P450 1A1 expressed with POR, CBR, mEH and/or cytochrome b 5 . In the second system, pure human recombinant P450 1A1 was reconstituted in liposomes with either pure POR or CBR. The latter enzymatic system was utilized with or without cytochrome b 5 to examine the function of both reductases as electron donors to P450 1A1 during BaP metabolism with NADPH (cofactor of POR) or NADH (cofactor of CBR).
The BaP metabolites formed by human P450 1A1 in these enzyme systems were analyzed by HPLC (Figures 2A and 2B). In addition, a metabolite of unknown structure (Mx) was detected. Essentially no BaP metabolites were found when both NADPH and NADH were omitted from the incubation mixtures containing the P450 1A1-Supersomes™ ( Figure 2C).
NADH was less effective than NADPH as an electron donor to human P450 1A1 in Supersomes™ ( Figure 2C). Addition of cytochrome b 5 to the incubation mixtures led to an increase in P450 1A1-mediated BaP oxidation both in the presence of NADPH and in the presence of NADH ( Figure 2C).
In the second enzymatic system, where pure human recombinant P450 1A1 was reconstituted with POR with or without cytochrome b 5 in liposomes, this P450 enzyme was also able to oxidize BaP in the presence of NADPH (Figure 3). Only five BaP metabolites were formed, which were BaP-3-ol (M7) as the major metabolite, BaP-9-ol (M6), BaP-1,6-dione (M4), BaP-3,6-dione (M5) and metabolite Mx ( Figure S1). Because mEH was absent from the system, no dihydrodiols were formed. Addition of cytochrome b 5 to this P450 1A1 system increased formation of all BaP metabolites, in particular BaP-3-ol (Figure 3 and Figure S1). In contrast, NADH did not lead to significant BaP oxidation by P450 1A1 reconstituted with POR ( Figure 3 and Figure S1), which confirms that NADH functions as a coenzyme of POR at a very slow rate that is negligible relative to NADPH, as we showed recently with cytochrome c as a substrate for POR. 15 No BaP metabolites were found when both NADPH and NADH were omitted from the incubation mixtures containing the human recombinant P450 1A1 reconstituted with POR or CBR (Figure 3). In the enzymatic system where human P450 1A1 was reconstituted with CBR and cytochrome b 5 in liposomes, BaP was predominantly oxidized to BaP-3-ol (M7) and to a lower extent to BaP-9-ol (M6), BaP-1,6-dione (M4), BaP-3,6-dione (M5) and a metabolite Mx ( Figure S1). Cytochrome b 5 as a substrate of CBR was necessary for BaP oxidation in the system of human P450 1A1 reconstituted with CBR with NADH as cofactor. Without this protein, essentially no BaP oxidation was detectable by P450 1A1 reconstituted with CBR ( Figure 3 and Figure S1). Addition of POR to the reconstituted system of human P450 1A1 with CBR and cytochrome b 5 did not change BaP metabolite levels ( Figure 3 and Figure S1).

BaP was activated with human P450 1A1 in Supersomes™, in the presence of either NADPH or NADH, and two BaP-DNA adducts were generated in these incubations (Figure 4).
No such BaP adducts were found when both NADPH and NADH were omitted from the incubation mixtures containing the P450 1A1-Supersomes™ (Figure 4). Adduct 1 was the BaP-DNA adduct predominantly formed in this system (Figure 4). Comparison with previous 32 P-postlabeling analyses 10,19 showed that adduct 1 is the dG-N 2 -BPDE adduct. 10,19 The other adduct, which was formed in this enzymatic system as only a minor product, if detectable at all (e.g. there was no formation of this adduct in Supersomes™ in the presence of NADPH), has similar chromatographic properties by TLC to a guanine adduct derived from reaction with 9-hydroxy-BaP-4,5-epoxide (see adduct spot 2 in insert of Figure 1 and Figures 5B and 6).

In the system of human P450 1A1 reconstituted with POR, only adduct 2 was generated ( Figures 5B and 6); because of the absence of mEH, no adduct 1 (i.e. dG-N 2 -BPDE) was formed. Addition of cytochrome b 5 to this P450 1A1 system decreased the levels of adduct 2, but this decrease was not statistically significant ( Figure 6). NADH was essentially ineffective in mediating the activation of BaP by P450 1A1 reconstituted with POR ( Figures 5B and 6), which again indicates that NADH functions as a coenzyme of POR at a very slow rate that is negligible relative to NADPH.
Human P450 1A1 reconstituted with CBR and cytochrome b 5 also generated adduct 2.
The presence of cytochrome b 5 as a substrate of CBR and NADH as its cofactor was essential for this adduct formation; NADPH, the cofactor of POR, was ineffective (Figures 5B and 6). No BaP-DNA adduct 2 was found when both NADPH and NADH were omitted from the incubation mixtures containing human recombinant P450 1A1 reconstituted with POR or CBR (Figure 6). Levels of adduct 2 formed by human P450 1A1 in the presence of the NADH/cytochrome b 5 /CBR system were lower than those formed in the system of human P450 1A1 containing POR and NADPH (Figures 5 and 6). These results corresponded to lower levels of BaP-9-ol (a precursor of 9-hydroxy-BaP-4,5-epoxide generating this adduct) and other BaP metabolites formed in this system (see Figure S1). Significant correlations were found between levels of adduct 2 and BaP-9-ol (r = 0.943, P < 0.001, Spearman correlation) or BaP-3-ol (r = 0.943, P < 0.001), whereas no such correlations were found between levels of this adduct and BaP-1,6-dione, BaP-3,6-dione or a metabolite Mx (r < 0.5).
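The Spearman rank correlations reported here can be reproduced in a few lines. The sketch below (hypothetical metabolite levels in arbitrary units, invented for illustration only, not the study's data) implements the statistic in plain Python:

```python
# Illustrative sketch (not the authors' analysis code): Spearman's rank
# correlation of the kind reported between adduct 2 and BaP-9-ol levels.

def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank-transformed series."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical adduct-2 and BaP-9-ol levels across incubation conditions
adduct2 = [0.2, 1.1, 2.4, 3.0, 4.7]
bap9ol = [0.3, 1.0, 2.6, 3.3, 5.0]
print(round(spearman(adduct2, bap9ol), 3))  # monotone data -> 1.0
```

Because the statistic depends only on ranks, it is robust to the nonlinear concentration scales typical of metabolite assays, which is presumably why the authors chose it over Pearson's r.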
DISCUSSION
The metabolism of BaP has been extensively studied over the past decades 2 and various studies have examined the role of P450 enzymes, particularly P450 1A1 of several species, to metabolize this carcinogen. 2,6,7,8,15,16,19,28,44 However, the mechanism of the reaction cycle of BaP oxidation catalyzed by P450 1A1, particularly the roles of POR and CBR as electron donors to P450 1A1, has not yet been fully resolved. 7,8,15,19 Enigmatic results have been found in two mouse models where deletion of POR in hepatocytes did not lead to an expected decrease in BaP-DNA adduct formation in liver in vivo, but instead to higher BaP-DNA binding. 7,8,19 We also showed that in livers of HRN mice P450 1A1, cytochrome b 5 and mEH can effectively activate BaP to DNA binding species, even in the presence of very low amounts of POR. 19 Because this feature has biological significance, studying the role of the enzymes reducing P450 1A1 is important to better understand the mechanism(s) involved in BaP metabolism. Recently we demonstrated that the NADH/cytochrome b 5 /CBR system is able to function as the sole donor of electrons for both reduction steps of rat P450 1A1 during BaP oxidation in vitro. 15 This finding indicates that CBR as an NADH-dependent reductase might substitute POR in the P450 1A1-mediated BaP metabolism and might help to explain our enigmatic results in the POR-knockout mouse models. 7,8,19 However, the question whether this novel function of the NADH/cytochrome b 5 /CBR system as electron donor to rat P450 1A1 in BaP metabolism represents a general feature for the P450 1A1 reaction cycle in other species including humans remained to be answered. 
Therefore, the primary aim of this study was to determine whether the NADH/cytochrome b 5 /CBR system can be the exclusive donor of electrons to the human P450 1A1 orthologue during BaP oxidation. Using the first system, human P450 1A1-Supersomes™, we proved that NADH acts as electron donor for both reductions of the human P450 1A1 orthologue in BaP oxidation, independent of NADPH and POR, provided CBR and cytochrome b 5 are present. This conclusion is supported by the fact that NADH functions as a poor coenzyme of POR when cytochrome c is used as its substrate. 15 Our results therefore confirm the novel feature of the mechanism of the catalytic cycle of human P450 1A1 during BaP oxidation previously described for rat P450 1A1. We demonstrated that the reaction cycle of BaP oxidation catalyzed by human P450 1A1 can proceed by ways that differ from the generally accepted mechanism, where the first reduction of P450 is considered to be catalyzed by POR without cytochrome b 5 . 20,48-52 Considering the redox potentials of cytochrome b 5 (+20 mV) 53,54 and ferric substrate-bound P450 (-237 mV), 27,53 it is thermodynamically impossible for cytochrome b 5 to provide the first electron in the P450 catalytic cycle. 55 Given that the redox potential of oxyferrous P450 is also approximately +20 mV, it is feasible that cytochrome b 5 can supply the second electron into the catalytic cycle. 27,53,55 However, based on the redox potential of CBR determined under anaerobic conditions (-265 mV), 56 CBR could provide the first electron in the P450 catalytic cycle. 27,57 Moreover, by Le Chatelier's principle, the reduced P450 will rapidly bind dioxygen under the aerobic conditions of the experiments. Indeed, in the present study we demonstrate that the first reduction of P450 1A1, which previously had been considered to be mediated exclusively by the NADPH/POR system, can be substituted by the NADH/cytochrome b 5 /CBR system.
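The thermodynamic argument can be checked numerically from the quoted midpoint potentials via ΔG° = −nFΔE°, with ΔE° = E°(acceptor) − E°(donor). The sketch below is a back-of-the-envelope calculation only (one-electron transfers, standard-state potentials as quoted in the text):

```python
# Back-of-the-envelope check of the redox feasibility argument:
# electron transfer is favorable when dG = -n*F*dE < 0.
F = 96485.0  # Faraday constant, C/mol

def delta_g_kj(e_donor, e_acceptor, n=1):
    """dG in kJ/mol for transfer of n electrons from donor to acceptor,
    given midpoint potentials in volts."""
    return -n * F * (e_acceptor - e_donor) / 1000.0

# cytochrome b5 (+20 mV) -> ferric substrate-bound P450 (-237 mV):
# dG > 0, so the first electron cannot come from cytochrome b5
print(round(delta_g_kj(0.020, -0.237), 1))   # +24.8 kJ/mol

# CBR (-265 mV) -> ferric substrate-bound P450 (-237 mV):
# dG < 0, so CBR can supply the first electron
print(round(delta_g_kj(-0.265, -0.237), 1))  # -2.7 kJ/mol
```

The same function shows why cytochrome b 5 can plausibly supply the second electron: oxyferrous P450 sits near +20 mV, making that transfer roughly thermoneutral rather than strongly uphill.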
This reaction was found not only in Supersomes™ containing human recombinant P450 1A1, but, even more importantly, in the system of pure human P450 1A1 reconstituted with CBR and cytochrome b 5 . Such NADH/cytochrome b 5 /CBR-mediated activity of the human P450 1A1 systems was proven by the formation of up to seven BaP metabolites [BaP-9,10-dihydrodiol (M1), BaP-7,8-dihydrodiol (M3), BaP-1,6-dione (M4), BaP-3,6-dione (M5), BaP-9-ol (M6), BaP-3-ol (M7) and a metabolite of unknown structure (Mx)]. In addition, these systems generated two BaP-DNA adducts, the dG-N 2 -BPDE adduct (adduct 1) and/or a guanine adduct derived from the reaction with 9-hydroxy-BaP-4,5-epoxide (adduct 2). The BaP metabolite profiles formed by P450 1A1 with the NADH/cytochrome b 5 /CBR system were the same as in the system where NADPH and POR were used as electron donors. BaP-4,5-dihydrodiol (M2), which was previously found to be formed by rat P450 1A1, 15,19 was not generated by the human P450 1A1 orthologue (Figure 2). This finding is in line with previous findings showing that this BaP metabolite was not formed in human bronchoalveolar H358 cells expressing P450 1A1 after BaP exposure. 13 Interestingly, essentially no differences in the levels of dG-N 2 -BPDE adducts were seen when using either NADPH or NADH as cofactors in the P450 1A1-Supersomes™ system (see Figure 4). However, it should be noted that the oxidation reaction of BaP to its metabolites was catalyzed less efficiently by NADH than by NADPH in this system (see Figure 2). In summary, this study demonstrates for the first time that NADH, CBR and cytochrome b 5 can act as sole electron donors for both the first and second reduction of human P450 1A1 during the oxidative metabolism of BaP and formation of BaP-DNA adducts in vitro. These findings confirm our results of a recent study where rat P450 1A1 was utilized to study BaP metabolism. 15
However, although the role of the NADH/cytochrome b 5 /CBR system in both reductions of rat 15 and human (present work) P450 1A1 is proven by our in vitro studies, further investigations are needed in the future. In this context, the mechanism of both the first and second reduction of P450 1A1 remains to be examined in detail. In addition, as shown by Guengerich and coworkers for the P450 3A4-mediated 6-hydroxylation of testosterone, 57 the question whether the NADH/cytochrome b 5 /CBR system might also reduce other P450 enzymes needs to be addressed. This might be the case in rat hepatic microsomes where various P450s were induced by their specific inducers; in these microsomes, BaP was oxidized not only in the presence of NADPH, but also in the presence of NADH. 15 Another crucial question that remains to be addressed relates to the impact of the NADH/cytochrome b 5 /CBR system on BaP metabolism in vivo. Recent in vivo experiments using cytochrome b 5 -knockout mouse lines (i.e. HBN and HBRN) provide evidence that in the absence of POR, cytochrome b 5 /CBR are capable of supplying electrons for P450 catalytic function (i.e. in metabolism of the P450 3A substrate midazolam). 27 It is anticipated that these mouse lines will help to elucidate the different mechanisms of P450-catalyzed BaP biotransformation in vivo.
Notes
The authors declare no competing financial interest.
Description of the Supporting Information material
This material is available free of charge via the Internet at http://pubs.acs.org.
Gender and Educational Variation in How Temporal Dimensions of Paid Work Affect Parental Child Care Time
Using the 2017–2018 American Time Use Survey, the authors investigate how a comprehensive set of temporal conditions of paid work affects parental child care time, with attention to gender and education. Temporal work conditions include access to leave, inflexible start and end times, short advance notice of work schedules, types of work shifts, and usual days worked. Among mothers, the only significant relationship is between usual days worked and routine care time. Among fathers, lacking access to paid leave and having inflexible start and end times are associated with reduced routine care time, and working on variable days of the week is related to less developmental care time. Temporal work conditions also shape the educational gap in parental child care time. Importantly, nonstandard shifts and working on weekends widen the educational gradient in mothers’ developmental care time. The findings imply that temporal work conditions amplify gender inequality in work-family lives and families as agents of class reproduction.
The increase in demands from work time and work schedules has raised the question, "Can employed parents make time for children?" (Presser 1989). Given the critical influence of parental involvement on child development (Kalil 2015;Waldfogel and Washbrook 2011), how temporal work conditions shape parents' child care time has far-reaching implications for child well-being.
In this study, we use nationally representative time-diary data to investigate how a comprehensive set of temporal work conditions affects mothers' and fathers' child care time. We draw on the American Time Use Survey (ATUS) 2017-2018 Leave and Job Flexibilities Module to examine routine and developmental child care time among employed parents with children younger than 13 years. Because parental ability to control their time to manage competing work-family demands varies with social class (Gerstel and Clawson 2018), we consider how associations between temporal work conditions and parental child care time are conditioned by education. Education is a powerful predictor of both access to job resources and parental time investments in children (Altintas 2016;Gerstel and Clawson 2015;Schneider, Hastings, and LaBriola 2018).
This study makes several contributions to the existing literature. First, although prior research has examined temporal dimensions of work and parental child care time in the United States, most studies have focused on only one dimension, without considering temporal dimensions more comprehensively (Davis et al. 2015;Fox et al. 2013;Hill et al. 2013;Kim 2020;Noonan, Estes, and Glass 2007;Sayer and Gornick 2012;Wight, Raley, and Bianchi 2008). Parental child care time is influenced by competing time demands from paid work and also constrained by work-schedule rigidity and instability. Additionally, temporal dimensions of employment are not mutually exclusive (Estes 2005;Presser 1989). For instance, some parents work at night with schedule flexibility. Flexible schedules and nonstandard work shifts may enhance parents' ability to "time shift" or coordinate availability of their time with times when children require care, thus theoretically increasing child care time. By contrast, other parents work on weekends or variable days with schedule unpredictability, which might make it harder to coordinate parental time availability and work schedules with times when children require care, thus reducing child care time. In this study, we leverage rich information on work time, work schedules, and their flexibility, stability, and predictability to better isolate how each temporal dimension of employment is associated with parental child care time.
Second, because work-family experiences vary by gender (Perry-Jenkins and Gerstel 2020), we investigate whether time "binds" between work and family responsibilities are experienced more strongly among mothers or among fathers. Gendered patterns of child care influence gendered employment and relational outcomes (Goldin 2021). Work demands are often cited as barriers to parental (especially father) involvement in child care activities (Kelly and Moen 2020; Roeters, Van Der Lippe, and Kluwer 2009). Yet research indicates that women use flexible schedules to reduce time conflicts between employment and care (Chung and Van der Horst 2018; Kim 2020), whereas findings are more mixed for fathers. Two studies revealed that fathers more often used schedule flexibility to increase paid work and personal time (Chung and Van der Horst 2020; Sharpe, Hermsen, and Billings 2002), but one study showed that schedule flexibility increased fathers' daily interactions with children (Kim 2020). The mixed evidence for fathers from these studies may stem from their singular focus on job flexibility rather than a more comprehensive consideration of temporal work conditions. Given that men's involvement in the private sphere is crucial to completing the gender revolution (Goldscheider, Bernhardt, and Lappegård 2015), our findings regarding mothers' and fathers' child care time in the changing economy shed light on the future of gender equality.
Third, although temporal conditions of employment differ by workers' education (Gerstel and Clawson 2015, 2018; Kalleberg 2011), less is known about educational variation in the associations of temporal work conditions with child care time (Perry-Jenkins and Gerstel 2020). We advance the literature by identifying which specific temporal work conditions intersect with education in affecting child care time. Furthermore, we separately consider influences on routine and developmental child care. College-educated mothers spend more total time on child care activities, particularly developmental activities, compared with non-college-educated mothers (Altintas 2016; Hsin and Felfe 2014). Because parents across social classes espouse ideologies of intensive mothering (Ishizuka 2019), the educational gradient in child care time has been attributed in part to the higher likelihood of less educated employed parents holding "bad" jobs. The idea is that less educated parents' job schedule instability and nonstandard work shifts reduce their time available when children need care (Gerstel and Clawson 2018; Prickett and Augustine 2021). This suggests that regardless of education, parents in jobs with unfavorable temporal conditions (lack of job flexibility, nonstandard work shifts) may not be able to protect child care time to the same extent as parents in "good" jobs. It is also possible, however, that parents with more education reduce other time constraints (e.g., by outsourcing and using technology to reduce time necessary for housework or travel), so their child care time is less affected by temporal work conditions. Educational gradients in parental time investments in children are one aspect of how families reproduce intergenerational advantage (Kalil 2015; Schneider et al. 2018; Waldfogel and Washbrook 2011).
Determining whether temporal conditions of work affect child care time in similar or distinct ways by parental education is necessary to further understanding of the role of family in the reproduction of social inequality.
The Job Demands-Resources Model
This study draws on the job demands-resources (JD-R) model. This model posits that job characteristics profoundly influence worker well-being (Bakker, Demerouti, and Sanz-Vergel 2014). Regardless of occupational settings, job characteristics generally can be categorized into two types: job demands and job resources. Job demands deplete employees' energy to fulfill work-related requirements and create substantial physiological and psychological costs for workers (Bakker and Demerouti 2007). Job resources, by contrast, stimulate workers' motivation, personal growth, and accomplishment of work goals, and mitigate job demands and associated negative consequences (Bakker and Demerouti 2007). Although the JD-R model originally was developed to understand the impact of job characteristics on work-related outcomes, such as burnout and work engagement (Bakker et al. 2014), scholars have applied it to investigate how demands and resources of the job shape workers' outcomes in the family domain (Bakker et al. 2011;Hook, Ruppanner, and Casper 2022;Kelly et al. 2014;Minnotte 2016). According to the JD-R model, job resources (e.g., autonomy on the job, employees' control over work schedules) increase workers' ability to accomplish work tasks and still fulfill family responsibilities, thus reducing stress and presumably increasing time available for care work (Kelly et al. 2014). By contrast, job demands increase parents' strain, spill over to employees' energies outside work and constrain the time they could devote to child care (Hook et al. 2022).
Temporal Conditions of Work as Job Demands or Resources
Temporal dimensions of employment that represent job demands and job resources include conditions of work time and work schedules (Gerstel and Clawson 2015; Schneider and Harknett 2019). Working long hours and nonstandard times is commonplace in the U.S. workforce (Hamermesh and Stancanelli 2015). The Fair Labor Standards Act defines 40 hours to be a standard workweek, but half of full-time workers worked more than 40 hours a week in 2013 and 2014 (Saad 2014). In addition, nonstandard work hours outside the 9-to-5 Monday-to-Friday schedule have become pervasive in the 24/7 economy (Gerstel and Clawson 2018;Presser 2005). National data from 2003 through 2011 showed that 34 percent of employees worked on weekends (Hamermesh and Stancanelli 2015). In 2017 and 2018, 16 percent of wage and salary workers did not work a regular day shift, and 35 percent of workers learned their work schedules less than two weeks in advance (Bureau of Labor Statistics 2019b). Furthermore, a large share of workers face rigid workplace schedules that often lead to work-life conflict (Schieman, Glavin, and Milkie 2009). About half of workers cannot adjust their starting and ending times of work (Bureau of Labor Statistics 2019b; Kim 2020). Nearly 40 percent of wage and salaried employees in a 2002 national survey reported that it was somewhat or very hard to take time off during the workday for personal or family reasons, and 54 percent of those with children reported that they had no paid leave allowing them to care for sick children (Galinsky et al. 2010).
Temporal work conditions as job demands or resources could spill over beyond the workplace to affect nonwork outcomes such as parental care of children (Minnotte 2016). For example, working longer hours reduces time available for child care (Kelly and Moen 2020; Sayer and Gornick 2012), but studies have rarely considered other temporal aspects of work. We consider multiple temporal work conditions to determine influences of work schedule conflicts and time conflicts on parental child care time. In addition to long work hours, nonstandard, irregular, and unpredictable employment schedules are also job demands that likely increase time conflicts and interfere with family responsibilities (Harknett, Schneider, and Luhr 2022; Schieman et al. 2009). These demanding work schedules can limit parents' ability to align available time with activities of other individuals and with temporal rhythms of schools, child care centers, and children's events (Hill et al. 2013). For example, parents working nonstandard evening hours face more challenges than those working a standard daytime shift to care for or interact with their children during the after-school hours when children are awake (Wight et al. 2008). On-call or last-minute shifts also make it harder for parents working in low-wage jobs to secure child care arrangements.
Employee-controlled schedule flexibility is a job resource that offers workers autonomy (Bakker and Demerouti 2007;Kelly et al. 2014;Kim 2020). Two main forms of schedule flexibility are flextime and job leaves. Flextime allows workers to change the start and end times of their workday, and job leaves enable workers to take time off to meet their personal or family needs (Galinsky et al. 2010;Glass and Estes 1997). Voydanoff (2005) conceptualized schedule flexibility as a boundary-spanning resource, which connects the work and family domains, enhances the boundary flexibility between the two domains, and thus helps workers coordinate their activities in both domains and increases workfamily balance. Qualitative evidence suggests that schedule flexibility decreases problems coordinating time availability for time-sensitive child care, such as attending school events that happen during the workday, dropping off or picking up children at child care centers or their events, and being home for children outside school hours (Estes 2005). Therefore, schedule flexibility may mitigate employment constraints on care work and thus increase parental child care time (Davis et al. 2015).
There are several limitations of existing studies examining the impact of temporal work conditions on child care time among U.S. workers. First, some studies were based on nonrepresentative samples and reached inconclusive findings. For example, two studies showed that flextime was associated with increased child care time for parents (Davis et al. 2015; Estes 2005), whereas others did not demonstrate a significant association (Hill et al. 2013; Noonan et al. 2007). Second, three studies used nationally representative time-diary data from the ATUS, but they all used older data collected in 2003 and/or 2004 and mostly examined selected temporal aspects of work. Sayer and Gornick (2012) examined employment hours and found that parents working longer hours generally spent less time on child care activities. Wight et al. (2008) examined nonstandard shifts and showed that compared with their same-gender counterparts working day shifts, mothers working evening shifts performed less routine child care, whereas fathers working nonday shifts did more routine child care. In addition, parents working evening shifts were less likely than those working day shifts to engage in education-related child care activities (Wight et al. 2008). Genadek and Hill (2017) investigated multiple temporal conditions of work, including flextime, working a regular daytime schedule, working a variable schedule, and working after 6 p.m. They found that fathers with variable schedules spent less time with children, whereas working after 6 p.m. was associated with more child care time for mothers. Genadek and Hill, however, used the ATUS linked to the Current Population Survey to examine a small sample of parents (237 mothers and 294 fathers). The scheduling arrangements and child care time were measured several months or a year apart, which may account for the lack of association between flexible or daytime schedules and parental child care time.
In the present study, we address these limitations using the ATUS 2017-2018 Leave and Job Flexibilities Module to examine how mothers' and fathers' child care time varies across a comprehensive set of temporal work conditions. This allows us to determine if work amount, timing, and control have independent associations with child care time and, if so, which is more consequential. We focus on a gender comparison between mothers and fathers, and we advance prior research by investigating how the relationships between multiple temporal aspects of work and child care time differ by education. Maternal nonstandard, unstable, and unpredictable work schedules are associated with lower cognitive and behavioral wellbeing among children (Han 2005; Schneider and Harknett 2022) and higher parental strain and distress (Nomaguchi and Milkie 2020). Identifying more precisely which temporal conditions of work affect parental child care time, and how this varies by gender and education, is necessary to more fully understand the costs and benefits of employment for the well-being of parents and children. Our findings will also advance understanding of family transmission of advantage and disadvantage.
Temporal Conditions of Work and Child Care Time: Gender Variation
The gender perspective contends that the division of labor is based on demarcating "men's" time from "women's" time (Twiggs, McQuillan, and Ferree 1999). Although the JD-R model predicts that child care time generally increases with job resources and decreases with job demands, we expect that mothers' child care time is less influenced by temporal conditions of work than is fathers' child care time. The intensive mothering norm places women's devotion to the family above other commitments (Hays 1996). Irrespective of employment conditions, women are expected to be actively involved in "labor-consuming child rearing" (Hays 1996:4). Notably, despite increases in the labor force participation of women with young children, their time spent on child care nearly doubled from 1975 to 2010. Given that mothers find ways (though not easily) to work around their job schedules to accommodate children's needs and perform intensive mothering (Bianchi 2000; Wight et al. 2008), their child care (especially developmental care) time may be less sensitive to the temporal aspects of their job.
In contrast to intensive mothering that requires substantial maternal time investments in children, good fathering is more closely tied to men's breadwinning abilities than to their provision of time-intensive daily caregiving (Townsend 2002). If fathers work at jobs with long hours or those with inflexible, unpredictable, or nonstandard schedules, they may need to devote greater effort to meeting work demands, sapping time and energy for family life. Although fathers' child care time has increased in recent decades (Bianchi et al. 2012), their child care activities are often perceived as discretionary "helping" activities that are subsidiary to employment (Gerstel and Clawson 2014). Thus, fathers' child care time, especially time devoted to routine child care, may be more sensitive to the demands and resources associated with the temporal conditions of their job.
Temporal Conditions of Work and Child Care Time: Educational Variation
In today's bifurcated economy, educational attainment increasingly demarcates the hours people work and the quality of jobs they hold. Highly educated people tend to work in "good" jobs-professional or managerial jobs with autonomy, generous benefits, and more supervisor support-but also typically work longer hours (Kalleberg 2011; Kelly and Moen 2020). Less educated workers tend to work at precarious "bad" jobs, characterized by less than full-time hours, scant if any benefits, and high levels of routine schedule instability (Kalleberg 2011; Schneider and Harknett 2019). Job quality can shape the time parents devote to developmental child care activities and ultimately affect child well-being (Schneider and Harknett 2022).
Temporal aspects of work may have distinct meanings and implications depending on workers' education. For highly educated workers, flexibility tends to be employee driven and indicates worker autonomy and control over time (Gerstel and Clawson 2015). Employee-driven flexibility may offer resources that help workers meet family needs (e.g., spending time with children) and professional goals (Davis et al. 2015; Kelly and Moen 2020). For less educated workers, however, routine instability in their work schedules is increasingly a way for employers to minimize labor costs and offload risks onto workers, thereby representing a type of employer-driven flexibility (Gerstel and Clawson 2015; Schneider and Harknett 2019). These less educated workers often have very little input into their work schedules and are expected to be available for last-minute shifts and adjust other aspects of their lives around uncertain hours that vary from week to week. One study of parents working in the retail and food service sector finds that parents who work on-call shifts and those with variable work schedules must rely on multiple child care arrangements, including sibling and self-care of children, because parents are not able to count on being available to care for their children.
Approaches to parenting differ by education in ways that may yield different associations of temporal work conditions with child care time. A college degree is particularly important for shaping parental child care time (Sayer 2016). College-educated fathers and mothers spend more time on child care than their less educated counterparts (Prickett and Augustine 2021; Sayer 2016). Although parents regardless of social class espouse intensive parenting norms (Ishizuka 2019), college-educated parents may be more able than their less educated counterparts to invest in their children and perform time-intensive child rearing, and thus facilitate intergenerational transmission of class advantages (Lareau 2003). College-educated parents may also be able to leverage job resources, such as schedule flexibility and access to leave, to time-shift their availability to correspond with children's availability.
Although the educational gradient in parental child care time is well established, much less is known about how education and temporal aspects of work intersect to shape child care time. Studies consider only work hours as a temporal work condition and offer mixed evidence about how work hours differentially affect college-educated and less educated mothers' child care time. A study using child time diaries from the Panel Study of Income Dynamics revealed a negative association of work hours with child care only for non-college-educated mothers (Hsin and Felfe 2014), but a study using ATUS parent time diaries showed that longer work hours are associated with a greater reduction in child care time for college-educated mothers (Gupta, Sayer, and Pearlman 2021). Open empirical questions remain about how education and other temporal work conditions jointly affect child care time and if results differ for mothers' and fathers' time.
Bakker and Demerouti (2007) posited that incorporating personal resources in the JD-R model is an important extension of the model. Work demands and the lack or loss of job resources may be less consequential for individuals with greater personal resources (Bakker and Demerouti 2007; Kim 2020). Given social and economic returns to college education (Hout 2012), highly educated parents likely have more resources outside work to mitigate the negative impact of job demands on their family life and to better protect their time with children. Considering that highly educated parents emphasize concerted cultivation and developmental activities in parenting (Hsin and Felfe 2014;Lareau 2003), they may use resources afforded by their education (e.g., money to outsource housework and use of time-saving technologies and services) to ensure that they are able to devote more time to developmental child care activities. By contrast, less educated parents may be doubly disadvantaged when they are faced with demanding temporal conditions of employment and limited access to economic, technological, or informational resources. In short, we expect that educational differences in child care (especially developmental care) time may be heightened by temporal demands of work.
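The moderation hypothesis developed above maps onto an interaction term in a regression of child care time on a temporal work condition and education. The sketch below is purely illustrative: it uses synthetic, noise-free data with made-up coefficients (not ATUS data and not the authors' actual model) to show how the interaction coefficient captures whether a work condition widens or narrows the educational gap:

```python
# Hypothetical illustration of an education-by-work-condition interaction:
#   care = b0 + b1*shift + b2*college + b3*(shift * college)
# b3 measures how a nonstandard shift changes the college/non-college gap
# in child care minutes. All data and coefficients below are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 400
shift = rng.integers(0, 2, n)    # 1 = nonstandard shift
college = rng.integers(0, 2, n)  # 1 = college degree

# Noise-free outcome with known coefficients (minutes per day)
care = 60 - 10 * shift + 25 * college - 15 * shift * college

# Design matrix: intercept, main effects, interaction
X = np.column_stack([np.ones(n), shift, college, shift * college])
beta, *_ = np.linalg.lstsq(X, care.astype(float), rcond=None)
print(np.round(beta, 1))  # recovers [60, -10, 25, -15]
```

Here the negative interaction (−15) would mean nonstandard shifts shrink the educational gap in care time; a positive interaction would mean they widen it, which is the pattern the authors hypothesize for developmental care.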
Data
We use data from the ATUS 2017-2018 Leave and Job Flexibilities Module. The data were obtained online from the Integrated Public Use Microdata Series Time Use system (https://timeuse.ipums.org). As the first federally administered time-diary survey in the United States, the ATUS collects nationally representative data on how adults allocate time to all activities, including paid work and child care. The ATUS sample consists of noninstitutionalized U.S. residents ages 15 and older.
The ATUS 2017-2018 Leave and Job Flexibilities Module was designed to collect information on access to paid and unpaid work leave, access to job flexibility, and work schedules. Of the 19,816 ATUS respondents in 2017-2018, 10,554 employed wage and salary workers were eligible for the module. Of those eligible workers, 10,071 respondents completed the module. Although there was also a leave module in 2011, the questions we use to assess temporal work conditions are markedly different and not directly comparable between the 2011 and 2017-2018 modules. Thus, we do not include respondents from the 2011 leave module.
Of the 10,071 employed workers who completed the Leave and Job Flexibilities Module, we limit our sample to 1,807 women and 1,721 men with at least one own child younger than 13 years living in the household, because we are interested in parents whose child-rearing demands are the most intense and for whom the child care implications of temporal work conditions are likely greatest.
Measures

Dependent Variables
We follow previous research (Hook et al. 2022; Raley et al. 2012) to measure parental child care time in minutes during the diary day when parents report doing two specific types of activities to care for or help any child younger than 18 years in the household. The measures capture the time parents directly engage in caregiving activities, with a child being the main focus of the activity:
1. Routine child care, which includes the everyday physical care required to ensure that children are fed, groomed, and getting adequate sleep and necessary medical care; general supervision and monitoring of children; organization and planning for children; waiting for and transporting children; and coordinating child care services.
2. Developmental child care, which includes interactive activities, such as playing, reading, talking, and doing arts and crafts with children; helping or teaching children; attending children's events (e.g., recital, school play); and all activities related to children's education (including attending meetings and school conferences).
We code related travel with the type of care. For instance, travel related to children's health would be coded with routine child care, whereas travel related to children's education would be coded with developmental child care (see Appendix Section 1 for details about creating the two measures of parental child care time in the Integrated Public Use Microdata Series Time Use system).
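As a concrete illustration, diary episodes can be classified into the two care types and summed per diary day along the following lines. This sketch is ours, and the six-digit activity codes below are illustrative placeholders, not the paper's actual Appendix Section 1 code list:

```python
# Classify time-diary activity episodes into routine vs. developmental child
# care and sum minutes per diary day. Codes shown are illustrative only.

ROUTINE_CODES = {"030101", "030108", "030109", "030111", "030112"}        # e.g., physical care, planning, supervision, waiting, transport
DEVELOPMENTAL_CODES = {"030102", "030103", "030104", "030106", "030110"}  # e.g., reading, play, arts/crafts, talking, attending events

def child_care_minutes(episodes):
    """episodes: list of (activity_code, duration_minutes) for one diary day."""
    totals = {"routine": 0, "developmental": 0}
    for code, minutes in episodes:
        if code in ROUTINE_CODES:
            totals["routine"] += minutes
        elif code in DEVELOPMENTAL_CODES:
            totals["developmental"] += minutes
    return totals

# One hypothetical diary day: physical care, play, transporting a child, paid work
diary = [("030101", 40), ("030103", 25), ("030112", 15), ("050101", 480)]
print(child_care_minutes(diary))  # {'routine': 55, 'developmental': 25}
```

Non-care activities (such as the paid-work episode above) simply fall through the classification and are ignored.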
Independent Variables
Temporal conditions of paid work are measured using five variables:
1. Access to leave, a form of schedule flexibility (Galinsky et al. 2010; Glass and Estes 1997), is measured through three dummy variables: no access to leave, access to unpaid leave on job only, and access to paid leave on job regardless of receiving unpaid leave or not.
2. Types of work shift are from responses to a question about whether respondents usually worked a daytime schedule or some other schedule on their job. A standard day shift is usually distinguished from various nonstandard shifts (Schneider and Harknett 2019; Wight et al. 2008). We therefore categorize work shifts into three types: a regular day shift, a regular evening or night shift, and other shifts (rotating shift, split shift, irregular schedule, or others).
3. Usual days worked are from responses to a question about which days of the week respondents usually worked. We categorize responses into three dummy variables: usually working on weekdays only, usually working on weekends, and usual days worked vary.

4. Inflexible start and end times, an indicator of schedule inflexibility, are derived from the question "Do you have flexible work hours that allow you to vary or make changes in the times you begin and end work?" We create a dummy variable coded 1 if respondents answered "no" and 0 if respondents answered "yes."
5. Short advance notice, a form of schedule unpredictability and instability (Schneider and Harknett 2019), is measured using the question "How far in advance do you know your work schedule?" We create a dummy variable, with 1 indicating having less than two weeks' advance notice and 0 indicating at least two weeks' advance notice (Schneider and Harknett 2019).
To facilitate interpretation, for all temporal conditions of work, we code indicators of "good jobs" as the reference category, namely, access to paid leave, a regular day shift, working on weekdays only, flextime (flexible start and end times), and at least two weeks' advance notice of work schedules.
Educational variation in the relationship between temporal conditions of employment and parental child care time is also a focus. Education is measured using a dummy variable indicating whether respondents are college graduates or not (1 = yes, 0 = no) (Sayer 2016).
Control Variables
Our control variables include those documented to affect child care time (Sayer 2016; Sayer et al. 2004). We control for work hours, a well-studied temporal dimension of employment (Sayer and Gornick 2012). In addition to the three typical categories (Bianchi 2000; Sayer and Gornick 2012), part-time hours (<35 hours), standard full-time hours (35-40 hours), and long full-time hours (≥41 hours), we include a separate indicator to capture those who work full-time but whose work hours vary. In supplementary analyses, we also examined interactions between work hours and education (see note 1). Partnership status is measured using three dummy variables: spouse present, unmarried partner present, and no spouse or partner present. Age is measured in years as a continuous variable. Race is categorized into four groups: non-Hispanic White, non-Hispanic Black, Hispanic, and other racial/ethnic groups (combined because there are not enough cases to separate respondents who identify as Asian, Native American, and multiracial). Working in professional or managerial occupations has implications for job demands and resources experienced by workers (Kalleberg 2011). We thus control for occupation with a dummy variable coded 1 indicating professional and managerial occupations and 0 otherwise. Because child care demands vary with the number and age of children in the household (Raley et al. 2012), we control for the number of children younger than 18 years and the age of the youngest child. We top-code the number of children at four because only 1.3 percent of respondents had five or more children. Following Gershenson (2013), we include a dummy variable to measure whether the diary day is in the summer months of June, July, and August (1 = yes, 0 = no), because parental child care time tends to differ between summertime and other times. We also control for whether the diary day is on the weekend (1 = yes, 0 = no).

Note 1: The relationship between work hours and parental child care time revealed in our study is consistent with prior research (Kim 2020; Sayer and Gornick 2012). Part-time employed mothers devote more time to both routine and developmental child care, compared with mothers working standard full-time. Child care time is similar between part-time and full-time employed fathers, but routine care time of fathers working long full-time hours is lower than that of fathers working standard full-time hours. When examining the interaction between work hours and education in predicting parental child care time, we found no significant results except that less educated, part-time employed fathers spent less time on developmental child care than all other fathers. We nevertheless caution against overinterpretation of this result because only 31 less educated fathers and 20 college-educated fathers in our sample worked part-time hours.
Analytic Strategies
We use ordinary least squares regression models to predict average minutes per day mothers and fathers spend doing child care activities. All analyses are performed separately for mothers and fathers (Sayer et al. 2004;Wight et al. 2008). All regression models are weighted to account for survey design and the minimal nonresponse, and 160 replicate weights are used to generate standard errors (Bureau of Labor Statistics 2019a).
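To illustrate how replicate weights produce standard errors, the sketch below applies a successive-difference-replication variance formula of the kind the BLS documents for the ATUS; the 4/R scaling factor and the simulated replicate estimates are stated here as assumptions for illustration, not taken from the paper:

```python
import numpy as np

def replicate_se(theta_full, theta_reps, factor=4.0):
    """Standard error of an estimate from replicate-weight estimates.

    theta_full: estimate computed with the final survey weight.
    theta_reps: the estimates recomputed under each of the replicate
    weights (160 for this module). The factor/R scaling follows the
    successive-difference-replication formula (treated as an assumption
    here)."""
    theta_reps = np.asarray(theta_reps, dtype=float)
    r = len(theta_reps)
    var = (factor / r) * np.sum((theta_reps - theta_full) ** 2)
    return float(np.sqrt(var))

# Toy illustration: 160 simulated replicate estimates of a coefficient of 20
rng = np.random.default_rng(0)
reps = 20.0 + rng.normal(0.0, 2.0, size=160)
print(round(replicate_se(20.0, reps), 2))
```

In practice each replicate estimate comes from refitting the full regression with one replicate weight in place of the final weight.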
Our analyses are conducted in two steps. First, we include the temporal conditions of employment, accounting for education and control variables in the model. This analysis allows us to evaluate whether parental child care time is associated with temporal aspects of paid work, net of other characteristics. By examining the correlation matrix and variance inflation factors, we have confirmed that our models do not suffer multicollinearity problems when we add all measures of temporal work conditions at once. Second, we add interaction terms between education and temporal dimensions of work. We fit separate models by adding only one set of interaction term(s) each time (e.g., interactions between education and access to leave from work in the first model, those between education and types of work shift in the second model). This analysis illuminates whether temporal conditions of paid work have different impacts on child care time of college-educated and less educated parents. Data and code for this article are available online: https://osf.io/7cu6v/.

(73 percent). The data also indicate that many parents experience temporal work condition instability. For nearly 30 percent of parents, their usual days worked either involve weekends or vary. Furthermore, about half of mothers (45 percent) and fathers (42 percent) have inflexible start and stop times of work. A quarter of mothers and nearly two fifths of fathers have less than two weeks' advance notice of their work schedules.
Ordinary Least Squares Regression Results: Temporal Conditions of Work and Parental Child Care Time
Next, we turn to ordinary least squares regression models in Table 2 to examine the relationship between temporal dimensions of employment and parental child care time. Because child care time is measured by minutes per day, regression coefficients indicate differences in average daily minutes mothers and fathers spend on child care activities across temporal conditions of work. Table 2 shows that access to leave from work and flextime are significantly associated with fathers' routine child care time. Specifically, compared with fathers who have access to paid leave, routine child care time is 20 minutes lower for fathers who have no access to any paid or unpaid leave (p < .001) and 9 minutes lower for those with access to only unpaid leave (p < .05). Routine child care time of fathers with inflexible start and end times is 8 minutes lower than that of fathers with flextime (p < .05). This result based on time-diary data corresponds with non-time-diary research, which reveals that the availability of flextime arrangements is associated with a higher frequency of daily routine parent-child interactions for fathers (Kim 2020).
Usual days worked are significantly associated with both mothers' and fathers' child care time but in gender-differentiated ways (three of the four gender differences in coefficients are significant at the .01-.10 level). Compared with mothers who usually work on weekdays only, time in routine child care activities is 17 minutes lower among mothers who usually work on weekends (p < .05). Having usual days worked that vary is, however, associated with more maternal time in routine child care (b = 31.522, p < .05). Different patterns emerge among fathers: working on variable days of the week, as opposed to usually working on weekdays only, is associated with less paternal time in developmental child care (b = −12.947, p < .05).
Two temporal work conditions that indicate work-schedule instability and unpredictability, working a shift other than a regular day shift and having less than two weeks' advance notice, are not significantly related to either mothers' or fathers' child care time. Overall, across two types of child care activities and five temporal aspects of work, the only significant relationship among mothers is between usual days worked and routine care time: compared with working on weekdays only, variable work days increase mothers' routine child care time, whereas working on weekends negatively affects mothers' routine child care time. By comparison, among fathers, lacking access to paid leave and having inflexible start and end times are associated with reduced routine care time, and working on variable days of the week is related to less developmental care time. The results suggest that mothers' developmental child care time is least sensitive to temporal conditions of employment.

Parental child care time is also shaped by parents' educational level in gendered ways. Even holding other variables constant, college-educated mothers spend more time (12 minutes more) on developmental care than less educated mothers (p < .05). College-educated fathers spend more time (14 minutes more) on routine child care than less educated fathers (p < .001).
Ordinary Least Squares Regression Results: Educational Variation
Next, we fit models with interaction terms between education and temporal aspects of work.
To facilitate interpretation, we graphically present predicted daily child care time in minutes for significant results in Figures 1 and 2, with other covariates set at their means. Models with significant interaction terms are presented in Appendix Section 2, and results from the other models with nonsignificant interaction terms are presented in Appendix Section 3.
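The predicted values plotted in the figures can, in principle, be reproduced from the fitted interaction model by holding other covariates at their means. The sketch below shows the mechanics with made-up coefficients (not the paper's estimates); variable names are ours:

```python
# Predicted child care minutes by education x work shift from an OLS model
# with an interaction, other covariates held at their means. Coefficient
# values are invented for illustration only.

b = {
    "intercept": 30.0, "college": 10.0,
    "evening_shift": -5.0, "other_shift": -24.0,
    "college_x_evening": 12.0, "college_x_other": 32.0,
    "controls_at_means": 7.6,  # combined contribution of controls at their means
}

def predict(college, shift):
    """college: 0/1; shift: 'day' (reference), 'evening', or 'other'."""
    y = b["intercept"] + b["controls_at_means"] + b["college"] * college
    if shift == "evening":
        y += b["evening_shift"] + b["college_x_evening"] * college
    elif shift == "other":
        y += b["other_shift"] + b["college_x_other"] * college
    return y

for edu in (0, 1):
    for shift in ("day", "evening", "other"):
        print(edu, shift, round(predict(edu, shift), 1))
```

The education gap at each shift type is simply the difference between the two predictions at that shift, which is how the gaps reported below are computed.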
Working nonstandard shifts and working on weekends widen educational gaps in mothers' developmental child care time. As shown in Figure 1A, among mothers who work a regular day shift, predicted developmental child care time is 37.61 minutes for less educated mothers and 47.48 minutes for college-educated mothers, a small difference of 10 minutes (p = .058). Among mothers who work a regular evening/night shift, the difference in developmental child care time between college-educated and less educated mothers is slightly larger (27 minutes) but not statistically significant, perhaps because of the small sample size (p = .106). Among mothers working nonstandard shifts, the educational gap is larger, at 42 minutes (13.69 minutes among less educated mothers vs. 55.89 minutes among college-educated mothers, p = .025).
Importantly, working other nonstandard shifts, as opposed to a regular day shift, negatively affects developmental child care time only among less educated mothers. Non-college-educated mothers who work regular day shifts report triple the time in developmental child care compared with those who work nonstandard shifts (37.61 vs. 13.69 minutes, p = .016).
By contrast, college-educated mothers' developmental child care time does not significantly differ across the three types of work shift. The results are consistent with our expectation that the child care "costs" of work schedules affect less educated mothers more strongly.

Figure 1B shows that the educational gap in mothers' developmental child care time is larger among those usually working on weekends than those working on weekdays only. Among mothers who work on weekdays only, the gap in developmental care time between less educated and college-educated mothers is only 8 minutes and not statistically significant (37.77 vs. 45.61 minutes, p = .126). Among mothers who usually work on weekends, less educated mothers spend 23.37 minutes on developmental child care, whereas college-educated mothers spend 61.91 minutes, resulting in a difference of 39 minutes (p = .002). Among mothers whose usual days worked vary, although college-educated mothers spend more time on developmental child care than less educated mothers (56.89 vs. 46.63 minutes), the difference of 10 minutes is not statistically significant (p = .445). Thus, compared with working on weekdays only, working on weekends widens the educational gap in mothers' developmental child care time by 31 minutes per day (p = .013).

As for fathers, type of work shift and advance notice of work schedules interact with education to shape routine child care. The results, as we elaborate on later, are somewhat surprising because job demands arising from scheduling arrangements appear to narrow or even reverse the educational gap in fathers' routine child care time. Figure 2A shows fathers' routine child care time by education and type of work shift. Among fathers who work a regular day shift, college-educated fathers spend 45.71 minutes on routine care, significantly higher than their less educated counterparts, who spend 29.39 minutes (p < .001).
The gap in routine care time between college-educated and less educated fathers narrows to about 9 minutes and becomes nonsignificant among those who work a regular evening or night shift (36.14 vs. 27.55 minutes, p = .264). Among fathers who work other nonstandard shifts, college-educated fathers spend 31.01 minutes on routine care, lower than their less educated counterparts, who spend 50.97 minutes, but this difference between the two educational groups is not significant at the 0.05 level (p = .107). Compared with working a regular day shift, working other nonstandard shifts significantly reverses the educational gap in routine child care time from favoring college-educated fathers to favoring less educated fathers (p = .003).

Figure 2B shows that the educational difference in fathers' routine child care time differs by the advance notice of their work schedules. Among fathers with relatively predictable schedules (at least two weeks' advance notice), highly educated fathers spend significantly more time (16 minutes more) on routine child care than less educated fathers (47.93 vs. 29.94 minutes, p < .001). By contrast, among fathers with less than two weeks' advance notice, college-educated and less educated fathers do not differ significantly in their routine care time (38.75 vs. 33.18 minutes, p = .203). Thus, the educational gap in routine child care time favors college-educated fathers to a smaller extent when fathers have less than two weeks' advance notice of their work schedules.
Conclusions
Temporal conditions of paid work likely spill over to nonwork domains and influence parental child care time. Empirical questions remain as to which temporal work condition matters more and whether the influence differs between fathers and mothers or varies with parental education. We advance prior research by investigating how a comprehensive set of temporal work conditions shapes parental child care time. For mothers, usual days worked affect routine child care time, but access to leave from work, flextime, advance notice of work schedules, and types of work shift are not associated with time in either routine or developmental child care. Compared with usually working on weekdays only, nonstandard work arrangements such as usually working on weekends are associated with mothers' lower levels of routine child care time. Weekend jobs may take mothers' time away from their children during the days when schools and child care facilities are closed and more parental supervision or care is needed.
Somewhat unexpectedly, we find that working on variable days of the week is associated with mothers' higher levels of routine child care time, relative to working on weekdays only. Supplementary analysis shows that compared with mothers who usually work on weekdays only, mothers who work variable days are more likely to work part-time hours (38 percent vs. 20 percent) and are much less likely to work five days a week (42 percent vs. 86 percent). Thus, although variable work days could represent unpredictability and instability of employment schedules and thus increase job demands, it is possible that working on variable days of the week reflects schedule flexibility and that mothers work on those "flexible" jobs to better accommodate their child care responsibilities. It is also possible that variable work days are incompatible with nonparental child care arrangements, thus requiring higher levels of parental care. To better understand our finding, future research is needed about whether mothers working variable days have the flexibility or autonomy to decide during which days of the week they work and if this is related to child care responsibilities and ability to use nonparental child care.
Fathers' child care time is associated with three temporal dimensions of their job. Having no access to paid leave and lacking flextime arrangements are both associated with fathers' lower levels of routine child care time. Thus, when work and family interfere with each other, fathers appear to mostly reduce the routine care they provide to their children. Additionally, working variable days is significantly associated with fathers' reduced developmental care time. Unlike mothers working on variable days of the week, nearly 90 percent of fathers in our sample whose usual work days vary are full-time workers. Given that fathers rarely adjust their employment to accommodate child care needs (Raley et al. 2012), usual days worked that vary may indeed represent schedule unpredictability for fathers, increase their job demands, and hinder their ability to fit in the schedules of and participate in children's developmental activities.
These results are consistent with the gender perspective of understanding parental child care time. Although mothers' routine care time is responsive to temporal work conditions (like usual days worked), their developmental care time is much less so. In light of the pervasive intensive mothering norms, caring for children and engaging in activities that foster child development are critical to "being a good mother" under societal expectations (Hays 1996; Ishizuka 2019). Therefore, even with inflexible, unstable, or nonstandard work schedules, mothers may find ways to maximize their time spent with children, especially on developmental activities (Bianchi 2000; Prickett and Augustine 2021). By contrast, how much time fathers spend on developmental and routine child care is sensitive to the demands and resources of their job. Providing for the family is still more central to good fathering than being highly involved in child care (Townsend 2002). Thus, when faced with competing demands from work and family, fathers may tend to prioritize meeting job demands over contributing to child care. Fathers working on variable days of the week may cut down on providing developmental child care as they encounter more difficulty aligning their time availability with temporal rhythms of children's school and extracurricular activities (e.g., educational classes, school conferences, recreational activities). For fathers who work at jobs that require rigid start and end times and offer no access to paid leave, meeting job demands may interfere with family life and reduce fathers' time in routine child care, which is typically performed by mothers anyway and often seen as "optional" for fathers to take on (Gerstel and Clawson 2014).
We also find that temporal dimensions of work play a role in structuring educational disparities in parenting time. Less educated mothers who work nonstandard schedules (nonstandard shifts, usually working on weekends) spend less time on developmental child care activities than their college-educated counterparts, a difference that is much smaller or statistically nonsignificant among mothers who work standard schedules (a regular day shift, working on weekdays only). Not only do worse temporal conditions of work have a more negative impact on less educated mothers' child care time, but those mothers are also more likely to occupy jobs with worse conditions (Gerstel and Clawson 2015). Supplementary analysis (Appendix Section 4) shows that less educated mothers are three times as likely as college-educated mothers to usually work on weekends (24 percent vs. 8 percent) and twice as likely to work a shift other than a regular day/evening/night shift (6 percent vs. 3 percent). Thus, nonstandard aspects of work time may exacerbate educational differences in mothers' parenting time, leading to children's diverging destinies and growing family socioeconomic inequalities (McLanahan 2004; Schneider et al. 2018).
Two temporal work conditions seem to narrow the educational gap in fathers' routine child care time: short advance notice and nonstandard shifts. College-educated fathers spend more time on routine child care than their less educated counterparts only among those with at least two weeks' advance notice of their work schedules. This educational gap disappears among fathers with less than two weeks' advance notice. Similarly, working nonstandard shifts appears to narrow or even reverse the educational gap in fathers' routine child care time. Among those working a regular day shift, college-educated fathers spend more time on routine child care than less educated fathers, but this gap narrows to nonexistent among those working a regular evening or night shift and reverses to favor less educated fathers among those working other nonstandard shifts (e.g., rotating, split, or irregular shifts). It is worth noting that nearly 90 percent of fathers in our sample work regular day shifts. Our finding is nevertheless consistent with prior nonrepresentative research, which showed that nonstandard aspects of work time led to more child care time for male emergency medical technicians (working class, less educated) than for male doctors (middle class, highly educated) (Gerstel and Clawson 2014). Additional research is needed to understand why work shift (standard vs. nonstandard) and advance notice of work schedules (predictable vs. unpredictable) differentially influence college-educated and less educated fathers' involvement in routine care of children. Research is also needed using couple-level data on educational variation in how temporal work conditions affect child care time for both partners. Non-time-diary studies report that working-class partnered parents work split shifts to maximize parental time available for child care (Gerstel and Clawson 2014), but research is needed among other partnered parents.
This research has several limitations. First, given the cross-sectional nature of our data, we cannot establish the causal relationship between temporal conditions of work and parental child care time. It is possible that mothers, and to a lesser extent fathers, select into jobs with certain temporal conditions because of their child care responsibilities. Third, the mechanisms underlying the associations between temporal aspects of work and parental child care time are largely speculative. We draw on the JD-R model and prior research to conceptualize this study, but more qualitative research is needed to understand the ways in which schedules, as well as the flexibility, instability, and unpredictability of work time, affect parents' ability to engage in developmental and routine child care activities.
To conclude, our results show that temporal dimensions of employment are associated with parents' child care time in ways that differ by gender and education. Only usual days worked are associated with mothers' routine care time, whereas for fathers, no access to paid leave and inflexible start and end times are both related to reduced routine care time and variable work days are associated with less developmental care time. This finding suggests that reducing work-schedule demands has the potential to increase men's involvement in family life and ultimately contribute to the unstalling and completion of the gender revolution (Goldscheider et al. 2015). Furthermore, prior research shows that less educated workers are more likely than highly educated workers to hold jobs with inflexible, unpredictable, and nonstandard schedules (Gerstel and Clawson 2015, 2018). We extend prior research to show that short advance notice and nonstandard shifts appear to narrow or even reverse the educational gap in fathers' routine child care time. Because lack of advance notice of work schedules attenuates the educational gradient in fathers' routine child care time, whereas it does not affect the educational gradient in mothers' routine child care time, inequalities in which fathers get "good" versus "bad" jobs reduce the potential of less gendered parenting practices in day-to-day child care. Moreover, unstable and irregular scheduling arrangements, such as nonstandard shifts and weekend jobs, have a more negative impact on less educated mothers' engagement in developmental activities with children. Thus, inequalities in who gets "good" jobs and who gets "bad" jobs, as well as the differential impacts of temporal work conditions on parental child care time, especially on maternal time in developmental activities, may contribute to and further widen disparities in child well-being across social classes. 
Figure 1. Predicted developmental child care time (minutes/day) for mothers, by maternal education and type of work shift or usual days worked.

Figure 2. Predicted routine child care time (minutes/day) for fathers, by paternal education and type of work shift or advance notice of work schedules.
Workbench Control System Design Based on Mecanum Wheel
The Mecanum wheel has gained particular interest in the fields of robotics, vehicles, and workbenches because of its ability to move forward, sideways, rotate, and combine these motion patterns. This report details the design and construction of an omni-wheel robot control system, controlled by a Programmable Logic Controller (PLC), that achieves instantaneous motion in any direction using four coupled pairs of Mecanum wheels. A simulated control program has been developed to implement linear motion. Additionally, several advanced control methods have been reviewed, such as FPGA, NLP and TLC, which can be helpful for future modification and improvement.
Introduction
The omni-directional wheel combines longitudinal (forward or backward), lateral (left or right), and rotary (angled translation) force components into a single resultant force vector. It has been widely used in vehicles and robots, and the basic dynamic and kinematic models were summarized over the past twenty years [1]. However, most previous research has focused on small platforms with few degrees of freedom. For example, Asama et al. developed an omni-directional mobile robot in 1995 that used a decoupling drive mechanism to control motion in three degrees of freedom with three corresponding actuators in a decoupled manner [2]. Moreover, Diegel et al. compared conventional wheel designs with special universal wheels [3]. This paper therefore proposes an omni-directional mobile workbench with a steel frame structure and full mobility. It uses a programmable logic controller as the main control device, combined with pulse-width modulation of the motor output, to achieve flexible movement. In addition, this paper compares various control methods as a reference for future improvement and optimization of the system.
Mechanical components selection
In this paper, 8 Mecanum wheels are selected and arranged as 4 pairs of coupled wheels. The Mecanum wheel was originally invented by Ilon [4]; the wheel geometry is shown in Figure 1 below. The circular outline of the omni-directional wheel is formed by the shape of its rollers, which correspond to cutting a cylinder with a plane inclined at angle γ (γ = 45° for Mecanum wheels). Analytic geometry and dynamic analysis show that if the roller length Lr is much smaller than the wheel's external radius R, the roller profile can be approximated by a circular arc of radius 2R. In practice, to obtain a circular outline for the wheel, the minimum number of rollers should not be less than 8 [5]. Two sets of 10-inch Mecanum wheels have been selected. Each set of four wheels includes two right wheels and two left wheels, assembled with 12 rollers per wheel. Each roller is a heavy-duty wheel itself, with 2 steel GGB bushings riding on a 1/2" aluminum axle. These axles and rollers are sandwiched between two 0.125"-thick aluminum plates. The rollers are made of gray SBR rubber with 80A durometer.
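For reference, the standard inverse kinematics of a four-wheel Mecanum base with 45° rollers maps a desired body velocity to individual wheel speeds. The sketch below is generic, not the workbench's actual control code; the dimensions and the sign convention for left/right wheels are assumptions:

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.127, lx=0.3, ly=0.25):
    """Inverse kinematics for a 4-wheel Mecanum base with 45-degree rollers.

    vx: forward speed (m/s), vy: leftward speed (m/s), wz: yaw rate (rad/s);
    r: wheel radius (10 in ~ 0.127 m); lx, ly: half wheelbase and half track
    (placeholder dimensions). Returns wheel angular speeds (rad/s) in the
    order front-left, front-right, rear-left, rear-right; signs depend on
    the assumed mounting convention."""
    k = lx + ly
    return (
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    )

print(mecanum_wheel_speeds(0.5, 0.0, 0.0))  # pure forward: all four speeds equal
```

Pure sideways motion (vy only) makes diagonal wheel pairs spin in opposite directions, which is how the roller angle converts wheel rotation into lateral thrust.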
Based on a calculation of the total power required, four 350 W DC motors were selected, with a 24 V nominal voltage and a 15 A full-load current. Accordingly, eight 12 V lead-acid batteries were purchased and connected in four parallel groups to supply the motors. A set of spring dampers is used to reduce vibration and oscillation when uneven ground is encountered. In addition, steel drive shafts and connecting rings were built to make the paired omni-wheels move simultaneously. The final assembly of the table-like robot with 8 Mecanum wheels is shown in Figure 2.
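A quick power-budget check of these choices (assuming, as the 24 V motors suggest, that each of the four parallel groups is a series pair of two 12 V batteries; the series-pair arrangement is our inference, not stated in the text):

```python
# Electrical sizing sketch for the drive system described above.
N_MOTORS = 4
P_MOTOR_W = 350.0        # rated power per motor
I_MOTOR_FULL_A = 15.0    # full-load current per motor at 24 V

V_CELL = 12.0            # one lead-acid battery
CELLS_PER_STRING = 2     # assumed: two 12 V batteries in series -> 24 V
N_STRINGS = 4            # four such strings in parallel (8 batteries total)

v_pack = V_CELL * CELLS_PER_STRING   # 24 V bus voltage for the motors
p_total = N_MOTORS * P_MOTOR_W       # 1400 W total rated mechanical input
i_total = N_MOTORS * I_MOTOR_FULL_A  # 60 A worst-case pack current
i_per_string = i_total / N_STRINGS   # ~15 A drawn from each series pair
```

Splitting the 60 A worst-case load across four parallel strings keeps each string's current within typical lead-acid discharge ratings.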
Control system design
A PLC is a digital processor used to automate electromechanical processes. Unlike a general-purpose controller [6], a PLC is designed for multiple input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. In this project the PLC acts simultaneously as input handler and processor: the robot can be started or stopped from buttons on the control board, and the motor rotation direction can likewise be selected from the board. Six buttons provide the functionality of the whole system: start/stop, reset, forward, backward, right, and left. All are latching push buttons except the reset button. Each motion instruction can therefore only be executed individually, meaning that the PLC will not execute the other three direction movements while, for example, the forward button is pressed. The advantage of this arrangement is that little programming is required and frequent forward/reverse switching is avoided, extending the service life of the motors. On the other hand, in some situations system efficiency is reduced, because angled motion must be decomposed into successive axis movements. When the reset button is pressed, the PLC runs its initialization program and all tags and data stored in the controller's memory are cleared.
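The button logic described above, latched direction commands that exclude one another until a reset, can be sketched in software (a plain-Python illustration, not the actual ladder program; the class and method names are ours):

```python
class DirectionInterlock:
    """Mutually exclusive latched direction commands with a reset, as
    described for the six-button PLC control board."""

    DIRECTIONS = ("forward", "backward", "left", "right")

    def __init__(self):
        self.running = False
        self.active = None  # currently latched direction, if any

    def start_stop(self):
        """Toggle the start/stop latch; stopping drops any direction latch."""
        self.running = not self.running
        if not self.running:
            self.active = None

    def press(self, direction):
        """Latch a direction only if none is active, avoiding frequent
        forward/reverse switching of the motors."""
        if direction not in self.DIRECTIONS:
            raise ValueError(direction)
        if self.running and self.active is None:
            self.active = direction
        return self.active

    def reset(self):
        """Mirror the PLC initialization: clear every latch and flag."""
        self.running = False
        self.active = None
```

With this interlock, pressing a second direction button while one is latched has no effect, which matches the one-command-at-a-time behavior described in the text.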
For safety while the motors are operating, a safe-mode test function is implemented. It allows the motors to run for only 5 seconds before stopping, which is easily achieved with a timer in the PLC program. After 5 seconds the control box outputs a 0% PWM signal, which stops the motors. The only way to restart movement is to press the reset button, which initializes the program and clears the memory flags on the PLC chip.
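The run limiter behaves like a simple watchdog: once the 5-second budget is exhausted, the commanded duty cycle is forced to zero until a reset. A hedged sketch (names and tick-based structure are ours, not the PLC timer block):

```python
class SafeModeTimer:
    """5-second test-run limit: after LIMIT seconds of running, the PWM
    command is forced to 0% until a reset re-initializes the state."""

    LIMIT = 5.0  # seconds

    def __init__(self):
        self.elapsed = 0.0
        self.tripped = False

    def tick(self, dt):
        """Advance the timer by dt seconds of motor run time."""
        self.elapsed += dt
        if self.elapsed >= self.LIMIT:
            self.tripped = True

    def pwm_output(self, requested_duty):
        """Pass the requested duty through, or 0.0 once tripped."""
        return 0.0 if self.tripped else requested_duty

    def reset(self):
        """Equivalent of the reset button: clear elapsed time and the trip flag."""
        self.elapsed = 0.0
        self.tripped = False
```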
Pulse-width modulation encodes an analog level digitally: using a high-resolution counter, the duty cycle of a square wave is modulated to represent a specific analog signal level. The signal remains digital because at any given moment the full DC supply voltage is either fully on or fully off. A voltage or current source is thus applied to the load as a repetitive pulse sequence that is switched on and off. The effect is very similar to a conventional digital-to-analog converter (DAC), but PWM is better suited here given the cost and the motor type used in this paper. The control sequence for a forward-driving example is shown in Figure 3. Four VNH3SP30 motor driver chips drive the four motors independently, labeled 1 to 4. For convenience of testing and operation, a control box was made that integrates all the motor-driver components, with two optoisolators coupled per control card (see Figure 4). It has four input interfaces for PLC signals and one power switch; four output channels with serial cables connect directly to the motors.
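The counter-based encoding can be sketched directly: a free-running counter is compared with a duty threshold, producing an ON/OFF sequence whose time average equals the commanded analog level (illustrative Python, not the driver firmware):

```python
def pwm_wave(duty, period_counts=100, n_counts=200):
    """ON/OFF samples from a free-running counter and a comparator threshold.

    duty is the commanded duty cycle in [0, 1]; the counter wraps every
    period_counts ticks, and the output is 1 while the count is below the
    threshold, 0 otherwise.
    """
    threshold = int(duty * period_counts)
    return [1 if (i % period_counts) < threshold else 0
            for i in range(n_counts)]

wave = pwm_wave(0.25)
mean_level = sum(wave) / len(wave)  # averages to the duty cycle, 0.25
```

The mean of the binary waveform equals the duty cycle, which is why the motor, acting as a low-pass filter, sees an effective voltage proportional to the duty.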
Fuzzy control
Fuzzy control has been used to model the behavior of an unknown system described by a set of numerical data. Wong et al. designed a fuzzy controller for an omni-directional mobile robot, using a genetic-algorithm method to generate the fuzzy sets automatically [7]. Their prototype consists of three omni-wheels arranged as an equilateral triangle whose rotation axes intersect at the center O of the chassis. With a coordinate frame fixed on the world plane, the inverse Jacobian matrix of the mobile system can be derived. Based on the kinematic model and the relationship between the three inputs and three outputs, a rule base is established and the triangular-shaped membership functions determined accordingly, as shown in Figure 5. Seven subsets cover the regions negative big (NB), negative medium (NM), negative small (NS), zero (ZE), positive small (PS), positive medium (PM), and positive big (PB). Although the ranges could be subdivided further, this would lead to a huge rule library, with the amount of data growing geometrically.
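The seven triangular sets can be written down directly; the centers and overlap below are illustrative values over a normalized input range, not the parameters used in [7]:

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 outside (a, c), rising to 1 at x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

LABELS = ["NB", "NM", "NS", "ZE", "PS", "PM", "PB"]
CENTERS = [-1.0, -2/3, -1/3, 0.0, 1/3, 2/3, 1.0]
WIDTH = 1/3  # neighboring sets overlap so memberships of adjacent sets sum to 1

def fuzzify(x):
    """Membership degree of x in each of the seven sets."""
    return {lab: triangular(x, c - WIDTH, c, c + WIDTH)
            for lab, c in zip(LABELS, CENTERS)}
```

With evenly spaced centers one spacing apart, any input activates at most two adjacent sets, which keeps the rule evaluation cheap; finer subdivision multiplies the rule base, as noted above.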
It should be noted that PWM itself can be viewed as a simple form of fuzzy control. However, the PWM output is a square wave taking only the values 0 or 1, unlike the membership functions of Figure 5. If the motor speed needs to change, the input, feedback, and output voltages must be sampled and the duty cycle computed in real time.
Field Programmable Gate Array
Although the membership functions appear sophisticated, with a number of unknown parameters, they describe the system in detail for any single motion. The values of these unknown parameters can be derived from an appropriate fitness function using a genetic algorithm (GA) implemented on a Field Programmable Gate Array (FPGA) chip. The GA is a search heuristic in which solutions better suited to the evaluation have a greater chance of surviving and producing offspring. It can extract numerical data directly through function approximation and terminates either when a maximum number of generations has been produced or when a satisfactory fitness level has been reached. In fact, the combination of GA methods and fuzzy logic control has made real progress: it greatly enhances the system's ability to search for an optimal movement solution in both static and dynamic environments [8].
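The survive-and-reproduce loop can be shown with a bare-bones GA; the operators, population size, and toy fitness below are our illustrative choices, not those of [8]:

```python
import random

def genetic_search(fitness, n_params, pop=30, gens=100, bounds=(-1.0, 1.0), seed=0):
    """Minimal elitist GA: keep the better half, breed the rest by blend
    crossover plus Gaussian mutation. A toy stand-in for tuning
    membership-function parameters against a fitness function."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [[rng.uniform(lo, hi) for _ in range(n_params)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        elite = scored[: pop // 2]          # survivors of this generation
        children = []
        while len(children) < pop - len(elite):
            p1, p2 = rng.sample(elite, 2)   # two elite parents
            child = [(a + b) / 2 + rng.gauss(0.0, 0.05) for a, b in zip(p1, p2)]
            children.append([min(hi, max(lo, g)) for g in child])
        population = elite + children
    return max(population, key=fitness)

# Toy fitness whose optimum is all parameters equal to 0.5.
best = genetic_search(lambda p: -sum((g - 0.5) ** 2 for g in p),
                      n_params=3, bounds=(0.0, 1.0))
```

Because the elite are carried over unchanged, the best fitness never decreases, matching the termination criteria mentioned above (generation budget or satisfactory fitness).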
Hashemi et al. introduced a novel PI-fuzzy path planner for a linear discrete dynamic model of an omni-directional mobile robot, which satisfies planning prerequisites and prevents slippage through velocity and acceleration filters [9]. Compared with the normal PID control method, their discrete-time linear quadratic tracking controller provides an optimal solution that minimizes the difference between the reference trajectory and the system output.
In fact, FPGA technology can also produce a PWM rectangular pulse wave with a counter and a comparator. In one process, a register stores the PWM period parameter; the counter increments on the system clock and resets to 0 when the period is reached, completing one PWM cycle. In another process, the pulse width is controlled by comparing the counter register with a pulse-width parameter. This approach realizes the PWM signal well, although its channels are not fully independent when multiple signals are required at the same time.
Trajectory Linearization Control
Liu et al. improved a trajectory linearization control (TLC) method to meet the requirements of a nonlinear robot dynamic model [10]. By combining nonlinear dynamic inversion with a linear time-varying regulator, it achieves robust stability and performance along the trajectory without interpolating controller gains. The TLC controller structure is shown in Figure 6. It employs a two-loop architecture instead of dynamic inversion of the robot kinematics, based on the time-scale separation principle and singular perturbation theory. Following the body-rate command from the outer loop, in which the robot position is adjusted toward the desired trajectory, the inner loop then outputs the voltage applied to each motor. In addition, combining onboard sensor data with vision-system data, the model provides accurate and reliable position and orientation measurements. A nonlinear Kalman filter is employed to linearize the nonlinear robot kinematics about the nominal trajectory derived from the outer-loop controller's pseudo-inverse [11]. Moreover, if the PWM method is applied within this control scheme, abnormal values can be eliminated effectively and the measuring range greatly reduced.
Nonlinear Programming
Time-optimal control takes moving between two positions in minimum time as the priority. The solution normally requires the complicated mathematics of Pontryagin's Minimum Principle (PMP) because of the problem's highly nonlinear character. However, Fu, Ko & Wu designed a nonlinear programming (NLP) method that fixes the number of control steps and treats the sampling period as a variable to simplify the optimization, based on a three-wheeled omni-directional mobile robot model [12]. The NLP method requires an initial feasible solution to start from; in that project, different initial feasible solutions were generated by a GA. Owing to the same wheel configuration as in Wong et al., the dynamic equations of the robot system are easy to derive. As mentioned above, however, their solution is hard to find because of the nonlinear, coupled nature of the equations. Fortunately, Wang et al. presented results in the form of a nonlinear control algorithm supported by experimental data [13]. Through a nonlinear transformation of the initial dynamic model, the synthesized control system began to take shape.
To obtain a numerical solution, it is necessary to assume equal time intervals and fixed acceleration in the discrete domain. This assumption holds only in special circumstances and does not suit actual operation of the system. During the solution process, the trade-off between the number of control steps and the computational load must be considered, as must the tension between the sampling interval and the discretization precision. Unlike the usual configuration with separate global and local objectives [14], which needs two Lyapunov functions to derive the control law under a hybrid feedback control strategy, the nonlinear programming method does not require feedback error in the control loop: it automatically eliminates steady-state and transient error within one cycle, and the error in one cycle does not carry forward to the next, although the hardware circuit of the system is considerably more complex. Considering its performance, this method offers fast response, distortion reduction, and interference suppression, and it will be the first option for system optimization in the future.
Multiple organ involvement in severe fever with thrombocytopenia syndrome: an immunohistochemical finding in a fatal case
Abstract. Background: Severe fever with thrombocytopenia syndrome (SFTS) is an emerging infectious disease caused by the SFTS bunyavirus (SFTSV), a tick-borne bunyavirus. However, the immunohistochemistry of SFTS patients has not been well studied. Methods: We obtained multiple tissues from a fatal case of SFTS, including blood, lungs, kidneys, heart, and spleen. The blood samples were used to isolate the causative agent, detect viral RNA, and express recombinant viral protein for use as the primary antibody. Immunohistochemistry of the heart, lungs, spleen, and kidneys was used to characterize the viral antigen in tissue sections. Results: A 79-year-old man, together with his wife, was admitted because of fever. Both patients were diagnosed with SFTS on the basis of positive SFTSV RNA in the blood. The man died of multiple organ failure 8 days after hospitalization; his wife recovered and was discharged. Immunohistochemistry indicated that SFTSV antigens were present in all studied organs, including the heart, kidney, lung, and spleen, of which the spleen presented the highest amount of SFTSV antigens; the kidney was next, while the heart and lungs showed lower amounts. Conclusions: SFTSV can directly infect multiple organs, resulting in multiple organ failure and ultimately an unfavorable outcome. Electronic supplementary material The online version of this article (10.1186/s12985-018-1006-7) contains supplementary material, which is available to authorized users.
Background
Severe fever with thrombocytopenia syndrome (SFTS) is an emerging infectious disease that was first reported in China in 2011 and subsequently in South Korea and Japan [1][2][3]. SFTS bunyavirus (SFTSV), the causative agent of SFTS [3,4], is mainly transmitted through tick bites, but occasionally from person to person through blood [5,6], and rarely through aerosol [7]. SFTSV infects humans of all ages, but predominantly those above 50 years old, with mortality occurring mainly in those above 60 years, suggesting that the severity of SFTS is correlated with compromised immunity [8]. The mortality rate ranged from 7.9 to 50% in previous studies [1,3,8], and the incidence of the disease has been increasing annually [9].
The genome of SFTSV consists of three single-stranded negative sense RNA segments: S, M and L [3]. The M and L segments encode viral envelope glycoproteins and viral RNA polymerase, respectively. The S segment is an ambisense RNA that encodes a nonstructural protein (NSs) and a nucleoprotein (NP). The NSs protein of SFTSV has been reported to play pivotal roles in SFTSV replication and host responses [10].
The clinical manifestations of SFTS range from an acute febrile illness to multiple organ failure, encompassing fever, thrombocytopenia, leukopenia, gastrointestinal symptoms, and liver injury [3]. However, SFTSV viral protein expression in human tissues has rarely been studied. In light of these uncertainties, in this study we analyzed viral NP expression in various autopsy tissues from a lethal SFTS case by immunohistochemistry and indirect immunofluorescence assays, while closely monitoring the SFTSV viral load.
Ethics statement
The ethics committee of Zhoushan Hospital approved all human studies, which were conducted according to the medical research regulations of China. Written informed consent was obtained from the patients and their family members. The animal experiments in this study were approved by the bioethics committee of the School of Public Health, Shandong University. All experiments were performed in accordance with the relevant guidelines and regulations of China.
SFTSV RNA load determination using quantitative real-time PCR (qPCR)
Patients' blood samples in heparin anticoagulant were collected on day 7 of hospitalization and RNA was extracted with RNeasy Purification Kits (QIAGEN, Germany). SFTSV RNA in the patients' blood was detected by qPCR. The PCR primers and probe were designed from a conserved region of the S segment of SFTSV: forward primer P3, AGT TCA CAG CAG CAT GGA GAG GAT; reverse primer P4, ACT CTC TGT GGC AAG ATG CCT TCA; and probe, FAM-TTG CTG GCT CCG CGC ATC TTC ACA TT-TAMRA. The qPCR was performed for one cycle at 95°C for 15 s, followed by 45 cycles at 95°C for 5 s and 60°C for 31 s.
Genetic analysis
After extracting the viral RNA from the fatal case, the virus was sequenced with the use of the sequence-independent single-primer amplification (SISPA) method. Phylogenetic analyses were performed with the maximum likelihood method with the use of MEGA7 software.
Viral detection using immunohistochemistry and indirect immunofluorescence assays
Expression and purification of recombinant NP and production of the primary antibody
The NP gene of SFTSV was amplified by reverse transcription PCR (RT-PCR) with the Access RT-PCR System Kit (Promega, Madison, WI). The sequences of the forward and reverse primers were GAG GTA CCA TGT CAG AGT GGT CC and AAT CTC GAG TTA CAG GTT CCT GTA AG, respectively. The amplification conditions were as follows: one cycle at 45°C for 45 min and 94°C for 2 min; 40 cycles at 94°C for 30 s, 60°C for 1 min, and 68°C for 2 min; and one cycle at 68°C for 7 min. The PCR product was cloned into pMD19 (Simple) T-Vectors (Clontech Laboratories, CA). The cloned insert was excised from the recombinant vector by double enzyme digestion and sub-cloned into the pET-32a vector to express the NP. The recombinant NP was purified with the pET Express & Purify Kit-His60 (Clontech Laboratories, CA).
The purified recombinant NP was mixed with an equal volume of complete or incomplete Freund's adjuvant (Sigma-Aldrich, USA) and injected subcutaneously into six 6- to 8-week-old Kunming mice (Animal Experiment Center of Shandong University, Jinan City, China) to raise polyclonal antibody. Each mouse was injected with 100 μg of recombinant protein at multiple sites, at one-week intervals, four times in total. Mice were sacrificed 15 days after the last immunization to obtain sera, which were frozen at −80°C until use as the primary antibody.
Sample preparation
Autopsy tissues were obtained by puncture from the heart, lungs, spleen, liver, and kidneys of the fatal case at postmortem examination within 30 min of death. Tissue slides were initially stained with hematoxylin and eosin (H&E) for morphological observation. Slides of the heart, lungs, spleen, and kidneys were then selected for immunohistochemistry with mouse antibody against recombinant NP of SFTSV. Paraffin-embedded tissue sections were deparaffinized as described elsewhere [11]. Briefly, the sections were placed in a 60°C incubator for 30 min, dewaxed with xylene and gradient ethanol, and washed with phosphate buffer (PBS) and distilled water.
Heat-mediated antigen retrieval was performed in a container of citrate buffer heated in a microwave to keep the liquid at about 98°C for 10 to 15 min. The sections were then cooled at room temperature for 30 min, washed with PBS, and blotted dry. Hydrogen peroxide (3%) was added to the sections, which were incubated in a 37°C water bath for 15 min to block endogenous peroxidase activity. After washing with PBS, the sections were blotted dry.
Sample labeling
One drop (30-50 μl) of the diluted mouse antibody against SFTSV recombinant NP, prepared as described above, was added. Negative controls received normal mouse sera without the primary antibody. The sections were incubated at 37°C for 1 h and the slides were rinsed with PBS. For the immunohistochemistry assays, the sections were incubated with a rabbit anti-mouse horseradish peroxidase-labeled secondary antibody (1:1000) (ZSGB-Bio, Beijing, China) at 37°C for 1 h, stained with DAB coloring liquid (ZSGB-Bio, Beijing, China), and examined under a light microscope by two pathologists. For the indirect immunofluorescence assays, the sections were incubated with a fluorophore-labeled secondary antibody (Alexa Fluor 488-conjugated AffiniPure goat anti-rabbit IgG, Proteintech, USA; 1:100 diluted) in an incubator at 37°C for 1 h. After the immunoreaction, nuclei were stained with DAPI (C0065, Solarbio) for 10 min at room temperature. The slides were washed again and examined under a fluorescence microscope (Olympus BX3).
Case presentations
A couple was admitted to a local hospital on Zhoushan Island, Zhejiang Province, China, because of fever with nausea and vomiting. Neither patient had a significant past medical history. Case one, the husband, was a 79-year-old male presenting with fever (38.5°C), fatigue, and diarrhea for 6 days, complicated by one episode of bright red hematemesis. Blood tests on admission showed neutropenia and thrombocytopenia with normal hemoglobin (Table 1). He was given hemostatic drugs, ceftizoxime, and ribavirin after admission. Case two, the wife, was 66 years old. On admission she reported fever (38.2°C), fatigue, nausea, and anorexia; blood tests showed slightly low white blood cells (WBC), thrombocytopenia, and slight anemia (Table 1). She was given lansoprazole, levofloxacin, rehydration, and symptomatic treatment. Further questioning revealed that both patients had had tick bites on the face and legs about 1 week before the onset of illness. SFTS was therefore highly suspected for this couple, the SFTSV viral load was determined, and samples from both patients tested positive for SFTSV.
After admission, case one experienced a continuous progressive increase in aminotransferases, lactate dehydrogenase (LDH), creatine phosphokinase (CK), and serum SFTSV viral load (Fig. 1), together with decreasing platelets and serum albumin and a prolonged activated partial thromboplastin time (APTT) (Table 1). On day 5 after hospitalization, the patient became delirious and unconscious, with oral mucosal bleeding, crackles in the lungs, right lower extremity ecchymosis, and respiratory failure. The APACHE II (Acute Physiology and Chronic Health Evaluation II) score was seven points and the SOFA (sequential organ failure assessment) score was four points. The patient was tracheally intubated and placed on mechanical ventilation, and received plasma exchange, blood filtration, red blood cell transfusion, antiviral and antibacterial drugs, albumin, and fibrinogen. His symptoms still did not resolve. On day 7 after hospitalization, he developed coma with a sluggish pupillary light reflex and unstable vital signs. Two days later, the patient died.
Case two was conscious but listless, without bleeding, skin rash, jaundice, or lymphadenopathy, and had scattered rales in the lungs. Laboratory tests showed mildly increased aminotransferases, LDH, CK, and viral load (Fig. 1), together with thrombocytopenia and leukocytopenia; serum potassium and sodium were slightly decreased (Table 1). Bone marrow biopsy showed a hemophagocytic phenomenon. The patient was treated actively, her condition gradually returned to normal, and she recovered and was discharged on the sixteenth day after admission.
SFTSV viral load of the patients
SFTSV viral load was closely followed for 7 days in both patients. On the second day after hospitalization, case one was serum positive for SFTSV RNA by qPCR; on day 3, case two also became serum positive. Case one had a much higher viral load and a longer period of SFTSV viremia than case two, and his viral load increased continuously as the disease progressed until death (Fig. 1).
Laboratory results of the patients
Laboratory examination showed that PLT, WBC, and hemoglobin decreased in both patients. The levels of aspartate transaminase (AST), LDH, and CK were dramatically elevated in the fatal patient (case one), whereas the mild patient (case two) showed an unremarkable change in LDH and no change in AST or CK (Fig. 1). The D-dimer level was significantly high in the fatal case during the entire course of illness, but only slightly increased in the mild patient for 2 days, and prolonged APTT was present only in the fatal case. These results suggest multiple organ failure and the presence of DIC in the fatal case.
Microscopic morphological findings
H&E-stained sections showed congestion and focal hemorrhage in the spleen, where ischemic lesions were also observed (Fig. 2a). The kidney was microscopically eroded, with dilated tubules in which swollen renal tubular epithelial cells were seen (Fig. 2b). The alveolar spaces were flooded with edema fluid and the interstitial fibrous tissue had proliferated (Fig. 2c). A small amount of capillary dilation could be observed in several organs, including the kidneys and lungs. Myocardial cells showed structural disorder with vacuolar degeneration and dispersed lipofuscin (Fig. 2d). Liver histological changes were also found, such as expansion of the portal area, congestion in the hepatic sinusoids, and acidophilic degeneration (Fig. 2e).
Immunohistochemistry assays and indirect immunofluorescence assay results
Immunohistochemistry showed positive staining for SFTSV NP in sections from all organs tested, including the spleen, kidneys, lungs, and heart (Fig. 3a, c, e, g), whereas the control specimens stained negative (Fig. 3b, d, f, h). The SFTSV antigens predominantly exhibited a cytoplasmic pattern and were most widespread and abundant in the spleen, especially in the white pulp. Sections from the kidneys also revealed viral antigens expressed in the glomeruli. Compared with the spleen and kidneys, the viral antigens in the heart and lungs were detectable but much less abundant. The immunofluorescence assays likewise showed that spleen, kidney, lung, and heart tissues were positive for SFTSV antigens (Fig. 4), with differences between organs comparable to those seen in the immunohistochemistry assays.
Molecular characterization
The whole genome of the fatal patient's SFTSV isolate was completely sequenced, yielding the full sequences of the three viral segments: an L segment of 6368 nucleotides, an M segment of 3378 nucleotides, and an S segment of 1744 nucleotides (see Additional file 1). A phylogenetic tree based on the complete L segment sequence showed that the isolate clustered well with other known SFTSV isolates (Fig. 5). The sequence data also indicated that case one was infected by genotype D SFTSV.
Discussion
We reported the clinical manifestations and laboratory tests of a fatal SFTS patient and a mild SFTS patient in a cluster, and examined viral immunohistochemistry and indirect immunofluorescence assays of multiple tissues from the deceased patient. Tick bites are rarely reported in SFTS patients, and the incubation time is usually unknown; these two patients had clear dates for both tick bite and onset of illness, consistent with an incubation time of around 5 to 8 days after contact. The fatal case suffered from severe coagulopathy with diffuse bleeding of the GI tract and skin, ending in delirium and coma. He had higher liver enzymes, LDH, and CK than the mild patient, who recovered from the disease, as well as coagulation dysfunction with a more prolonged activated partial thromboplastin time and higher D-dimer levels. A previous study indicated that elevated AST, LDH, CK, and CK-MB are risk factors associated with severity among SFTS patients and with fatality among severe SFTS patients [12]. High serum viral load has been considered a high-risk factor for death in SFTS patients [13]; our study further confirmed that serum viral load correlates with disease severity, so monitoring viral load may assist in evaluating the prognosis of the disease. Although the pathogenesis of SFTS remains elusive, the host immune system is indispensable to the pathogenesis of SFTSV infection, in addition to the high level of virus replication. Some studies suggest that the cytokine-mediated inflammatory response, characterized by an imbalance of cytokines and chemokines, plays an important role in the progression of SFTSV infection [12,14]. Previous reports have shown SFTSV antigens in lymph nodes, liver, spleen, bone marrow, and adrenals, but not in the heart, lungs, kidneys, gastrointestinal tract, aorta, or iliopsoas muscle [1,15,16].
Although we failed to perform the IHC assay on liver tissue, because too much liver tissue was lost during trimming and the sections came off the slides, our initial immunohistochemistry and immunofluorescence findings still broaden the known extent of SFTS: SFTSV infection is not limited to the spleen but extensively involves the kidneys, lungs, and heart. Our study is consistent with a mouse model of SFTSV infection, which indicated that SFTSV primarily infects the spleen and lymph nodes [17]. Together, these studies suggest that SFTSV can infect most organs of a patient, with the heaviest infection in the spleen.
Conclusions
SFTSV was found in multiple organs, with the highest antigen load in the spleen, a moderate load in the kidneys, and the least in the lungs and heart. In addition, higher viral load and higher liver enzymes, LDH, and CK in SFTS patients indicated disease severity and even a fatal outcome, and the incubation time of SFTS was about 5 to 8 days after the tick bite.
Benefits of Community Voice: A Framework for Understanding Inclusion of Community Voice in HCI4D
Community voice is widely used in computer-supported cooperative work (CSCW) and human-computer interaction (HCI) work with underserved communities. However, the term is unresolved, denoting disparate activities, methods, and phenomena that are at their most useful when combined. We argue for a rethink by setting out a more nuanced understanding of "community" and "voice". Building on our own experiences of HCI for development (HCI4D) work and the existing literature, we propose a framework for the benefits it can bring to those who actively engage with communities as part of their work. This framework can be understood in terms of its four constituent benefits for CSCW and HCI4D practitioners: (i) understanding context, (ii) creating empathy, (iii) leveraging local skills and knowledge, and (iv) building trust and buy-in. We reflect on how an improved understanding of these benefits applies to three prior projects with women living in Bangladesh and discuss the issues and need for more work on community voice. Finally, we discuss how this more detailed perspective on community voice helps us understand power dynamics and polyvocal communities in development contexts.
INTRODUCTION
In computer-supported cooperative work (CSCW) and HCI for development (HCI4D) projects, "community voice" is an oft-used but ill-defined concept. While the term evokes beneficial relationships, these are rarely detailed. In the field of development, the rhetoric of organizations such as the World Bank suggests that community voice is an accountability approach that enhances citizen-government relationships and improves service delivery [8]. However [18], criticisms include that the approach acts as propaganda or serves preconceived agendas [85], that communities are rarely engaged in decision-making [22,86], that bottom-up, participatory approaches are fleeting and unsustainable [18], and that top-down approaches to participation only support communities sharing what decision makers want to hear [127].
In HCI the concept of community voice is widely used in ethnography [58,123,156], participatory design [141], disability research [72], and international development [18,55,131]. Researchers and practitioners enable community voice to elicit articulations of interests, values, and constraints, and see themselves as facilitators of knowledge production and exchange [62,131]. Yet, whatever the good intentions, the fluidity of the definition of community voice is convenient for researchers who adopt the standpoint that it is an inherent good [69], as it allows almost any activity to be pointed to as an example of success [82]. In reality, community voice has been used to describe virtually any collective interaction with a target population [36,69], and this imprecision in its use not only raises questions as to its benefits but also makes developing effective ways to design for it problematic.
Research highlights the need for a framework to understand the disparate failures of development initiatives to produce effective development outcomes [1,26,45,68,76,109,110,149,150,155]. Our previous research showed us that although community voice is turned to in search of a gestalt effect to address these issues, it falls short. In three prior development projects, we encountered substantial challenges in understanding and realizing the benefits of local stakeholder communities' voices. Despite working with large organizations with commitments to hearing community voices, problems building trust with local communities, recognizing important contextual factors, understanding their perspectives, and taking advantage of local skills and knowledge led to serious mistakes in project delivery. Our experiences have shown that sometimes a community voice can dramatically alter the course of a project, but it is not reliably heard.
Our intention is to reconsider community voice, starting with the conceptualization that underpins it, to look at the nature of organizational commitment to it and the techniques employed to realize it, in the hope that we can address these issues. Therefore, in this work, we aim to establish a better understanding of the benefits that community voice can bring so that HCI4D researchers can see where they are under-delivering and explore methodological changes. We develop an understanding of the nature of community voice, particularly that it is polyvocal and lacks consensus, to allow researchers to understand the community they work with, its authenticity, and conflicts within it.
This paper contributes to the discussion of community voice by delineating its meanings in CSCW and HCI for development: voice as medium, voice as input, and voice as dialogue. We show that these perspectives shape our conceptualization of a "community" and present a framework of four distinct benefits: (i) understanding the context of a community, (ii) creating empathy with stakeholders and a space to talk to them, (iii) leveraging the skills and knowledge of the community, and (iv) building trust and buy-in. We reflect on how these benefits were or were not realized in the three projects that motivated the creation of this framework. Finally, we consider how this new perspective on community voice helps us understand power dynamics and polyvocal communities in development contexts.
WHO IS THE COMMUNITY?
We engage critically with the problematic uses of the term community, acknowledging its descriptive weaknesses and the assumptions it masks. "Community" has been celebrated within participatory development frameworks [148], building upon Western assumptions about community dynamics, mutual regard, and continuity [101]. It conjures images of grateful subjects at the end of the development pipeline of charitable projects. However, the concept has rightly been criticized in participatory research for its homogenizing tendencies and blurring of distinctions between stakeholders within groups of people whose significant commonality is simply spatial proximity [50].
External actors use the concept to further their own agendas. Nelson et al. have observed that community is a concept used by state and institutional powers more than by people themselves, and it carries connotations of consensus [94]. Mohan et al. highlight the dangers thus: "actions based on consensus may actually empower the powerful vested interests that manipulated the research in the first place" [87]. Gujit found that representations of "community" interests muffle dissent and inequities [50]. Civil society consists of institutions built from a community, and they use the community's loaded, problematic ideals of harmony and collaboration to erase conflict [77]. At the heart of the concept of community and how it interacts with civil society, there is a paradox: 'community' is presumed to exist already, yet it requires governments and other developmental agents to ensure that it is improved [77]. Civil society actors affirm the notion of community and help to improve it, in contrast to the nature of actual human communities.
In HCI4D, participatory development approaches with communities are criticized on ontological and epistemological grounds for treating the "locals" as harmonious groups in which consensus is readily available [87]. In reality, dissensus is as critical an element of healthy community debate as consensus [13,16,153], so we must seek out debate and methodologically find ways to respond to diverse, even contradictory opinions. In practice, disagreements about the distribution of development support can have real-world consequences: communities have retaliated against members because of disagreements [13]. The issue is not resolved simply by hearing and mediating between voices in a town hall. If participation is genuine, it probably brings conflict into the picture, and the absence of conflict in participatory programs is suspicious [27,147].
The terminology for people who live in marginalized contexts and receive assistance from development organizations is debated. Historically, development practitioners used the term beneficiaries. However, "to be a beneficiary implies a relational weakness to the benefactor. It also implies that what she receives is beneficial or good" [59]. Terms such as citizen, constituent, consumer, client, stakeholder, and partner are used by various development organizations but do not always capture the authentic relationship between development organizations and the individual. Some words appear too transactional (client, consumer); others too imprecise (constituent, stakeholder, partner); yet others appear exclusionary or constrained (citizens) [59,142]. No single phrase can capture the complexities of the relationship that local populations in development contexts have with development organizations.
Despite its limitations, we draw on the term "community" to refer to populations who live in development contexts and are engaged directly or indirectly with development organizations. We use the term "community stakeholders" to draw particular attention to community members who are actively taking part in development activities, and "local community" to draw attention to those who mainly share Gujit's "spatial proximity" [50] and, in reality, may have their own needs or agendas that undermine the community stakeholders. We adopt a critical perspective and, when discussing our own examples, try to give the reader a sense of the authenticity of the stakeholder community. Do members of the community have rich, routine interactions with one another? Is the community a label of convenience applied by the development organization to delineate a target population? We argue that this distinction is needed to reflect the complex relationship between local populations and development organizations.
"VOICE": A REVIEW
Over the past decade, HCI4D and related fields have debated how to embed, or hear, a voice in research with disadvantaged groups [141,150]. The language used suggests that "community voice" should be clear, singular, directly observed, and interrogable, but this idealization hides considerations of authenticity, accuracy, and accountability and leans into problematic assumptions about the community [50,87]. Prior work sought to empower marginalized community "voice" [5,14,70,73,152] using a wide range of methods. In particular, participatory or co-design approaches that provide excellent techniques to empower some stakeholder voices are widely used and sometimes uncritically assumed to be hearing community voice [60]. However, these paradigms are profoundly entrenched in a European or North American workplace epistemology that embeds values, frameworks, and models that we know are not suitable for use in developing communities [60,151,152].
Researchers have repeatedly highlighted the relationship between voice and empowerment for disadvantaged communities [42,51,115,116]. However, the narrative of empowerment itself can be problematic, as it often obfuscates questions of who, within a local community, is being empowered. More critical researchers reflecting on their own work have highlighted how "empowerment" has masked economic, generational, and gender conflicts [28,48] alongside political [48,109] and religious ones [109], where empowering one group is seen to, or actually can, disempower another. Digital divides can also determine who, in reality, is empowered by work [48,109,110], and more critical reflections lead to asking whether the voice of the participants or the voice of the facilitator is being heard [109,114]. The issue is fraught, as small changes to a design method can have a significant impact on participation, changing how much participants perceive a technology as being "for them" or excluding less literate participants [110]. Matristic design has proposed that these issues be addressed through understanding, participation, collaboration, sacredness, and cyclic renovation of life [48]. Other work echoes the need for ongoing engagement to actually empower participants [109,114]. This combination of issues and solutions highlights the need to integrate local and cross-cultural design approaches such as storytelling, inclusive decision-making, and participatory community meetings [105,106,154]. Bidwell and Winschier, who frequently work with rural African and Indigenous communities, provide concrete case studies showing how classic participatory design and HCI4D work diverge, stressing the need for new methods to understand and integrate community voice [153].
Broadly speaking, whether work defines itself as HCI4D, ICT4D, participatory design, or co-design with disadvantaged stakeholders, we observe three distinct perspectives taken on voice: (1) the medium of voice - the avenue we use to communicate (e.g. speech) - used to engage with stakeholders in design work or by the technology in ICT deployments [161]; (2) functional considerations around giving communities of stakeholders input into processes that affect them, including the methods used to talk with them and analyze their contributions to derive direction from them [31]; or (3) dialogue opening up or democratizing project delivery that explicitly places decisions in the hands of the community and attempts to build consensus on project direction through discussions [3].
Each of the different uses of the term highlights different beneficial attitudes, techniques, and positions, but although there are projects that use "voice" to refer to multiple facets of community-facing work [79,133,139,159], we observe a tendency for many projects to focus on one of the three uses of the term, meaning they can lose sight of the other benefits. We envisage projects engaging in ongoing community voice throughout their life cycle. In the following sections, we examine each concept of voice in turn and discuss the benefits of the attitudes they embed. We consider how the three different areas relate to other work on voice and the ways in which they are mutually reinforcing, showing why all three should be deployed in combination.
Voice as a medium
Development projects that conceptualize voice as a medium recognize that, for community stakeholders on the ground, speech is typically their primary form of communication due to variable levels of literacy in their local community [161]. The practitioner who considers voice as a medium sees it as a means of verbal and non-verbal communication, encompassing everything from body language cues through to a show-and-tell tour of a village in contextual inquiry. The use of the medium is motivated by recognition of the importance of oral tradition and the role that speech and storytelling play in many developing communities [105,106,118,120,154]. Emphasizing voice as a medium prioritizes making engagement with community stakeholders as free of barriers as possible. Work conducted in this way recognizes the importance of accessibility in project design and delivery and that literacy requirements are a barrier to useful engagement [14,161,162]. Written communication has its own distinct benefits, such as allowing for the creation of detailed artefacts that enshrine knowledge and decisions [52], that are traded away to remove barriers. Projects that are solely configured to use voice as a medium must contend with the removal of powerful tools to record knowledge, driving the adaptation of methodological tools that assume literacy for less literate populations [21] and socio-technical tools that do the same for illiterate communities [110].
Prior work in HCI has emphasized the importance of taking a broad perspective on the medium of voice. For example, when working with people with autism, Wilson et al. found that the medium of voice included words, sounds, bodily movement, touch, gesture, play, and creativity [150]. When working with older adults, Leong et al. went further to frame the medium of voice as something that embeds the speakers' values within it [76]. Beyond methodological considerations, HCI4D projects examine speech-based technologies and the benefits of audio as input to engage marginalized participants [6,107], designing community development spaces around participant speech input [128,137] or technologies such as interactive voice response (IVR) platforms [97,104,107]. Traditional spaces have been reconfigured in projects such as CGNet, which supports citizen journalism [79], and Root.io, which creates community radio [30] for audio-driven engagement.
Voice as an input
Other development works discuss the role that voice plays as an input into research or project organization [11,31,67]. These projects are discussed as a form of consultation with community stakeholders. This is important because their input is not typically sought, and when it is, the methods soliciting it are not well enacted [31]. Following this approach recognizes that input from stakeholders is key to successful project delivery. Input can be realized through a diverse range of activities with stakeholders, including focus groups, workshops, town hall meetings, or voting on project agendas. These projects embody the idea that communities of stakeholders have some right to be heard in work that impacts their lives. However, when input neglects voice as a medium, the richness of the data gathered can be undermined [100], and when there is no dialogue, valuable insights can be lost or appear in the process when they are too late to act on [9,18].
Input can be gathered by foregrounding qualitative data to emphasize stakeholders' narratives [71]. Government surveys are often used in formulating development programs, so local communities are initially involved in sharing their needs and challenges but are not consulted during program and project design [100]. In academic development work, Brown and Micklson conducted a pilot project testing the use of smartphones to track children's health and development in Rwanda [18]. They conducted surveys within the local community to determine their needs but did not involve community stakeholders or health workers during the design. Without dialogue, they were unaware of cultural norms in Rwanda; for example, discussing food was taboo. This illustrates the limitations of voice as input alone. Finally, some projects seek out community voices only after delivering their interventions, gathering rich voices but doing so too late to realize the benefits [9].
Voice as dialogue
Voice is also framed as a dialogue between practitioners and stakeholder communities, empowering them not only to influence but also to direct development work [3,89]. The practitioner in these approaches does not remove themselves from design and delivery but instead participates in a dialogue with community stakeholders, using reflexive tools to understand their own biases and insertions into the process [150]. The use of dialogue here is distinct from discussion, as dialogue implies a back and forth with an impact on the work [153]. The techniques used for dialogue are diverse, though superficially similar to those used in capturing voice as an input; however, the treatment of community voice is different, and so are the questions put to stakeholders [136].
Voice in these works is perceived as analogous to democratization or empowerment [39,134], adopting the perspective that these are essential rights for stakeholders. This is true even when the underlying reason for allowing stakeholders to drive an agenda is something other than democratization's inherent value [2,136]. However, these approaches can be problematic because they position the researcher as one with agency in the process, echoing challenges with the conceptualization of community [59]. When the researcher is absent after the completion of the project, this risks normalizing the agency being removed or revoked.
This challenge is addressed through the capabilities approach of development thinkers such as Amartya Sen and through frameworks such as participatory rural appraisal or assets-based inclusive design [3]. Feminist approaches also pursue this, for example, by creating spaces for and with marginalized participants [2,61,68,76,149,150]. These focus on dialogue with underserved communities. In the same way that voice as a medium is not just about hearing other people speak, voice as a dialogue is not literally about back-and-forth conversations [76]. Participatory design in developing contexts has also envisaged digital platforms for communication and information playing a critical role in enabling direct influence on political and social matters [39,136,141]. Sun presents "Culturally Localized User Experience (CLUE)", advocating a dialogic view of local culture to satisfy cultural expectations and produce a usable and meaningful technology for local users [125].
Participatory design has been used as a tool to support activism, where participant voice can be an impetus for change in local settings, such as hashtag activism in Bangladesh to advocate for victims of riverbank erosion [160]. Community arts and design projects also seek to emphasize the voice of a particular community through qualitative, highly participatory exercises, such as the rich tradition of maker space work in HCI [117,133]. Telhan et al. combined voice as both medium and dialogue as they sought to have their community voice research led by a team of community researchers [133]. Another example of this ambivalence, where audio (speech) submissions from a community and their agency to make contributions are both characterized as voice, is seen in [139]. Brown et al., while working with children with autism, maintained a commitment to participant voice, even when working with participants who were non-verbal or had difficulty communicating their feedback [19].
Takeaways
In "Why Voice Matters", Nick Couldry distinguishes between voice as a process and voice as a value [29]. This distinction separates people's capacity to create their own narratives (process) from a sociocultural orientation or worldview that privileges and discriminates in favor of such processes (value) [29]. Our voice as input and dialogue builds on this to suggest that the process can be changed by the value perspective of the question put to the stakeholder community. Our reflections on current practices in ICT4D suggest a need to separate out voice as a medium because of its salience, privileging it over other forms of communication for less literate populations.
We argue that the three types of voice are best when combined [79,159]. Voice as a medium respects less literate participants' contributions, as input that could be constrained to simple survey responses becomes nuanced when captured by talking with people. Voice as a medium democratizes dialogue between researchers and community members, making agenda-setting authentic and useful. Understanding voice as a dialogue can also make a project's conversations with stakeholders more authentic: the practitioners ask meaningful questions while seeking input, and because they place the community member on an equal footing with the practitioner, the discussions have a respectful framing. Voice as an input could be viewed as an inferior version of voice as dialogue. However, from a practitioner's perspective, voice as an input allows for broad engagement with local communities, as far more people can be surveyed than "work-shopped". This adds validity to the insights gained, as many practitioners are involved in projects that work with communities of tens of thousands, where there is no realistic way to share control with all of them. Voice as input allows them to have some influence on the process. Combining input and dialogue means taking broader community perspectives into account through literature or data gathering in the formation of community voice-driven projects.
Finally, we restate that our categorization can be problematic, as many HCI4D projects fit into multiple categories. Equally, however, existing uses of voice can share mutually exclusive ideals. In other words, there are tensions around how "voice" is understood, which should not surprise us considering the contextual applications in which community voice is understood and the divergent communities from which voice is solicited. However, ICT and HCI for development work have been widely criticized because their paradigms, such as participatory design and co-design, are profoundly entrenched in a "Western" epistemology shaped by European or North American workplaces through embedded values, frameworks, and models [60,152]. Winschier (2006) confirms that traditional PD was not suitable to apply with marginalized communities (such as rural African communities) [151] and highlights the need to integrate local and cross-cultural design approaches, such as storytelling, inclusive decision-making, and participatory community meetings [105,106,154]. Hence, Winschier and Bidwell, who broadly work with rural African and Indigenous communities, have repeatedly echoed the need for a redesigned and meaningful lens to understand and interact with community voice in participatory design [153].
A FRAMEWORK TO UNDERSTAND BENEFITS OF COMMUNITY VOICE
Having tried to understand community and voice separately, we now construct a framework to delineate the benefits that can be gained by hearing a community's voice. We developed the framework based on our experiences running numerous development projects and as designers grounded in experience-centered design [81]. Although many of the development organizations we collaborated with acknowledged the importance of community voice, the projects did not hear it consistently.
Where we did hear it, the voice felt diluted, incomplete, or at the periphery of the project. We propose that a better understanding of these benefits can help practitioners reflexively engage with a community, identifying where project needs have subsumed the community's, where voices are constrained, or where they are being discounted. The framework was derived by identifying a range of benefits in the literature and our own work and practice, workshopping them with the authors' research groups, and then collecting, organizing, and refining them to arrive at a complete, condensed set describing the four benefits of hearing community voice: understanding context, creating empathy, leveraging local skills and expertise, and building trust.
Prior work has tried to systematize community voice capture using a range of methods, from methodologically focused ones such as storytelling [114], dialogical probes [121], and respectful spaces [73] to socio-technical approaches incorporating touchscreens [109], photography [48], and mobile apps [110]. Participatory design has also proposed frameworks to hear voices within vulnerable groups in the Western world, including vulnerable young adults [68] and children with autism [76,149,150]. Despite work in this area, most explorations call for more work in this space to explore ethical considerations [45,111,155], postcolonial feminist solidarity [61], the power imbalances and political implications of development work [48,155], wider contextual understanding [45,155], and intervention sustainability [110].
Our framework differentiates itself by taking a broader perspective, focusing on the benefits of community voice that practitioners can look for in their own work. We offer a framework to critique methods, drawing on the qualitative research tradition of reflexivity [40]. A reflexive approach, by which we engage with the output of the project in process, facilitates methodological refinement [40] and better rigor in data collection [32], and helps address ethical conundrums as they occur [49,111]. We do not recommend specific methods because HCI4D work is diverse, but we try to help practitioners identify absent benefits in a process. This strikes us as more valuable because it is hard to realize that something you are unaware of should be there when it is entirely absent.
Understanding context
Understanding context is critical to successful HCI4D projects and design work in general [81,158]. Understanding context can be especially challenging for practitioners in HCI4D because contextual understandings arrived at in the lived environments of relatively privileged, educated, and wealthy organizations are not reflected in underserved communities [143,144], and we must be aware of what we insert into our work. The context of development projects is enormous [4,92]. Even when designing for tightly constrained, familiar settings, with access to numerous stakeholders, we are unlikely to find a single person who understands all the contextual factors. Instead, we rely on our own tacit understanding to fill the gaps. Community voice addresses the problem of scale and lack of individual understanding by allowing designers to take input from many individuals, allowing community stakeholders to focus the process and explain, in simple oral accounts, the most pertinent parts of their context. This understanding should be developed in dialogue with the community because it can correct practitioners and even guide their attention to ensure that they do not become focused on trivial factors [17].
The specific contextual factors considered in development work could be reduced to all the observable realities of day-to-day life in those settings, including culture, societal norms, natural environment, built environment, legal considerations, and economic factors, to name a limited subset. However, context goes beyond this. As Dopson and Firlie (2008) suggest, we should see context as an interactive process that changes continuously and occurs in the environment in which an organization sits and acts, rather than as a backdrop [34]. They explain context as a process at two levels: (i) the outer or external context, which refers to the social, economic, political, and competitive environment in which organizations and actors work, and (ii) the inner context, where history, culture, and religious issues shape interventions. When understood as a dialogue, community voice supports the exploration of this contextual model because it is an ongoing process as well. The outer context can be understood through input from the local community, and the inner context through voice as a medium's tight interweaving of narrative and personal storytelling. In contrast, Lau et al. model the pragmatic contextual "barriers and facilitators" of interventions [74], arranged within a four-layer circle framework covering factors in the external context, institutional factors, professional factors, and intervention-level factors [74]. Although more reductive, it appeals to a practitioner perspective on community voice, as the barriers and facilitators mesh with the reality of realizing change in a project, when more nuanced understandings of context might seem a luxury.
Understanding Context Takeaways
• Context is outer: the local community, understood with their input; and inner: stakeholder communities, understood through voice as medium and dialogue.
• Practitioners have to insert their Western assumptions into projects when trying to understand context; pretending they are neutral will harm a project's chances of success.
• Allowing community voice to guide the practitioner's and project's attention to contextual factors corrects the most misaligned assumptions, going some way to addressing the problem.
Creating empathy
Empathizing with others means we develop a capacity to put ourselves in their place, becoming better equipped to listen to them and gain insights into their lives [157]. Tremblay and Harria explain empathy as "one's capacity to gain a grasp of the content of other people's interests, and to explain what one thinks, does, or feels in relation to our capacity to respond to others ethically" [135]. Sultana makes a connection between emotions, subjectivity, and lived experience and argues for emotional political ecologies, as "resource access, use, control, ownership and conflict are not only mediated through social relations of power, but also through emotional geographies where gendered subjectivities and embodied emotions constitute how nature-society relations are lived and experienced" [124]. We consider empathy in this context from a pragmatist-dialogical perspective [157], where it is an attitude and a skill allowing practitioners to understand the lived experiences of others and respond from their own lived experiences and insights while also being able to engage in dialogue with them, negotiating complex social situations and hierarchies. Hearing the voices and lived experiences of others is essential when creating empathy regardless of the theoretical perspective adopted, as it creates a virtuous cycle: empathizing with others makes it easier for them to give voice to their lived experiences since they know it will be accepted by the listener. We predominantly understand subjective lived experiences through storytelling, as people express experiences and practitioners interpret them to give meaning to the world [37]. The medium of voice is particularly important, as stories, reasoning, emotional reactions, and values are relayed through verbal and non-verbal communication. Methodologically, subjective lived experiences are the building blocks by which empathy is created. Subjectivity has been described as "one's understanding of self and of what it means and feels like to exist within a specific place, time, or set of relationships", while recognizing that emotions "may often be triggered in response to power structures, and are frequently experienced in relation to whether one violates or meets expectations related to social norms" [88]. There is a complex, dynamic interplay between experience and expression: "life as experienced, how the person perceives and ascribes meaning of what happens, drawing on previous experience and cultural repertoires; and life as told, how experience is framed and articulated in a particular context and to a particular audience" [37]. The medium of voice offers fertile starting grounds for these accounts, as it allows people to tell their stories in their own words. To tackle voice poverty while focusing on power dynamics among marginalized rural Africans, Bidwell (2010) suggests designing an advanced storytelling process by framing design dialogically [54], using cell phones to localize storytelling [14]. Others have gone as far as to suggest that storytelling in marginalized communities is an inherent good that gives community members a sense of well-being [78].
On a more cautionary note, Bruner pointed out that stakeholders' "narratives are not transparent renditions of 'truth' but reflect a dynamic interplay of life, experience, and 'stories' and these can provide valuable insights into how people deal with certain situations, challenges and what they actually feel" [20]. Ho adopts the opposite perspective, describing feelings as someone's contextual and situational experiences reflected in their emotional constructs to build meaning in their social relationships and everyday lives [57]. Bruner's dynamic interplay acknowledges a divergence between truth and internal feeling, or experience and expression, helping us understand the inner life of a stakeholder, while Ho makes no claim that subjective feelings are grounded in emotion, only that they reflect them in some way. Both show that, taken in isolation, subjective accounts can undermine our understanding of external contextual factors.
Creating Empathy Takeaways
• Empathy, pragmatically, is the attitude and skills to understand and respond ethically to the lived experiences and interests of others by engaging in socially aware dialogue.
• Subjective lived experiences are the building blocks of empathy, and storytelling and dialogue are fundamental components of eliciting and understanding these experiences.
• Listening to accounts of lived experience is part of hearing community voice: how people talk and how they negotiate, conceptualize, and prioritize their challenges and needs.
• Subjective accounts give unreliable insights into external contextual factors; community voice's multiple accounts allow triangulation of the 'truth' of these factors and lived experiences.
Leveraging local knowledge and skills
The local knowledge of communities and their skills can provide transformational input into development projects but, while local knowledge is widely touted by researchers in this area [25], practical documentation of its application in HCI4D projects is harder to come by. Gachanga (2005) observes that "despite acknowledgment of the important role Indigenous knowledge plays in sustainable development and peace building, many governments, donors, and NGOs appear to make little use of this valuable resource. Their recognition of Indigenous knowledge often amounts to little more than lip service, seldom translating into action or funding" [44]. Skills that are frequently needed in the community include the ability to translate into participants' languages [162], identify potentially interested community stakeholders [33], find suitable sites to deploy interventions, navigate local holidays and traditions [102] and deal with the logistics of bringing equipment into a local community and leaving it there [99]. In addition, more specific skills will be needed based on the specific nature of the intervention. We can broadly break these areas down into traditional knowledge, embedded knowledge, skills, resources and local expertise.
Local knowledge is often referred to using terms such as "traditional knowledge", "Indigenous knowledge", "lifelong learning" and "knowledge society" [41,145]. More specifically, it is a collection of common and shared experiences and local concepts that are structured by the surroundings along with beliefs and perceptions to deal with problems, and generate new information [41]. Warren's framing of Indigenous knowledge shows its value: it "contrasts with the international knowledge system generated by universities, research institutions, and private firms. It is the basis for local level decision making in agriculture, health care, food preparation, education, natural resource management, and a host of other activities in rural communities" [146]. Similarly, Sithole notes that little Indigenous knowledge has been captured and recorded for preservation, yet it represents an immensely valuable database [119]. Their framing suggests that Indigenous knowledge is inherently unknown and new to research. These skills are passed on through word of mouth while local expertise is often identified through social networks. Battiste and Henderson, in describing the sharing of this knowledge, illustrate why community voice is particularly suited to learning about it: "through personal communication and demonstrations from the teacher to the apprentice, from parents to children, from neighbor to neighbour" [10].
Beyond helping with practical considerations of deployment or process, local knowledge provides a starting point for educational intervention as there is little point in teaching people what they already know. The importance of Indigenous knowledge in agriculture for disadvantaged groups is hard to overstate. For example, floating gardens are an ancient practice for growing crops, vegetables and spices in the wetlands of the southern floodplains of Bangladesh [7]. Using local traditional knowledge for agricultural practice, in collaboration with a local development project, local communities have developed a technique to build floating platforms to cultivate crops, vegetables and farm fish. In 2015, the UN's Food and Agricultural Organization declared Bangladesh's floating gardens to be a globally important agricultural heritage system [126]. Awori et al. investigate how digital technologies support practicing Indigenous knowledge and suggest directions for innovations that translate, formulate and support Indigenous knowledge in transnational contexts [5].
Local Knowledge and Skills Takeaways
• Local or Indigenous knowledge and skills are structured by common experiences, local concepts, beliefs, and perceptions and are not a static repository but an always evolving system.
• The knowledge is inherently new to external parties and so it has transformational power for development projects to help address local problems.
• The evolving knowledge is distributed between many people, transferred semi-systematically and orally, so ongoing, dialogical community voice is the only way to capture it.
• Development interventions can support Indigenous knowledge, and it provides a critical starting point for educational interventions.
Building trust and buy-in
HCI4D projects cannot make lasting, effective changes without trust and buy-in from the stakeholder community and, ideally, the wider local community [56,63,103]. Buy-in has an even more substantial impact when accompanied by direct support built on local skills and knowledge. Trust is necessary because of the leap of faith that is needed to engage with projects and believe they will have an impact [46]; because they touch on sensitive topics that require personal disclosures; and because they can place participants in vulnerable situations [162]. Trust strengthens optimism, engagement, and support for the project in the community. This effect is akin to the phenomenon of buy-in in participatory design, which stresses the importance of selecting engaged co-designers [53,113]. Trust acts as a moderator in discussions with participants, making it easier to develop empathy with them and access local skills and resources. In addition, if a project can foster trust it will be more likely to receive unexpected or unsolicited support and direction [46]. Trust is not a simple concept in HCI4D. For example, Cheema calls it "a basic consensus among members of a society on collective values, priorities, and differences and on the implicit acceptance of the society in which they live" [24] but the inherent conflict in communities discussed by Gujit [50] and Mohan [87] undermines this definition. Alternatively, Blind's model of trust proposes two distinct types: "social trust" when a person has positive attitudes toward other members of their community and "political trust" when a person feels confident in and able to appraise or criticize the government and its associated institutions [15]. If the stakeholder community is authentic, can we find social trust within it? If not and it is a convenient abstraction, can social trust even exist? More widely, do members of the stakeholder community have social trust in their local community or are they excluded or disempowered? In the context of development work, we can see that the project itself is a form of institution, and Blind's "political trust" would then imply that building trust means that stakeholders have confidence in both the work of the project and the viability of criticizing it. Seeking a community voice in dialogue helps realize both of these qualities. Allowing participants to have constant input into the project and demonstrating accountability shows respect from the project team for the local community, setting up a virtuous cycle that encourages further engagement [46,93]. Hearing community voice as a medium also builds trust and allows participants to set project agendas [80,98].
Trust and community buy-in are widely supported in HCI4D literature and projects [47,84,96,140]. However, gaining trust is pragmatically challenging as project leaders need to find ways to solicit and motivate early engagement with the project while at the same time managing community expectations and trying to achieve a genuine, positive impact. Beyond this, international development work is typically constrained by its funding; the work cannot continue outside of its scope without people in the stakeholder community taking up the mantle and ensuring its sustainability, something they will only do if they have bought into the concept. Examples of sustained buy-in are relatively rare, in part because they happen after the project that would be most suited to documenting them has finished. When it does occur, we see computer skills taught in developing contexts [90], digital education centers [129], and citizen journalism [79] all outlasting the period of funding that established them.
Trust and Buy-in Takeaways
• HCI4D projects need trust and buy-in from the stakeholder and local community for effective and lasting change because it strengthens optimism, engagement, and support for the project.
• Trust acts as a moderator in discussions with communities, making it easier to develop empathy with them and gain access to local skills and resources.
• Hearing community voice in dialogue is a virtuous cycle: it realizes accountability, shows respect to the local community, and gives confidence in the viability of criticizing the project.
APPLICATION OF COMMUNITY VOICE
We developed a community voice framework in response to the challenges that we encountered in our own work. In this section, we take a reflexive look at the projects that motivated the framework's development and show how its application was beneficial as we started to formulate it, or could have been when it was not yet fully formalized. Table 1 has summary descriptions of the three projects: "Small Fish and Nutrition (Small Fish project)", "Improving Food Security of Women and Children by Enhancing Backyard and Small Scale Poultry Production in Southern Delta Region (Poultry project)" and "Participatory Research and Ownership with Technology, Information and Change (PROTIC project)". The projects ran between 2010 and 2021 in collaboration with large development organizations in Bangladesh. They were focused on intervention rather than research. Two were supported by donations and one by a philanthropic endeavor, and all of them at least touched on the challenges women faced in their rural communities. The projects had mixed Western/Bangladeshi management teams, technical support teams and technology development teams but fieldwork was conducted primarily by local personnel. The Small Fish project tried to diversify food sources for rural fishing villages, the Poultry project provided chickens and poultry sheds for women farmers and the PROTIC project tried to increase women farmers' access to agricultural information through smartphones. All the projects combined equipment and resource donations with information disseminated through posters, leaflets, and manuals for training leaders in the communities. In the PROTIC project, our role was to develop and implement an interactive mobile phone information system and responsive community "hub" in Bangladesh for isolated communities on sand islands (char) and in coastal communities. In each case, we saw some successful improvements in social standing, economic capacity, and the decision-making power of women farmers in Bangladesh. In the Small Fish and Poultry projects we led project design and high-level implementation.
The Small Fish project encountered several challenges that left us with questions about why certain issues had emerged. During the Poultry project, we were sensitized to the problems and so were able to interrogate them when they emerged, but we did not understand why some of our efforts seemed to work and others did not. During the PROTIC project, we were able to begin to understand the challenges we faced as we developed the community voice framework, and at the end of PROTIC we formalized the framework. We go through each of the benefits of the community voice framework to reflect on how it could have helped us understand failures and successes in the work that motivated its development.
Contextual Understanding
In PROTIC, misunderstanding the context created burdens for our participants as a quarter of the women involved did not have electricity at home, and so had to pay others to charge the smartphones we gave them. The issues arose because the program decision makers missed elements of the infrastructure that formed the community's outer context because they went through two levels of intermediaries to find participants. Although their voices were heard by the local intermediary practitioners, this understanding did not move through the organizations managing the project. This speaks to the challenge of voice as a medium: while compelling and authentic, it is hard to share within and between organizations when teams cannot hear the voice directly. We needed representatives from each organization on the ground to hear and relay the voices, or community stakeholders present in project management roles.
PROTIC also found out too late that in rural Bangladesh, nothing is seen as women's property, so their mobile phones were accessible to relatives. Additionally, as these women came from low-literacy backgrounds and had never used smartphones before, they were not aware of mobile security issues and practices. Thus, keeping a mobile phone outside for charging saw some of their personal information stolen and used to blackmail them, and inappropriate, edited photos that made them appear naked were posted to their Facebook accounts, causing serious social and familial tensions. In stakeholders' families, there is a complex relationship context where women need to share updates on what they do, with whom they communicate, and for what purposes. This inner contextual challenge was not captured early enough in the formulation of the project, meaning that its goals were fundamentally misaligned with day-to-day lives. In-depth discussions that dig down to the level of the inner context are the only way to address these issues, and we needed spaces for our practitioners to hear voices in one-to-one settings where sensitive issues of inner context can be raised without power dynamic constraints. The Small Fish project encountered a simple but unanticipated challenge in the outer context when the dried fish it used to support nutrition were rejected by stakeholder communities because they only ate fresh fish. We had been told about the relevant contextual factor of a fresh-fish-based diet, but we had not registered its significance or the fact that dried fish's flavor, texture, and density made stakeholders feel sick.
Empathy and Insight
In contrast to the contextual challenges, PROTIC did succeed in creating empathy and insight because we heard stories that spoke to our communities' feelings when dealing with extrinsic expectations and stories that built vivid impressions of their lives. We heard stories of forced marriage, giving birth, days of unpaid work at home and in fields, being ordered around by husbands and other family members, and limited expectations of ever having something of their own, recreation time, or good food and clothes. Moderating our understanding of these vivid stories, we could understand lived experiences through discourse on internal views and motivations. One participant might explain her belief that God created this specific situation for her, which allows her to take a deep sense of satisfaction in the hard labor that her life involves. Through her extrinsic and actual experiences, she has a strong emotional attachment to her husband, children and even the livestock with whom she spends her daily life. This perspective leads to a substantially different framing of her lived experience, which was not obvious to Western practitioners who come from privileged backgrounds. All three of our projects worked with stakeholders in communities where domestic violence is commonly experienced, and a significant number of stakeholders excused this, feeling it was their husbands' right to discipline them and that it comes from a part of the relationship they see as loving or caring. In Small Fish, we offered training in the mornings. Stakeholders valued this, but it clashed with the time they would usually do domestic chores. If they came for the training, we placed them at risk of domestic violence. We should have offered the training at flexible times to avoid this risk. In general, hearing these stories was difficult for researchers grounded in Western values. While we did not believe the violence was acceptable, we could understand that empowerment as a means to avoid it would not initially motivate all the stakeholders to participate in the projects; it was better to focus on empowering them to support their families, an almost universal value among our stakeholders.
Local Knowledge and Skills
In the Poultry project we failed to explore the community experts' local knowledge and skills. Traditional poultry houses and project-funded poultry sheds were very different. Though the new sheds looked modern, the ventilation was inappropriate and the floor design was unsanitary, so chickens became sick or died from preventable diseases. The community shared their insights, saying that they generally build sheds with bamboo, wood, and straw for better ventilation and use polythene underneath the floor for easier cleaning and sanitizing. Voice as an input was lacking: we needed to capture more data from the community, focusing on technical challenges. Our mistake had been modeling community engagement as being about buy-in while not respecting the community's insights and expertise in the practical problems of their day-to-day lives. The Small Fish and Poultry projects both encountered a similar problem when they failed to integrate local knowledge into training material. The training materials they created were inappropriate because they assumed, contrary to reality, no initial knowledge of farming practice and were overly academic and long-winded while not acknowledging practical insights from the community. In addition, in some places, the academic guidance even seemed incorrect where it clashed with local knowledge. For example, a recommendation that seeds could be planted at any time of the day did not perform as well as planting pre-dawn, and a crop that we recommended be planted in direct sunlight performed better in the shade, possibly because the local environment was hotter than the place where the training materials had been developed. We also saw the importance of extra time and attention when moving from voice as a medium for communication to a written form of communication. Finally, we observed that our process needed to more proactively find local experts who had many years of experience farming to solicit their input. These mistakes all stem from subsuming communities' needs under the project, as our work favored glossy training brochures and modern-looking infrastructure over well-grounded solutions.
Trust and buy-in
As the PROTIC project neared its completion, participants began to worry about the cessation of the support they had come to rely on. The project had given them the opportunity to phone a local telecoms call center to ask questions about agriculture, use an Internet search engine, or attend in-person training sessions. The project had been better sensitized to community voice by reflecting on our work through the nascent idea of the community voice framework and had some successes because of it, so the community had come to buy in to it. As a result, they took the initiative to address the endpoint and go beyond the scope of the project. Those who could write started to document practices, training, and common questions and answers so they would not be lost at the end of the project. The practice became widespread, and some women started to become community hubs that others would go to for advice and support in setting up their own farms. The value that the project offered and the extensive dialogue between practitioners and stakeholders, and between stakeholders, created deep trust and buy-in and even meant that the project started to create its own authentic community. Participants who had only been united through the project's artificial women farmers concept now had reason to talk with one another and share support. The project relied on voice as a medium to communicate with its participants, but when the limits of voice as a medium in storing information became a problem, we saw that the buy-in motivated our community to work past it. In contrast, problems with our community voice process undermined trust during the initiation of the Poultry project, leading to serious problems. The commissioners did not consult with the local leaders and communities when they hired a vendor who had limited experience working with the stakeholder community. The chickens distributed were not brought from the same community where the project was implemented and most of them did not survive due to their susceptibility to local poultry diseases. Many of the project beneficiaries were frustrated because this risk was one that local people would have readily anticipated had they been asked, but because of a lack of trust, they did not feel empowered to volunteer their knowledge or correct our approach. This major failure wasted resources and was corrected only by substantially more investment.
DISCUSSION
Community voice offers a tool to understand power dynamics but also suggests sensitivity to those dynamics and how they might play out. In our discussion, we reflect on how we can understand multiple communities overlapping with each other and the individuals within them holding their own values and goals. We start by discussing polyvocality as it applies to community voice and then consider how power dynamics play out through community voice in development work. Finally, we consider the historic lessons available to us about the usage of community voice and think briefly about where it might go in the future.
Community voice and polyvocality
Terminology matters. When using the term community voice, do we mean a collective singular voice or do we have a pluralistic understanding (community voices) [50,87,94]? We suggest that most projects have at least two communities, a "stakeholder community" and a "local community". The "stakeholder community" voice needs to be understood in a nuanced way. Many international development efforts construct artificial communities where disempowered groups are targeted for support, but part of the reason they are disempowered may be not having a community: a tightly interconnected social communications network [107]. This is not to say that these concepts lack value, but they do not always fit with, and are certainly not analogous to, existing local communities. Broadly speaking, we see two types of challenges in conceptualizing community voice: first, thinking of it too broadly, privileging gatekeepers or the local community for the sake of convenience, diluting the most important accounts of lived experience and undermining trust within the community; second, thinking too narrowly and limiting the potential to find local skills and knowledge or understand the contextual factors influencing the community's life. Our community voice framework suggests a more thoughtful approach to who your community is and opens the idea of redefining it as you work, while committing to hearing community stakeholders' voices before major work as part of understanding context and creating empathy. Thought of another way, input into the process is welcomed from all, but the medium of voice should be used with the stakeholder community and dialogue should allow them to exert control. However, engagement with the local community outside of direct stakeholders still has an enormous amount of value, as they can contribute their local skills and knowledge and help understand external contextual factors.
In discussing polyvocality, we also need to acknowledge that some of the voices we hear are the researchers' own, and we should not ignore, reify, or delegitimize them. It is crucial to turn to reflexive and critical accounts with listening and dialogue to engage with the role of authorial voice and subjectivity when conducting community-based development [13,35,75,108]. To understand the role of researchers' voices in a more nuanced way we can look to the work of Taylor, who pointed to a shift to an "inside" from "right there", embracing researchers' subjectivity through their own voices [130]. Alternatively, Le Dantec and Fox suggest accounting for this researcher voice with work before the work (community-based design work) "to create productive partnerships in community settings: developing relationships, demonstrating commitments, and overcoming personal and institutional barriers" [75]. Taylor et al. focused on a mechanism of personal debriefing and reflexivity for design research documentation practice [132].
The PROTIC project considered a community too broadly and lost important insights into inner context as a result. The initial work sought out community leaders, successful women farmers who were wealthier and had higher social status, and their accounts of lived experience missed some of the problems other people were exposed to because of low socio-economic status. In the Poultry project, we had issues defining a community too widely and not recognizing and prioritizing the stakeholder group for that work. Agriculture extension officers within the communities we worked with were the primary source of information because they were easy to work with, had access to ICT and were able to drop in on meetings with the project team. They also provided quality input on many of the problems the women farmers experienced. However, they lacked expertise in the practical challenges of animal husbandry, and our reliance on their voices meant that other members of the local community, and even the specific stakeholder community, did not trust us and were not empowered to correct mistakes the project made with poultry sheds and poultry purchases.
In contrast, the Small Fish project avoided problems of a community focus that was too wide or too narrow by reflecting on and changing our understanding of the stakeholder and local community voices based on what we heard. Initially, we talked with members of the stakeholder community and heard their accounts of health and nutrition challenges. Based on these accounts of lived experiences, we understood the many unique challenges they faced and identified local healthcare workers who had experience supporting people and an understanding of the medical reality of these problems. We expanded our understanding of the local community to include them, and talked with them to gain insights from their expertise about the specific problems and solutions available to the community. However, their input was broad, listing scores of problems, while talking with the stakeholder community clearly identified priorities for them.
Power dynamics in the community
We must carefully consider the most power-effective voices while identifying and prioritizing the voices of disadvantaged stakeholder communities to ensure that their voices are heard for decision-making [43]. Influential people in local communities may be political leaders, community leaders, representatives of local development organizations, religious leaders, or school teachers and doctors. These people have inputs to share for project development but may be focused on maintaining hierarchical structures in their communities, so sometimes powerful voices can suppress disadvantaged community voices [64]. However, they also have useful insights for the development of the stakeholder community and can add value. As a result, capturing their voices has two benefits: first, we can capture insightful perspectives, and second, we can map out whose voices could be a barrier to listening to a stakeholder community's voice and respecting their priorities for project development.
In the Small Fish project, during fieldwork, we found that our local NGO staff and gatekeepers had rich insights into women farmers' challenges and contexts which were very useful for project design. However, we also found that they influenced the stakeholder community to share specific problems and focus development on solutions to the issues they prioritized. Bidwell and Hardy similarly observed a dilemma in their fieldwork with rural villagers from South Africa and regional Indigenous Australia when they applied participatory design and ethnographic methods to amplify community voice [12]. They suggest that, in enabling local participation, more consideration should be given to power structures and time investment within a community. In the field, we ensured that these influential figures had limited access to group activities as part of creating safe spaces for dialogue and ensuring that we developed empathy. We should capture the most power-effective voices in a siloed manner, limiting their access to community discussions. By maintaining silos during community consultation, the stakeholder group can suggest potential project commissioners: individuals who are trustworthy, situated in the community, and have insights that can produce better suggestions.
Cultural expectations of voices and gender
A particularly prevalent power dynamic that plays out between groups rather than individuals is community expectations of voice. In all of the projects we reflect on, there has been a strong sociocultural expectation that women have less of a voice to share their perceptions and less agency in decision-making than men. This is a widely reported challenge in international development [65,66,123] and any attempt to solicit community voices needs to actively counteract this if it is to gather rich accounts of lived experience from women participants. Other CSCW and HCI4D work highlights the importance of focusing on marginalized people's culture and language as a tool to keep face assumptions, cultural communication, and the potential repercussions in cross-cultural design in check [45,83,111,152]. Our experiences suggest that we should understand and consider sociocultural contexts and cultural expectations before designing community engagement, with special care to capture and understand disadvantaged community voices, as well as foster trust-enhancing relationships with those stakeholder communities.
Based on our practical experiences, when working with disadvantaged stakeholders, we must actively place the stakeholder communities' voices at the center of the process to build trust. We argue that we can ensure representative community views by capturing voices from diverse groups. However, we emphasize that disadvantaged communities for whom a development project or program is initiated need to be prioritized when capturing voices. By centering these marginalized community voices, we can move toward a more equitable society in which marginalized groups can be an integral part of the traditional, multisectoral decision-making stakeholders' platform for informing programs and policies. Hence, when designing community participation, project commissioners, for example, should center disadvantaged communities.
Moving beyond research and practice rhetoric
Participatory development has become development's current-day orthodoxy [69]. Community voice needs to go beyond posturing and be actuated within research and project practice (cf. [36]). Traditionally, within development, numerous tools and techniques are subsumed within community voice approaches [82,95] and participatory researchers draw on non-literate and oral communication [101]. Despite these accommodations, Mercer et al. state that "their [participants'] involvement in decision-making throughout the process is often questionable" [82]. While participation has been lauded and the inclusion of local people in decision-making has been strongly encouraged, it remains a buzzword that is rarely fully realized [95].
The history of community voice is not as virtuous as we might expect. Colonial governments valued community voice "as a safety valve to silence colonial subjects demanding space" [91]. Voice can be elicited in an instrumental manner, serving the interests of outsiders by outsourcing the need for project labor or time to the time-poor community. For instance, women with heavy domestic responsibilities may not be able to sustain large amounts of time away from home [147]. Furthermore, non-participation should be considered a legitimate form of participation because marginalized groups may not voice their interests due to low expectations of change "born out of a general sense of powerlessness or earlier disappointments" [147]. Despite this tainted past, community voice approaches have a positive effect within contemporary usage as a means of subverting the dominance of top-down strategies within international development and developing new ways of engaging local people in decision-making [69,82].
The future of community voice cannot be predicted, but new technology and current research show that there may be exciting avenues to both support and develop it. Erete et al. argued that social media platforms strengthen communities' voices by increasing avenues for civic engagement beyond offline activities [38]. Others have looked at community media (e.g., participatory video [9,138]) and framed it explicitly as a way to enable communities to share their voice more effectively. Varghese et al. found that community media can be a rich source of ongoing community voice through periodic community involvement in data collection [138]. Srinivasan and Burrell examine mobile phone use among fishers in Kerala, tracing how historical, geographic, political, and economic conditions shape the importance of price information to fishers and economists [122]. Others, like Saha et al., take a more holistic approach, including operational concerns around capturing citizens' voices as well as the broader program and policy landscape, and looking at how community voice can be incorporated within that [112]. Campbell and Cornish argue that "transformative communication" is crucial to hearing community voice in meaningful ways, emphasizing both the development of a community's voice and the importance of enabling environments in which it can be heard [23]. They suggest that "transformative communication" for voice needs to create democratic and accountable leadership and recognize the rights of disadvantaged people to social, economic, and political empowerment.
Framework limitations
The projects we based this work on are located in regions of Bangladesh that have their own unique characteristics and challenges for international development work, which might have influenced the framework's development. Most significantly, the need for an understanding of the benefits of polyvocal community voice might be especially pronounced when working in this context, as gender norms, religious norms, social hierarchies, and significant wealth inequality are prevalent in the rural Bangladeshi communities that we worked with. Following from this, development organizations in this region might be particularly prescriptive in their engagements with local communities as well. Our research team included several people from Bangladesh, and they were, among other roles, primarily responsible for data collection, leading to two potential concerns. First, they did not have to overcome language barriers to work with the community, whereas in many other developing contexts, multiple languages are spoken in a single target community. Second, this might introduce its own oversight challenges in the on-the-ground discussions we have had with our stakeholders. Finally, in contrast with some other locales in which we have performed HCI4D work, the communities in Bangladesh held proportionate views on international development efforts, not particularly distrusting them while being aware of their shortcomings or weaknesses.
CONCLUSION
In this paper, we have discussed the oft-used metaphor of "community voice", critiquing the constituent terms of "community" and "voice".We followed this with a conceptualization of the different types of voice evident in HCI projects working with marginalized or underserved populations, followed by a breakdown of the benefits that hearing authentic community voices brings.We then critiqued our own projects and put forward considerations for CSCW and HCI researchers working with communities that are interested in engaging with a more nuanced understanding of community voice.
The history of CSCW and HCI for development is mixed, with as many prominent failures as successes. If CSCW and HCI for development researchers can make use of a more rounded concept of community voice, thinking of it as a process in their work that starts and ends with their target communities, we may see more projects reaching the gold standard of sustainable developmental interventions. The benefits of hearing community voice go unrealized for a wide range of reasons, of which misunderstanding its nature and benefits is only one. However, we hope that CSCW and HCI for development researchers will be more motivated and better equipped to argue for the benefits of community voice in their work with such an understanding in place.
Table 1 .
Background information on three development projects in Bangladesh in which the authors were involved between 2010 and 2021, summarizing their purpose, scale, and funding source. Note: The three projects are old and their project websites no longer exist.
"year": 2023,
"sha1": "8bf86b8c0610fbe911efbbe2d84ce0018b21dc91",
"oa_license": "CCBYNCSA",
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3610174",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bb5d3cdf831144fe15dc986f516ae4a31a282463",
"s2fieldsofstudy": [
"Computer Science",
"Sociology"
],
"extfieldsofstudy": []
} |
Bone repair using BMP-2 is a promising therapeutic approach in clinical practices, however, high dosages required to be effective pose issues of cost and safety. The present study demonstrates the potential of low dose BMP-2 treatment via tissue engineering approach, which amalgamates 3-D macro/microporous-nanofibrous bacterial cellulose (mNBC) scaffolds and low dose BMP-2 primed murine mesenchymal stem cells (C3H10T1/2 cells). Initial studies on cell-scaffold interaction using unprimed C3H10T1/2 cells confirmed that scaffolds provided a propitious environment for cell adhesion, growth, and infiltration, owing to its ECM-mimicking nano-macro/micro architecture. Osteogenic studies were conducted by preconditioning the cells with 50 ng/mL BMP-2 for 15 minutes, followed by culturing on mNBC scaffolds for up to three weeks. The results showed an early onset and significantly enhanced bone matrix secretion and maturation in the scaffolds seeded with BMP-2 primed cells compared to the unprimed ones. Moreover, mNBC scaffolds alone were able to facilitate the mineralization of cells to some extent. These findings suggest that, with the aid of 'osteoinduction' from low dose BMP-2 priming of stem cells and 'osteoconduction' from nano-macro/micro topography of mNBC scaffolds, a cost-effective bone tissue engineering strategy can be designed for quick and excellent in vivo osseointegration.
Cell culture
Murine mesenchymal stem cells (C3H10T1/2 cells) were cultured in DMEM high glucose medium supplemented with L-glutamine, 10% FBS, and 1% antibiotic-antimycotic solution (containing 10,000 U penicillin, 10 mg streptomycin, and 25 µg amphotericin B per mL) and propagated at 37 °C in a humidified atmosphere of 5% CO2. Once the cells reached 80-85% confluence, they were harvested using trypsin-EDTA solution, resuspended in fresh culture medium at the desired density, and used for seeding the scaffolds.
Cell seeding and culture on 3-D mNBC scaffolds
Before cell seeding, the scaffolds were sterilized by autoclaving at 121 °C for 20 min, followed by immersion in 70% ethanol under UV for 2 h. Subsequently, the scaffolds were washed twice with sterile PBS and submerged in cell culture medium for 12 h at 37 °C. After incubation, the extraneous culture medium (i.e. unsoaked medium) was removed and cells were seeded onto the scaffolds at predetermined cell densities as follows:
12-well sized scaffolds (~18 mm diameter, ~9 mm thickness) - 100,000 cells/scaffold using 200 µL cell suspension
24-well sized scaffolds (~12 mm diameter, ~6 mm thickness) - 50,000 cells/scaffold using 100 µL cell suspension
48-well sized scaffolds (~8 mm diameter, ~4 mm thickness) - 30,000 cells/scaffold using 50 µL cell suspension
Following cell seeding, the scaffolds were kept at 37 °C under a humidified atmosphere with 5% CO2 for initial cell attachment, and the culture medium was added after 4 h of incubation to prevent unnecessary cell migration to the bottom of the plate. The plate was again placed in a humidified incubator with 5% CO2 at 37 °C for the duration of the experiments and the culture medium was replenished every alternate day.
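As a rough consistency check on these seeding conditions, the number of cells seeded per unit scaffold volume can be estimated by treating each scaffold as a cylinder. This is a back-of-envelope sketch only: the dimensions are the approximate values quoted above, and real scaffolds are not perfect cylinders.

```python
import math

def seeding_density(diameter_mm, height_mm, cells):
    """Cells seeded per unit scaffold volume (cells/mm^3),
    approximating the scaffold as a cylinder."""
    volume_mm3 = math.pi * (diameter_mm / 2) ** 2 * height_mm
    return cells / volume_mm3

# Approximate scaffold dimensions and cell numbers from the protocol above
for label, d, h, n in [("12-well", 18, 9, 100_000),
                       ("24-well", 12, 6, 50_000),
                       ("48-well", 8, 4, 30_000)]:
    print(f"{label}: {seeding_density(d, h, n):.1f} cells/mm^3")
```

Such a check makes it easy to see how seeding density scales across the three scaffold sizes when planning comparable experiments.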
Cell attachment
For cell attachment studies, cell-seeded scaffolds (24-well sized) were incubated at 37 °C under a humidified atmosphere with 5% CO2 for 12 h. Following incubation, the scaffolds were washed twice with PBS and fixed using 4% formaldehyde for 30 min. After further washing with PBS, the scaffolds were dehydrated using a graded series of ethanol solutions (10%, 20%, 30%, 50%, 70%, 90%, 100%(2x) for 5 min at each step), immersed in 100% HMDS and left to dry at RT.
The dried cell-seeded scaffolds were sputter-coated with gold for 90 s and observed under FE-SEM (Carl Zeiss, Ultra plus, Germany).
MTT assay
For this, 12-well sized mNBC scaffolds seeded with 100,000 cells/scaffold were cultured for up to one week and the MTT assay was performed at predetermined time points (1, 4, and 7 days post-seeding) according to the protocol mentioned elsewhere [20]. In brief, MTT solution was added to the scaffold-containing wells at a final concentration of 0.5 mg/mL at the specified time points, and the plate was incubated in the dark for 4 h under a humidified atmosphere of 5% CO2 at 37 °C. Post incubation, the solution in the wells was removed and the formazan crystals formed over the scaffolds were dissolved in DMSO by holding the plate on a shaker in darkness for 1 h, followed by measuring the absorbance at 570 nm. Scaffolds that were not seeded with cells were taken as a control to account for non-specific adsorption of MTT to the scaffolds. Besides quantitative measurements, visual and microscopic images of the formazan crystals over the scaffolds were also obtained using a digital camera and a CCD camera attached to an inverted light microscope (Zeiss, Axiovert 25, Germany), respectively.
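The cell-free scaffold control described above amounts to a simple blank subtraction before comparing time points. The following minimal sketch illustrates that correction and the resulting fold change in metabolic activity relative to day 1; the A570 readings used here are hypothetical, not values from the study.

```python
def corrected_absorbance(sample_a570, blank_a570):
    """Subtract the cell-free scaffold control (non-specific MTT
    adsorption) from the cell-seeded scaffold reading."""
    return max(sample_a570 - blank_a570, 0.0)

def fold_change(readings, blanks):
    """Fold change in corrected metabolic activity relative to
    the first time point (day 1)."""
    corrected = [corrected_absorbance(s, b) for s, b in zip(readings, blanks)]
    return [c / corrected[0] for c in corrected]

# Hypothetical A570 readings for days 1, 4, and 7
fc = fold_change([0.45, 0.82, 1.30], [0.05, 0.05, 0.06])
print(fc)
```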
Live/dead cell assay
C3H10T1/2 cells (at a seeding density of 100,000 cells/scaffold) were grown on 12-well sized scaffolds for 1, 4, and 7 days. At the defined time points, the scaffolds were taken out, rinsed gently with PBS, and stained with equal volumes of acridine orange and ethidium bromide at a final concentration of 5 µg/mL for 30 s. Scaffolds were then visualized and imaged under an inverted fluorescence microscope (Carl Zeiss Axio Vert.A1, USA) to assess cell viability and proliferation.
Cell infiltration
At 1, 4, and 7 days of culture, the cell-seeded scaffolds (48-well sized) were removed from the tissue culture plate, washed twice with PBS, and fixed in 4% formaldehyde for 30 min at RT. Post fixation, paraffin embedding of the scaffolds was done according to the protocol mentioned elsewhere with minor modifications [21]. Briefly, the scaffolds were dehydrated in escalating grades of ethanol (10%, 20%, 30%, 50% for 10 min at each step, and 70%, 90%, 100%(2x) for 60 min at each step, at RT), treated with xylene (100% for 60 min(2x) at RT) and infiltrated with paraffin wax at 60 °C for 60 min(2x). The scaffolds were then embedded in the wax and cut transversely into ~0.5 mm thin slices using a microtome. Afterward, the slices were dewaxed using xylene, rehydrated using decreasing concentrations of ethanol (100%(2x), 90%, 70%, 50%, H2O(2x) for 5 min at each step at RT), permeabilized with 0.1% Triton X-100 for 20 min and then stained with DAPI (0.3 µg/mL) for 20 min. The sections were then visualized and imaged under an inverted fluorescence microscope.
Cell morphology and cytoskeleton arrangement
Fluorescent staining of actin filaments was done to examine the morphology and cytoskeleton arrangement of cells grown over mNBC scaffolds. Briefly, 3-D cell-scaffold constructs were washed thrice with PBS and fixed using 4% formaldehyde for 30 min at RT. Following further washing with PBS, the cells were permeabilized using 0.1% Triton X-100 for 15 min at RT and stained with phalloidin-FITC (50 µg/mL) for 4 h at RT in darkness. The constructs were again washed with PBS (3x) and counterstained with DAPI for 20 min at RT. Subsequently, the constructs were washed several times with PBS to remove the unbound dye and the cells were visualized and imaged using a Confocal Laser Scanning Microscope (LSM 780, Carl Zeiss, Germany). Stacks of confocal images were obtained by optical slicing of the scaffolds in the Z-direction from top to bottom with 10 µm slice thickness up to 100 µm via the Z-stack function of confocal microscopy.
Osteogenic studies
For osteogenic studies, four experimental groups of cell-scaffold constructs (Fig. 1) were maintained:
(i) Group PM: cell-seeded mNBC scaffolds cultured in proliferation medium
(ii) Group PMB: BMP-2 preconditioned cell-seeded mNBC scaffolds cultured in proliferation medium
(iii) Group OM: cell-seeded mNBC scaffolds cultured in osteogenic medium
(iv) Group OMB: BMP-2 preconditioned cell-seeded mNBC scaffolds cultured in osteogenic medium
Prior to seeding, C3H10T1/2 cells were divided into two batches. One batch of cells was preconditioned with BMP-2 by incubating the cells (100,000 cells/mL) in proliferation medium (DMEM + 10% FBS) containing 50 ng/mL BMP-2 for 15 min at 37 °C in a humidified atmosphere of 5% CO2. The other batch of cells was used as such without any treatment.
Scaffolds (12-well sized) were then seeded with the respective batch of cells (200,000 cells/scaffold) as depicted in Fig. 1, and cultured in proliferation medium for two days at 37 °C under a humidified atmosphere of 5% CO2. After two days, the culture medium was replaced with osteogenic medium (DMEM, 10% FBS, 10 nM dexamethasone, 50 µg/mL ascorbic acid, and 10 mM β-glycerophosphate) in groups OM and OMB, while groups PM and PMB were provided with proliferation medium for the duration of the culture. The culture media were changed every second day.
Alizarin red S (ARS) staining
For ARS staining, the cell-scaffold constructs were washed twice with PBS, fixed in 4% formaldehyde for 30 min, and then rinsed thrice with deionized water, followed by staining in ARS solution (1%, pH 4.3) for 20 min at RT. Excess stain was removed by submerging the scaffolds in deionized water overnight. The scaffolds were then dipped 20 times in acetone, 20 times in acetone-xylene (1:1) solution, followed by clearing in 100% xylene for 15 min. The constructs were photographed using a digital camera and the microscopic images were obtained using a light microscope equipped with a CCD camera.
The quantification of mineralization was performed using a colorimetric method [22].
Briefly, after clearing with xylene, the scaffolds were air-dried, treated with 10% acetic acid (and crushed to some extent), and incubated at RT for 30 min under shaking to extract calcium-bound ARS from the scaffolds. The mixture obtained was transferred to a tube, vortexed for 1 min, heated to 85 °C for 10 min, and then ice-cooled for 5 min. The slurry was then centrifuged and the collected supernatant was mixed with 10% NH4OH at a ratio of 10:4, followed by absorbance measurement at 405 nm.
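Converting the A405 readings into ARS amounts is typically done against a standard curve of known ARS concentrations fitted by least squares; the paper does not detail this step, so the concentrations and absorbances below are purely illustrative.

```python
def linear_fit(xs, ys):
    """Least-squares slope and intercept for a standard curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def a405_to_ars(a405, slope, intercept):
    """Invert the standard curve: absorbance -> ARS concentration."""
    return (a405 - intercept) / slope

# Hypothetical standard curve: ARS concentration (uM) vs A405
conc = [0, 50, 100, 200, 400]
a405 = [0.02, 0.11, 0.20, 0.38, 0.74]
m, c = linear_fit(conc, a405)
print(a405_to_ars(0.45, m, c))  # sample reading -> estimated uM
```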
Visualization of Ca-P deposits under FE-SEM-EDS
After 21 days of culture, the cell-scaffold constructs were processed for FE-SEM-EDS analysis according to the protocol detailed in Section 2.6. The samples were then examined under FE-SEM (FEI Quanta 200 FEG), equipped with energy-dispersive x-ray spectroscopy (EDS) (EDAX TEAM Software), to visualize, map, and detect mineral deposits (Ca-P) over mNBC scaffolds grown under various groups.
Statistical analysis
Quantitative data are presented as mean ± standard deviation. Statistical comparisons were made using ANOVA followed by Tukey's post hoc test using Graph Pad Prism 6. The statistical significance was determined at p≤0.05.
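The one-way ANOVA underlying these group comparisons (run in the study with GraphPad Prism) can be reproduced in a few lines of plain Python. The sketch below returns the F statistic and degrees of freedom; Tukey's post hoc test is left to a statistics package (e.g. `scipy.stats.tukey_hsd` or statsmodels), since it requires the studentized range distribution.

```python
def one_way_anova(*groups):
    """Return (F statistic, df_between, df_within) for a one-way ANOVA
    over two or more groups of measurements."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical absorbance replicates for three experimental groups
f_stat, df_b, df_w = one_way_anova([1, 2, 3], [2, 3, 4], [5, 6, 7])
print(f_stat, df_b, df_w)
```

The resulting F statistic is compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the p-value reported at the p ≤ 0.05 threshold.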
Macro and micro morphologies of 3-D mNBC scaffolds
Macro/microporous-nanofibrous scaffolds are attracting widespread interest among researchers [23-25] due to their topological similarities with the natural ECM, which facilitate a high degree of cell adhesion and growth. Fig. 2 (a-c) depicts the macroscopic morphologies of the prepared 3-D macro/microporous-nanofibrous bacterial cellulose (3-D mNBC) scaffolds. The scaffolds represented are ~9 mm in thickness and ~18 mm in diameter. However, the size and shape of the scaffolds could be controlled by choosing molds of specific dimensions to meet the size and shape requirements of the target tissue. The micromorphology of the scaffolds was examined through FE-SEM (Fig. 2 d,e), which demonstrated that mNBC scaffolds had a highly porous microarchitecture compared to the native BC membrane (Figure S1) and displayed an irregular open-pore geometry with an interconnected pore configuration formed by macro-pores (>100 µm, predominantly), micro-pores (<100 µm) and nano-pores (<100 nm).
Pore size is a pivotal factor to consider when designing scaffolds for tissue engineering, as pores facilitate the migration and proliferation of cells into the core of the implant as well as support vascularization. However, there are conflicting reports on the optimal pore size of scaffolds for bone tissue engineering. Scaffolds with mean pore sizes ranging from 20 µm upward have been reported to enhance the osteogenic potential of scaffolds [23,26,27]. However, some reports have shown that microporosity (pore size <100 µm) plays a significant role in enhancing the osteogenic potential of scaffolds [28][29][30]. In this regard, Murphy et al. [31] put forward that micro-pores play a beneficial role in initial cell adhesion, but in the long run, the improved cell infiltration and migration provided by macro-pores outweigh this effect, making macro-pores optimal for bone tissue repair. Thus, the nanofibrous structure, pore size heterogeneity, and pore interconnectivity of the prepared mNBC scaffolds may be conducive to cell adhesion, proliferation, infiltration, and migration, which may lead to excellent osseointegration and vascularization upon implantation.
X-Ray diffraction and thermogravimetric analyses of 3-D mNBC scaffolds
X-ray diffraction analysis of mNBC scaffolds was carried out to observe any physical changes in the polymer after scaffold preparation. As shown in Fig. 2 f, the diffractogram showed three characteristic peaks at 2θ = 14.6, 16.2 and 22.2°, corresponding to the (110), (110) and (200) crystallographic planes of the cellulose lattice, respectively [18,32], similar to the XRD profile of the native BC membrane (Figure S2). However, the crystallinity index (CrI) of the cellulose decreased from 87.7% (native BC membrane) to 54.66% after scaffold preparation, and the allomorphic form of the cellulose also changed from Cellulose Iα (triclinic lattice structure) to Cellulose Iβ (monoclinic lattice structure). This could be due to the disruption of cellulose chain assembly during the crushing process used for the preparation of the scaffolds.
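For cellulose XRD patterns, the crystallinity index is most commonly computed with the Segal peak-height method; the paper does not state which method was used, so this is an assumption, and the intensity values below are illustrative only.

```python
def segal_cri(i_200, i_am):
    """Segal crystallinity index (%) from the (200) peak intensity
    (near 2-theta = 22.2 deg for cellulose I) and the amorphous
    minimum intensity (near 2-theta = 18 deg)."""
    return (i_200 - i_am) / i_200 * 100

# Illustrative intensities chosen to reproduce the CrI values quoted above
print(segal_cri(1000, 123))    # native BC membrane: 87.7%
print(segal_cri(1000, 453.4))  # mNBC scaffold: 54.66%
```

Peak-height CrI values are known to run higher than deconvolution-based estimates, so the method used should always be reported alongside the number.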
Thermogravimetric analysis of mNBC scaffolds (Fig. 2e) also showed a TG-DTG profile similar to that obtained with the native BC membrane (Figure S3), with three characteristic phases of weight loss [18]. However, the degradation of mNBC scaffolds started at ~285 °C and 33.19% of the mass remained at 350 °C, while the native BC membrane was stable up to ~295 °C and 43.69% residual mass was present at 350 °C. The decreased thermal stability of mNBC scaffolds could be attributed to their lowered crystallinity index, as the thermal degradation behavior is reported to be governed by the crystallinity and orientation of the BC nanofibers [32].
Degradation behavior of the scaffolds
An ideal scaffold is required either to degrade or to be resorbed by the body after tissue regeneration. Being a polysaccharide, BC is unlikely to be affected by proteases. Hence, the in vitro degradation behavior of mNBC scaffolds was examined in PBS and PBS containing lysozyme, as lysozyme is present in almost all body fluids [33]. Although lysozyme mainly breaks the β-1,4 glycosidic linkage between the NAM and NAG units of peptidoglycan, it may also affect the β-1,4 glycosidic linkage of cellulose to some extent. Thus, mNBC scaffolds were incubated in PBS (pH 7.4) and PBS containing lysozyme (0.2%, pH 7.4) separately, at 37 °C for 15, 30, and 60 days to assess the weight loss in scaffolds due to dissolution and degradation. No significant change in the weight of the scaffolds was noted in either of the solutions even after 60 days, nor was any deterioration in the scaffolds' microarchitecture observed, apart from slight crumpling (Figure S4), which indicates that the time needed for the degradation of mNBC scaffolds may be longer than the observation period.
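The weight-change measurement above reduces to a percent mass loss between the initial dry weight and the weight after incubation. A trivial sketch (the weights below are hypothetical, not values from the study):

```python
def weight_loss_pct(w_initial_mg, w_final_mg):
    """Percent mass loss of a dried scaffold after incubation,
    relative to its initial dry weight."""
    return (w_initial_mg - w_final_mg) / w_initial_mg * 100

# Hypothetical scaffold weights before and after 60 days in PBS
print(weight_loss_pct(50.0, 48.0))  # small loss, consistent with slow degradation
```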
A study by Martson et al. in a rat model reported that cellulose sponges did not degrade completely even after 60 weeks. Although they were totally filled with connective tissue after 8 weeks of implantation, the emergence of cracks and fissures and the slackening of the pore walls of the cellulose sponges were observed only after 16 weeks; hence, they regarded it as a slowly degradable implantation material [34].
Cellulose is generally degraded in nature by microbial enzymes through hydrolase attack on the β-1,4 linkages, but these enzymes are not present in mammals. In this case, cellulose degradation is likely to occur by a combination of chemical, biological, and mechanical processes. In other words, its degradation is controlled by several factors, such as its crystallinity, aggregation state, surface area, the shape and morphology of the scaffolds, and the availability of physiological enzymes that attack the β-1,4 linkages [34,35]. Thus, the highly porous interconnected geometry of the scaffolds and the reduced crystallinity of the cellulose after scaffold preparation (as mentioned in Section 3.1.2) may be advantageous for its degradation after implantation.
Cell attachment, proliferation, viability and infiltration
Cell behavior such as adhesion, proliferation, spreading, and infiltration represents the initial phase of cell-scaffold communication, which subsequently impacts further events, viz. differentiation and mineralization [36]. Although the biocompatibility of bacterial cellulose is well known, the source of BC synthesis, the method used for BC scaffold preparation, the scaffold characteristics, and the cell type being used make preliminary cell-scaffold interaction studies essential before any further studies can begin.
In this regard, Fig. 3 displays representative electron microscopic images of murine MSCs (C3H10T1/2 cells) adhered to mNBC scaffolds 12 h post-seeding. The images showed that the cells adhered well to the scaffold and maintained an extended fibroblast-like morphology. High-magnification images revealed that the cells adhered to the scaffold with their pseudopodia anchored to the walls of the BC nanofibers, an indication of a typical cell attachment and growth process.
After adhesion, cells enter the proliferation phase; hence, C3H10T1/2 cell proliferation on the mNBC scaffolds was investigated by MTT assay and the findings are summarized in Fig. 4a. MTT staining of cell-seeded scaffolds (Fig. 4a(i)) at various time points (days 1, 4, and 7) revealed that the cells were metabolically active (as indicated by purple color intensity) and gradually migrated throughout the scaffold over time. Quantified data of MTT staining (Fig. 4a(ii)) showed a significant increase (P < 0.001) in metabolic activity as a function of culture time. Live/dead staining (Fig. 4b) reaffirmed the biocompatibility of mNBC scaffolds with hardly any detectable cell death. The cells continued to proliferate throughout the scaffold, as the results showed a rise in cell number over time; however, due to the 3-D nature of the scaffolds, the scattered cells could not be clearly imaged at a single focus but were visible on different planes. These findings are attributed to the highly porous interconnected geometry of mNBC scaffolds, which ensured the diffusion of nutrients and waste across the scaffold, allowing the cells to proliferate and remain metabolically active.
Rapidly attracting, recruiting, and dispersing the surrounding cells through the 3D matrix is one of the key requisites for the success of implantable tissue engineering scaffolds [37]. Cell ingress into mNBC scaffolds has thus been investigated during culture for up to 7 days by DAPI staining of cell nuclei in scaffold cross-sections. As delineated in Fig. 5a, the majority of seeded cells were present at/near the scaffold surface and a very small percentage of cells could be seen into the depth of the scaffold on day 1. However, by day 7, cells had infiltrated, proliferated, and homogeneously disseminated throughout the entire depth of scaffold; indicating the potential of mNBC scaffolds for tissue in-growth as well.
The microscopic morphology of C3H10T1/2 cells on mNBC scaffolds was confirmed using CLSM, where the cells were stained with phalloidin tagged FITC for visualization of cytoskeletal processes and with DAPI to visualize nuclei. The results demonstrated a well spread, elongated, fibroblast-like morphology of C3H10T1/2 cells (Fig. 5b(ii)), consistent with the FE-SEM observation (Fig. 3). The cell infiltration inside the mNBC scaffolds was further confirmed by optical slicing of the scaffolds in Z-direction from top to bottom with 10 µm slice thickness through the Z-stack function of confocal microscopy ( Fig. 5b(i)). Reconstructed 3-D projection images of the scaffold on day 4 post-seeding exhibited adequate growth, proliferation, and infiltration of cells, further supporting the fact that the mNBC scaffolds are ideal for tissue engineering applications.
Osteogenic studies with BMP-2 preconditioned murine mesenchymal stem cells
Bone regeneration is regulated by a cascade of molecular factors; however, BMPs (bone morphogenetic proteins) play a critical role in initiating the fracture repair cascade and primarily act by triggering osteogenic differentiation of osteoprogenitors and recruiting MSCs to the injured area [3]. In particular, BMP-2 and BMP-7 are considered the most potent osteoinductive cytokines and are approved by the FDA for clinical practice [13]. However, the controlled delivery of BMPs to the site of injured bone tissue continues to be an arduous task due to the variable release profiles of BMPs from the carriers [3,13]. For instance, adsorbing BMPs to the implant surface leads to an early, uncontrolled, burst release of the GF when exposed to the physiological environment. Immobilizing BMPs on the surface of the implant maintains a sustained presence of GFs; however, due to the difficulty of controlling the modification site, covalent binding may block active sites of the protein and thus impede the bioactivity of the GF.
Encapsulation and entrapment of BMPs avoid the problems associated with adsorption and immobilization, and thus are the most popular ways to deliver GFs; but many of these methods expose BMPs to harsh solvents, which may distort the conformational structure of the protein and thereby interfere with GF activity. All these difficulties lead to the need for supraphysiological loading of BMPs. Therefore, novel systems for BMP delivery, and alternative approaches to harness an optimal BMP effect at low BMP concentrations, continue to receive attention.
Preconditioning strategies in stem cell therapy are currently catching the attention of researchers, as a variety of preconditioning triggers such as sublethal insults (e.g. ischemia, anoxia, hypoxia), growth factors (e.g. SDF-1, ILGF-1, BMP-2), and pharmacological agents (e.g. apelin, diazoxide, isoflurane) have been found to increase the regenerative and repair potential of stem cells and stem cell-derived progenitor cells [14][15][16][17]. Thus, we preconditioned MSCs with a very low dose of BMP-2 (50 ng/mL) for 15 minutes prior to seeding on the scaffolds. Cell-seeded scaffolds were then cultured in complete media with/without osteogenic inducers (β-glycerophosphate, dexamethasone and ascorbic acid) to determine whether BMP-2 preconditioning could modulate stem cell behavior towards osteogenic differentiation. Two propositions led us to speculate that preconditioning of cells with BMP-2 could be sufficient to elicit osteogenic responses: (1) BMP-2 activity is mainly required at the initial stages of fracture healing, and (2) due to the short (minutes-scale) systemic half-life of BMP-2, its activity vanishes gradually, even when administered locally in the scaffolds [41][42][43].
One of the hallmarks of osteogenic differentiation is the formation of extracellular mineralized deposits of calcium and phosphorus salts, in which the anionic matrix molecules bind Ca²⁺ and PO₄³⁻ ions and thereafter serve as sites for nucleation and growth, leading to calcification [36]. Alizarin Red S staining was used to probe these mineral deposits on the various groups of cell-seeded mNBC scaffolds at different time points (7, 14, and 21 days). Fig. 6(a-c) displays the optical and microscopic images of the ARS staining for the cell-seeded mNBC scaffolds and Fig. 6d depicts the respective quantitative data obtained after extracting ARS with 10% acetic acid. Control scaffolds (without cells) incubated in differentiation medium showed no positive stain (Figure S5), thus ruling out dye absorption/adsorption by the scaffolds.
The results demonstrated variation in the extent of mineralization with culture time, culture medium, and cell preconditioning. At day 7, the scaffolds cultured in osteogenic medium (i.e. groups OM and OMB) exhibited areas of diffuse and nodular mineralization, where group OMB (seeded with BMP-2 preconditioned MSCs) was slightly more intense than group OM (seeded with unprimed MSCs); however, the difference did not reach statistical significance (P > 0.05).
On the other hand, the scaffolds grown in proliferation medium (i.e. groups PM and PMB) showed a very low intensity, diffuse staining pattern, with no appreciable difference between the groups with respect to cell preconditioning. A similar trend was observed at day 14, but with relatively higher stain intensity in all the groups. However, some nodular mineralization was noticed in group PMB, and the difference in the stain intensity of the scaffolds of groups OMB and OM reached statistical significance (P < 0.01), suggestive of BMP-2 induced osteogenesis of the preconditioned cells. At day 21, the stain intensity was much higher in all the groups; in fact, the scaffolds of groups OM and OMB turned completely red. Additionally, the stained deposits were observed throughout the scaffold depth from top to bottom, implying time-dependent cell mineralization producing more Ca²⁺ binding sites for ARS. Moreover, the scaffolds seeded with BMP-2 preconditioned cells showed higher stain intensity than the scaffolds seeded with unprimed cells, under both proliferation (P < 0.01) and osteogenic medium (P < 0.001), which further supports the role of BMP-2 preconditioning in promoting osteogenesis of the primed cells. The highest stain intensity, observed in group OMB throughout the culture period, is attributed to the presence of DAG (dexamethasone/ascorbic acid/β-glycerophosphate) in the culture medium, as DAGs are reported to enhance BMP-2 induced osteogenesis [44,45]. Interestingly, mNBC scaffolds were able to facilitate mineralization of murine MSCs after 3 weeks of culture even in the absence of osteogenic stimulants (DAG) and BMP-2 preconditioning. This may be due to the topographical characteristics of mNBC scaffolds, which might have provided osteogenic cues to the cells, leading to calcification.
Thus, with the aid of BMP-2 primed cells and the intrinsic ability of the mNBC scaffolds to induce osteogenesis, a cost-effective bone tissue engineering strategy can be designed for quick and excellent in vivo osseointegration.
A cross-confirmation of the mineralized matrix over the cell-scaffold constructs was performed using FE-SEM followed by EDS after 21 days of culture. The electron micrographs revealed that the scaffolds were covered with a network of cells, extracellular matrix, and globular accretions of ~ 0.5-10 µm, suggestive of calcification (Fig. 7a) [46]. The accretions were larger in the scaffolds of group OMB than in the other groups, inferring enhanced calcification in group OMB. To verify that these accretive bodies were calcium phosphate, EDS mapping was performed (Fig. 7b), which confirmed the accumulation of calcium phosphate at the respective positions. The quantitative EDS analyses also revealed the presence of calcium and phosphate in all four groups of cell-scaffold constructs, and the intensity of the peaks corresponding to these elements was found to be in the order OMB > OM > PMB > PM (Fig. 7c). These results depicted a trend in mineralization similar to that obtained with ARS staining, suggesting that BMP-2 preconditioning of stem cells prior to seeding on scaffolds could be of potential benefit in tissue engineering of bone defects.
Next, the cell-scaffold constructs were subjected to phalloidin-FITC and DAPI staining to find out how cells were behaving morphologically after 21 days of culturing on mNBC scaffolds (Fig. 8) as cell shape, cell area and cytoskeleton arrangement provide a unique way to characterize the differentiation directions [47]. The results revealed excellent adhesion and growth of cells on mNBC scaffolds even after 21 days of culture, however, cells looked overlapped and stacked on one another, and were found lining the walls of the scaffold pores.
The cytoskeleton staining of cells grown under group PM displayed that some cells attained a broad shaped polygonal morphology while some were still elongated with spindle-shape (Fig. 8a, also refer Figure S6 for individual images of Z-stack). On the other hand, as expected, almost all the cells of group OM attained a broad polygonal morphology with a large increase in cell area ( Fig. 8a and Figure S7), suggestive of their commitment towards osteogenesis [48].
Cumulatively, these findings indicate that the combination of BMP-2 primed stem cells and a 3-D macro/microporous nanofibrous bacterial cellulose scaffold could be a promising tissue-engineering tactic to repair bone defects and nonunion, whereby a patient's own MSCs could be isolated, preconditioned with BMP-2, seeded on the scaffold, and thereafter implanted at the repair site for quick and efficient bone regeneration. Since a low dose of BMP-2 was used in this approach, and it affects only the cells stimulated ex vivo, this strategy would avoid the off-target adverse effects of high-dose BMP-2 therapy as well as cut down the cost of the treatment. Open questions remain, however, such as what the safe dose of these factors should be; these apprehensions need to be clarified before proceeding further in this direction.
Conclusions
Collectively, 3-D macro/microporous-nanofibrous bacterial cellulose (mNBC) scaffolds were prepared and applied to direct osteogenic differentiation of low-dose BMP-2 primed murine mesenchymal stem cells, in order to develop an efficient and cost-effective bone tissue engineering strategy. The ECM-mimicking nano-macro/micro architecture of mNBC scaffolds provided an excellent environment for cell adhesion, growth, infiltration, and, to some extent, osteodifferentiation, making them well suited for bone tissue engineering. Osteogenic studies demonstrated significantly enhanced bone matrix secretion and maturation when the scaffolds were seeded with BMP-2 primed cells compared to unprimed ones, and a synergistic effect on calcification was seen when BMP-2 primed-cell-scaffold constructs were provided with osteogenic stimulants (DAG) during culture. However, additional studies at the molecular level are required to corroborate these findings, and further studies to elucidate the underlying mechanisms, including intracellular signal transduction pathways, are warranted.
Nevertheless, these exploratory findings suggest that adopting low dose BMP-2 preconditioning of stem cells in a bone tissue engineering strategy may provide a promising solution to alleviate the economic and negative impact of high dose BMP-2 grafting on scaffolds, as well as may offer a paradigm to design strategies for stem cell fate direction in other arenas of regenerative medicine.
Pregnant women perceptions regarding their husbands and in-laws’ support during pregnancy: a qualitative study
Introduction: pregnancy is a stressful condition during which women require family and in-laws' support. This study aimed to explore women's perceptions regarding their husbands' and in-laws' support during pregnancy. Methods: using a qualitative exploratory design, ten pregnant women in the third trimester of pregnancy and living in a joint family system were recruited through purposive sampling from a village of district Nowshehra, Khyber Pakhtunkhwa, Pakistan. Approval for conducting this study was obtained from the Ethics Review Committee of Khyber Medical University. Data were collected from the recruited participants through face-to-face in-depth interviews and were analyzed through thematic analysis. One hundred open codes were generated from the data. Through axial coding, extra and unnecessary codes were omitted, and eleven categories were then identified from the open codes. Results: the identified categories were grouped under three salient themes: lack of a comprehensive support mechanism, physical and mental strain, and barriers to antenatal services. Perceived support of husbands and in-laws, and needs and barriers to maternal and child health, were discussed by the participants. Conclusion: the study findings suggest that family relationship quality might not be improved by interventions such as policy-making alone, but incorporating health professionals' support alongside family members' behavior can improve maternal health.
Introduction
Pregnancy is not a disease but a psychologically challenging period during which a woman passes through several social, physical, and psychological challenges [1]. During pregnancy, women need significant support from health care services; however, in patriarchal societies such as Pakistan, many decisions related to access to health services are in the hands of the husband and in-laws [2]. Women's limited autonomy and power of expression, due to deeply rooted societal norms, may lead pregnant women to depression and affect their pregnancy and fetal weight [3][4][5][6][7]. Worldwide, 10% of pregnant women experience stress and depression due to the autocratic style of their in-laws [8]. In addition, family values and beliefs, religion, and the level of education or awareness of family members affect the psychological, physical, and social wellbeing of pregnant women [5].
Traditional families with a rigid belief system often think that medical care is not necessary in pregnancy and do not allow pregnant women to seek medical care, due to which health seeking gets delayed [9]. Delay in health seeking leads to undesirable health outcomes such as high fertility, undesirable and unwanted pregnancies, and medical complications in women [10]. During the antenatal period, husbands' and family members' support is necessary to ensure healthy pregnancy outcomes [11]. In particular, the husband's presence at the time of delivery is fruitful because the partner's support gives strength and helps reduce fear and anxiety during delivery [7].
A study reported that witnessing labor pain can help in family planning in the future [8]. Social support, in terms of emotional support, cognitive guidance, positive feedback, and social reinforcement given to pregnant women by their family members, is associated with better mental health, buffering of risks, and promotion of well-being [12]. Psychological support in terms of family members' behavior and communication is another important factor that affects maternal mental health and boosts pregnancy outcomes. Lack of psychological support can lead to pregnancy complications, i.e. impaired neurodevelopment of the fetus, low birth weight, an increase in the rates of caesarean birth, and prolonged and preterm labor, which indicates that poor psychological support during pregnancy is strongly related to pregnancy complications [13]. Quality family relationships play an important role in pregnant women's mental health and physical well-being [14]. There is a strong disparity between Pakistani cities and villages, and gender inequalities in literacy, nutrition, employment, and health care affect women's health-seeking behavior during pregnancy [15]. The current study was aimed at exploring the perceptions of pregnant women regarding their husbands' and in-laws' support during pregnancy.
Methods
A qualitative exploratory study design was used to explore the perceptions of pregnant women regarding their husbands' and in-laws' support during pregnancy. Using a purposive sampling technique, pregnant women who were in the third trimester of pregnancy, housewives, and living in a joint family in a village of district Nowshehra, Pakistan were selected. The study was completed in six months. The sample size was decided based on data saturation, to produce sufficient in-depth information to highlight the blueprints, types, and aspects of the phenomenon of interest.
Permission was taken from the University's ethical review board before the commencement of data collection. Rapport was built with the participants, the research purpose and the author's information were fully disclosed, and consent was then secured for face-to-face individual in-depth interviews as well as for audio recording. A language convenient to the participants was used during the interviews. Semi-structured interviews were conducted individually; they were audio recorded, and field notes were taken.
Data saturation was reached at the tenth participant. The audio-recorded information was translated into English and transcribed verbatim anonymously following each interview. A thematic analysis approach was used to analyze the data, and the analysis proceeded step by step [16].
The audio-recorded information was listened to once, then the data were read and re-read and initial analytic inductions were noted. Next, semantic and conceptual reading was done; all the data were coded, and the relevant data were extracted from the codes. The initial codes were then searched for repetition or similarity, and themes were derived from the coded data. The codes were arranged under the relevant themes. Lastly, the themes were reviewed, a concise name was identified for each theme, and all the themes were written up in detail, providing readers with a holistic view of the research.
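The coding workflow described above (open codes grouped into categories, categories grouped into themes) can be sketched programmatically. The code labels and groupings below are purely illustrative examples, not the study's actual codebook; only the three theme names come from the study.

```python
from collections import defaultdict

# Hypothetical open codes mapped to categories (illustrative only).
code_to_category = {
    "husband ignores complaints": "lack of emotional support",
    "no help with chores": "lack of housekeeping support",
    "financial dependence": "lack of financial support",
    "mood swings": "frustration and mood changes",
    "beating and verbal abuse": "violence and abuse",
    "not allowed to visit doctor alone": "restricted access to care",
}

# Categories grouped under the three salient themes reported in the study.
category_to_theme = {
    "lack of emotional support": "lack of comprehensive support mechanism",
    "lack of housekeeping support": "lack of comprehensive support mechanism",
    "lack of financial support": "lack of comprehensive support mechanism",
    "frustration and mood changes": "physical and mental strain",
    "violence and abuse": "physical and mental strain",
    "restricted access to care": "barriers to antenatal services",
}

def build_themes(code_to_category, category_to_theme):
    """Group open codes by category, then categories by theme."""
    themes = defaultdict(lambda: defaultdict(list))
    for code, cat in code_to_category.items():
        themes[category_to_theme[cat]][cat].append(code)
    return themes

themes = build_themes(code_to_category, category_to_theme)
print(len(themes))  # three themes, mirroring the study's structure
```

In practice such a codebook is built iteratively by the analyst; the sketch only shows the final grouping step.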
Results
One hundred (100) open codes were derived from the data. In the process of axial coding, extra and unnecessary codes were omitted, and 11 categories were then identified from the open codes. The identified categories were grouped under three major themes: lack of a comprehensive support mechanism, physical and mental strain, and barriers to antenatal services. Perceived support of husbands and in-laws, and needs and barriers to maternal and child health, were discussed by the participants.
Lack of a comprehensive support mechanism: overall, participants disclosed a lack of a comprehensive support mechanism in terms of emotional, physical, psychological, housekeeping, and financial support during pregnancy from their husbands and in-laws. The participants in this study lived in joint families with complex family dynamics and power hierarchies outside of their control. The behaviours of husbands and in-laws were reported to cause stress to the pregnant women, which was affecting their health during pregnancy. Interviewees explained that their husbands showed careless behaviour towards them. In the joint family system, these women reported, their husbands would not speak up on their behalf when their in-laws scolded them, even though the husband's support was direly needed in household chores during pregnancy. They felt helpless when no one listened to their health problems.
Overall, these experiences were reported to be painful, as the women felt helpless when they needed support the most. Participants complained of their husbands' irresponsible behaviours, such as not working or, despite being educated, not actively searching for a job; as these housewives were financially dependent on their husbands, this left the wives and children to suffer at home with regard to their healthcare. Lack of physical support was explained in terms of the expectations of the participants' in-laws. They said that their in-laws did not understand their feelings and pain and asked for routine work at home, which was stressful for them physically as well as mentally. They added that their in-laws treated them like a maid hired for themselves. They also said that they felt they were in the open air when they went to their parents' homes.
"In-laws say we are not responsible for your pregnancy, do all the work at home, we cannot give you rest; if you are not able to do work, don't bring more kids." (Participants 2 and 4, gravida 3 and gravida 4). "In-laws do not help me in household chores; my sister-in-law eats a meal with us and does not collect utensils and does not help me in washing clothes; conversely, my mother-in-law says doing household chores facilitates the delivery of the baby." (Participant 9, gravida 4). "Our routine is that the son's wife will do all of the work at home and their own daughter will not do anything. They treat their son's wife like a maid they hire for themselves. I feel suffocated in my husband's home; when I go to my parents' house I feel I am in the open air." (Participant 10, gravida 2).
The participants experienced a lack of support from both their husbands and in-laws. The in-laws did not support them when their husbands did not earn; if the husband's financial condition was poor, the in-laws also did not ask whether they needed anything for their health or diet.
"My husband plays with his phone and laptop and I try to talk to him, I try to spend time with him, but he does not reply to my words; my mother-in-law says that although he is careless, it is the power of your prayers that you remain healthy before and after delivery of the baby." (Participant 5, gravida 3). "My in-laws have a poor financial condition and they know that their son also does not do any job, but they do not even ask me if I need something like food or medicine." (Participant 4, gravida 4).
Others highlighted that family marriages are not good: their aunts were loving and caring before marriage, but when they became mothers-in-law their behaviors changed, which was also painful. They expressed that they might not need medical care if they had a stress-free environment. One participant's views ran against those of all the others; she said that her husband as well as her in-laws were fine. Her husband was caring; he brought milk for her daily when she got pregnant. She added that whenever she felt any problem with her pregnancy, her in-laws carried her to the doctor for a checkup even though they were financially poor, and that she was satisfied with her married life. Physical and mental strain: one of the challenges faced by the women was physical and mental strain, due to fatigue, natural mood swings and frustration, mothering, and violence and abuse during pregnancy. Mood swings and frustration were the most common cause of mental strain in participants. They highlighted that they feel mood changes when they get pregnant; sometimes they do not want to hear or talk to anyone. They want mental peace during pregnancy, but their husbands do not understand them: "A woman gets frustrated during pregnancy, it is common. I also get frustrated; I want no one to make noise, no one to talk loudly. When children make noise I use abusive language to them because I dislike noise during this period. Everything returns to normal after delivery of the baby." (Participant 8, gravida 4). "A woman gets frustrated during pregnancy, but husbands don't care for their mood." (Participants 1, 2, 5, 9 and 10).
Mothering is also an issue during pregnancy. Participants expressed that they need help in handling their other children because they cannot manage feeding and cleaning their young children during pregnancy, but their husbands and in-laws watch everything without helping.
"To manage other children with a full-term pregnancy is exhausting. My son wants to play all the time and I cannot walk easily with a heavy abdomen, but instead of helping, my mother-in-law says not to bring other children if you are unable to control them." (Participant 2, gravida 3). "It is very difficult to clean and feed other children with a heavy abdomen, but my mother-in-law says that she is not responsible for my children." (Participant 4, gravida 4).
Violence and abuse were also reported; one participant's husband had this habit, which was risky for both mother and child. She highlighted that her husband beat her without any specific reason and used abusive language with her and also with the children.
"My husband beats me badly, sometimes bleeding starts from my nose." (Participant 6, gravida 3). "My husband was good in the first year of marriage, but someone bewitched him, and he started using abusive language to me and children. We have been married for seven years." (Participant 6, gravida 3).
Barriers to antenatal services: participants discussed several barriers to antenatal services in terms of the socio-economic status of the family and the inappropriate behavior of husbands and in-laws. Participants said that their husbands were not actively searching for a job, so they were not able to pay for their antenatal checkups. Another said that the in-laws did not allow the husbands when the women wanted to seek private health services, as the public sector was not satisfactory. One participant said that she was not allowed to go and seek maternal health services if her husband was not home, even in an emergency, as she would not go alone to see the doctor.
"If my husband is not home I am not allowed to leave home to seek medical care and my in-laws also do not go with me. Money is also a big problem for me to seek maternal health services." (Participant 2, gravida 3).
Participants talked about their in-laws' authoritarian behavior. They stated that their husbands gave their whole salary to their mothers, and when a husband asked for money for his wife's antenatal checkup, the mother-in-law did not allow him to spend money on the pregnancy. They added that they needed to ask for money from their own parents instead of their husbands or in-laws, because they were aware of pregnancy complications.
"My husband wishes to give me the doctor's fees, but his mother does not allow him to spend on my pregnancy; he asks his mother for money for everything he needs because he puts his full salary in his mother's hands. I bring money from my parents for my checkups because I know complications may occur during pregnancy, as I have experienced them in a previous pregnancy." (Participant 1, gravida 2).
Discussion
In the current study, it was observed that good support from husbands and family members positively affected maternal and child health, whereas poor support affected it negatively. The study participants suggested that a husband's support provided emotional security, mental peace, and improved physical health for the pregnant woman. A study from Nigeria showed similar results: 86% of the women who were supported by their husbands showed less stress during pregnancy and felt emotionally secure and physically healthy [17].
Another study, in Brazil, also showed that husbands' support and participation in women's reproductive health produced feelings of confidence and safety in women [18]. Expectant mothers want to rest in their last trimester, but they have to do household chores without the help of any member of the in-laws' family. The in-laws believed that routine tasks would facilitate the delivery of the baby; they shared with their daughters-in-law that when they were young they had to do their routine household chores without anyone's help, and the participants' husbands supported their parents' views. These findings are supported by a study in South Africa, in which daughters-in-law were expected to do harder and longer work in the fields and at home, including during pregnancy [19].
In Pakistani culture, decision making, whether financial or about the health of expectant mothers, is found to be on the basis of hierarchy and age, whereas a study in Nepal showed that financial decision making was mostly in the hands of husbands in the nuclear family system [20]. Multiple challenges faced by expectant mothers caused physical and mental strain, e.g. incomplete bed rest alongside household chores, inadequate food, abusive language from family members and husbands, violent behavior of husbands (i.e. beating and verbal abuse), and mothering tasks such as feeding and cleaning kids. They felt tired due to the services they were providing to their in-laws and managing their kids without anyone's help.
Mood swings were another big issue found in many pregnant women, which was frustrating for them when no one cared for their mood. The literature shows multiple findings concordant with this study [21,22]. Participants also expressed that fatigue during pregnancy was physical, psychological, and emotional [21,22]. Studies have also shown that women relied financially and emotionally on their husbands and family; poor financial support from a partner did not meet the adequate food requirements for pregnant women and affected their physical health during pregnancy [23]. The barriers found in accessing antenatal services were poor financial conditions and the inappropriate behavior of husbands and in-laws. Husbands and in-laws took pregnancy as a natural process and did not feel the need to go for antenatal checkups unless complications occurred. Some risks were also understood by the participants to arise from the neglectful behavior of husbands and in-laws, i.e. miscarriage in the first three months and the development of high blood pressure during pregnancy. Participants understood that delay in seeking medical care may give rise to problems like anemia and fetal anomaly during pregnancy, but they were unable to seek medical care due to a lack of financial support from their husbands. Participants who were aware of the risks depended on their parents for money to prevent those risks, but others, who could not depend on their parents and whose in-laws were also not supporting them, were more prone to develop such complications during pregnancy and at the time of delivery.
The findings were concordant with a study in Nepal in which the role of the mother-in-law was highlighted as the resource person of the family, and the husband had to ask his mother for money, which resulted in delayed care-seeking [24]. The findings were also consistent with a systematic review of 131 studies, which found all the characteristics discussed in the present study around the world; 16% of the studies were from Africa, 29% from Asia, and the rest from Latin America and the Middle East [25].
A strength of the study was that it provided a rich insight into pregnant women's perceptions of their husbands' and in-laws' support during pregnancy. Participants who were willing to participate enjoyed giving their insights and participated actively, and the researcher did not feel bored or exhausted at any stage of the study; the topic was very interesting, and the researcher felt joy when conducting the interviews.
A weakness of the study was the sensitivity of the topic. The researcher faced resistance in conducting interviews because whenever the title was explained, participants felt uneasy about providing information on their husbands' and in-laws' behavior. Only highly educated participants were convinced easily; the rest required much effort to be taken into confidence before providing their private information. Six participants provided data only about health professionals and access to health services and refused to give information about their in-laws, so their recorded interviews were discarded due to insufficient data.
Conclusion
Study findings showed that the support of husbands and in-laws could affect maternal and child health both positively and negatively. The current study helps nurses to explore pregnant women's feelings regarding their husbands and in-laws and to treat patients accordingly when they come to antenatal clinics. By incorporating the support of husbands, in-laws, and health professionals, comprehensive and effective support can provide an optimum level of health to expectant mothers and their upcoming children, helping to achieve the third Sustainable Development Goal. The study also suggests that nurses should provide culturally sensitive care to pregnant women.
Evaluation of drilling muds enhanced with modified starch for HPHT well applications
The use of carboxymethyl cellulose (CMC) in oil and gas well drilling operations has improved the filtration loss and mud cake properties of drilling muds. The introduction of starch has also reduced, for example, the viscosity, fluid loss, and mud cake properties of drilling fluids. However, normal starch has drawbacks such as low shear stress resistance, thermal decomposition, high retrogradation, and syneresis. Hence, starch modification, achieved through acetylation and carboxymethylation, has been introduced to overcome these limitations. In this study, modified starches from cassava and maize were used to enhance the properties of water-based muds under high-pressure high-temperature (HPHT) conditions, and their performances were compared with that of CMC. The mud samples containing acetylated cassava or maize starch exhibited the smallest filtrate volumes and filtrate losses within the American Petroleum Institute specification. Therefore, these modified-starch muds could replace CMC as fluid loss agents since, unlike it, they can withstand HPHT conditions.
Introduction
Hydrocarbon exploration has become more challenging, especially during drilling in ultradeep waters and high temperature (> 300 °F) high pressure (≥ 10,000 psi) formations. The combined pressure-temperature effect alters the rheological properties of the drilling fluids, making the conventional drilling muds ineffective; the consequences include complex phenomena and problems such as formation damage, pipe sticking, sloughing shale, and uncontrollable kicks. Properly designed drilling fluids are required to overcome the challenges posed by high temperature and overpressure formations. Thus, modified starch has been introduced as an additive to reduce or even eliminate the deficiencies of conventional drilling fluids; it can enhance their properties in complex formations.
Drilling operations in unconventional reservoirs such as coal-bed methane and shale reservoirs require appropriate drilling muds to prevent hole problems such as sloughing shale, fluid loss, and formation damage. The viscosity, yield point, and gel strength of the muds decrease as the formation temperature increases with depth. These changes in mud properties result from thermal degradation of the muds' solid, polymeric, and other components; nevertheless, the introduction of modified starch into water-based muds has reduced the HPHT fluid loss and improved their rheological properties. An optimised drilling fluid can form an effective mud cake able to prevent fluid loss. Furthermore, the mud rheological properties should be stable over a wide range of pressure and temperature to keep the cuttings in suspension when circulation stops (Okumo and Isehunwa 2007; Sulaimon et al. 2017).
Starch can serve as an additive capable of enhancing and improving the mud viscosity and also control the fluid loss. It contains two important components: amylose and amylopectin (Taiwo et al. 2011). Amylose helps to enhance the drilling fluid properties, especially, viscosity and fluid-loss control. Sodhi and Singh (2004) reported that Betancur et al. (1997) observed that the normal starch presents limitations such as low shear stress resistance, thermal decomposition, high retrogradation and syneresis that reduce its industrial applicability; to optimise the fluid-loss and viscosity properties, starch requires some modifications. Therefore, this study aimed to modify native starch and use it to reduce fluid loss and stabilise the mud rheological properties in HPHT formations. The three main factors considered were the rheology, the HPHT fluid loss, and the pH of mud samples treated with different modified starches.
Several studies have been conducted in the past. However, most of the earlier studies were based on raw starches obtained from cassava, potato, maize, and guar gum (Aboulrous et al. 2013; Harry et al. 2016; Okumo and Isehunwa 2007; Taiwo Joel and Kazeem 2011). Results have shown that CMC outperforms these raw starches.
Materials
In the experiments, the following materials were mainly used: distilled water, CMC, acetylated cassava starch (AC), carboxymethylated cassava starch (CC), acetylated maize starch (AM), and carboxymethylated maize starch (CM).
Starch acetylation
The acetylated starch was obtained through the method developed by Sodhi and Singh (2005), commonly used for similar modifications. The starch was dispersed in the required volume of 450 ml of distilled water, and the mixture was stirred for 1 h at 30 °C. Then, 3% NaOH was added to obtain a suspension pH of 8.0; simultaneously, acetic anhydride (12 g) was carefully added to the mixture whilst maintaining the pH within the 8.0-8.4 range. The mixing continued for 10 min, and the pH was then adjusted to 4.5 by adding 0.5 M HCl. After sedimentation, the precipitate was washed twice with distilled water and once with 95% ethanol to remove the acid.
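The source does not report how the extent of acetylation was quantified. A common characterization is the degree of substitution (DS) computed from the acetyl content via the standard Wurzburg titration formula; the sketch below assumes that approach, and the example acetyl percentage is illustrative, not a value from the study.

```python
def acetyl_degree_of_substitution(acetyl_percent: float) -> float:
    """Wurzburg formula: DS = 162*A / (4300 - 42*A), where A is the
    acetyl content in percent. 162 is the molar mass of an
    anhydroglucose unit; the acetyl group (43) replaces an H (1),
    hence the 42 term."""
    return (162.0 * acetyl_percent) / (4300.0 - 42.0 * acetyl_percent)

# Illustrative example: an acetyl content of 2.5% gives a low-DS
# starch, typical of food/industrial acetylated starches (DS < 0.2).
ds = acetyl_degree_of_substitution(2.5)
print(round(ds, 3))  # 0.097
```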
Starch carboxymethylation
The carboxymethylated starch was obtained via the wet method (Khalil et al. 1990). First, 100 g of starch was dispersed in an aqueous solution of isopropanol in an 80:20 ratio, followed by pH adjustment with a 2 M NaOH solution. Then, monochloroacetic acid (40% wt/vol) was added to the suspension, which was subsequently incubated at 30 °C under intermittent stirring. The carboxymethylation process was conducted under a nitrogen atmosphere to avoid polymer degradation.
Rheological properties characterisation
The rheological properties of the mud were evaluated using a Fann 35 viscometer. Dial readings were taken at six different speeds (600, 300, 200, 100, 6, and 3 rpm) and two different temperatures (80 and 120 °F). The measurements were also conducted for the modified starches: AM, CM, AC, and CC. Then, after hot rolling the various mud samples for 16 h, the rheological tests were repeated.
HPHT fluid-loss measurement
For the HPHT fluid test, both temperature and pressure can be varied to represent the expected downhole conditions. The HPHT testing equipment has a heating jacket that heats the drilling fluid sample to the expected wellbore temperature; typically, the heating jacket should be set about 25 °F-50 °F above the estimated temperature. The test pressure is normally a 500 psi differential pressure. Normal test conditions are 150 °F and 500 psi differential pressure, and the maximum allowable test temperature with the standard equipment is 300 °F. We ensured that the pressure remained constant throughout the 30 min test period. Like the API fluid-loss test, the HPHT test is performed for 30 min. The HPHT fluid-loss test was performed using an HPHT filter press (Ofite).
Changes of amylose and amylopectin contents in starch
The amylose and amylopectin contents in starch are crucial in increasing the viscosity and reducing the filtration-loss property. The percentages of amylose and amylopectin are, respectively, 22.9% and 77.1% in maize starch and 21.07% and 78.93% in cassava starch. Before the enhancement of viscosity, filtration-loss property, and mud cake thickness, the native starches were modified via acetylation and carboxymethylation. After these processes, changes in the amylose and amylopectin percentages were observed in both starch types (Tables 1, 2, 3, 4). In particular, as regards the maize starch, the amylose content decreased to 20.36% in CM and 20.17% in AM, whereas the amylopectin content correspondingly increased to 79.64% and 79.84%. For the cassava starch, the amylose percentage increased to 22.25% in CC and 22.085% in AC, whilst the amylopectin percentage respectively decreased to 77.75% and 77.915%.
Modified starch rheology
The rheological properties were investigated to determine which of the modified starches could be a promising industrial additive and an alternative to CMC under HPHT conditions. Ten mud samples, containing different concentrations (5 or 10 g) of each additive (AC, CC, AM, CM, or CMC), were formulated and their rheological properties tested under HPHT conditions. Several parameters were tested, including plastic viscosity, gel strength at 10 s and 10 min, HPHT filtration loss, mud cake thickness, and pH; the results are summarised in Tables 5-20. The rheological properties at different speeds and the gel strength at 10 s and 10 min were obtained using a viscometer. The plastic viscosity μp and yield point YP were calculated as follows.
μp = θ600 − θ300 (1)

YP = θ300 − μp (2)

where θ600 and θ300 are the dial readings at 600 and 300 rpm, respectively. For the samples added with 5 g of modified starches, AC and CC exhibited, respectively, the highest and lowest viscosity at 600 rpm. Figure 5 shows the resulting rheological properties of the mud samples before hot rolling with a roller oven; the properties were measured again after hot rolling at 302 °F for 16 h (Table 6).
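The calculation in Eqs. (1) and (2) can be sketched as a small helper; the dial readings in the example are illustrative values, not measurements from this study.

```python
def bingham_parameters(theta_600, theta_300):
    """Plastic viscosity and yield point from Fann 35 dial readings.

    Implements Eqs. (1) and (2): mu_p = theta_600 - theta_300 and
    Y_P = theta_300 - mu_p.
    """
    mu_p = theta_600 - theta_300
    y_p = theta_300 - mu_p
    return mu_p, y_p

# Illustrative dial readings at 600 and 300 rpm
mu_p, y_p = bingham_parameters(44, 28)
print(mu_p, y_p)  # 16 12
```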
Effect of starch content
The rheological properties of the eight mud samples containing the modified starches (AM, CM, AC, or CC) were tested and compared based on their different amounts (5 or 10 g). The results revealed little difference in the dial readings at 600 rpm among four of the samples added with 5 and 10 g of modified starch. The highest value was obtained with AC because, after carboxymethylation, it contained the highest amount of amylopectin. Taiwo et al. (2011) reported that the large size and the branched nature of amylopectin reduce the polymer mobility and orientation in an aqueous environment. After hot rolling for 16 h, AC still provided the highest dial reading at 600 rpm.
Tables 7 and 8 summarise the results for the samples added with 10 g of the different modified starches. Before hot rolling, the highest and lowest viscosities at 600 rpm were obtained with AM and CC, respectively; however, after hot rolling, the viscosity of AM decreased. This indicates that AM cannot withstand HPHT conditions, since such conditions break the polymer linkage of the starch. Nonetheless, the mud samples added with AC and CC showed an increase in viscosity after hot rolling, probably because their polymer linkage can withstand HPHT conditions.
The abovementioned eight mud samples were also weighed by using a mud balance, revealing the same mud weight of 8.6 ppg. This indicates that different contents of modified starch do not affect the mud weight.
Effect of temperature
The rheological properties of the mud samples were also tested at different temperatures (80 and 120 °F). In addition, the samples were aged in a roller oven at 302 °F for 16 h to simulate the high temperature of a wellbore and, hence, investigate the performance of the modified starches in realistic conditions, including their ability to maintain the polymer chain. Based on Table 9, the rheology value for CMC before and after hot rolling at 80 and 120 °F was significantly reduced because this polymer degraded after exposure to 302 °F; at 80 °F, it decreased from 44 to 25 after hot rolling.
In theory, when mud experiences an increase in temperature, the cohesive forces in it are reduced, whereas the rate of molecular interchange increases. The reduction in cohesive forces tends to decrease the shear stress, whilst the increased rate of molecular interchange tends to increase it. At high temperatures, gas viscosity increases due to the kinetic energy gained by the particles, whereas liquid viscosity decreases. Under normal room conditions, the molecular structure of a liquid is strongly bonded by van der Waals intermolecular interactions, which restrict its mobility. However, if the temperature is high enough, these bonds break because the particles have gained enough energy to overcome the intermolecular forces, increasing mobility. Therefore, the performance of a mud sample treated with modified starch should theoretically be influenced by increasing the temperature. Figures 1 and 2 show an increasing trend for the mud samples added with AM or AC after hot rolling, indicating that the polymer in these modified starches can withstand high temperatures; their retained polymer bonds and, hence, increased cohesive forces resulted in increased viscosity. Conversely, the samples added with CC and CM showed a decreasing trend in viscosity, meaning that their polymer cannot withstand high temperatures. However, when 10 g of additive was used, the samples added with AM and CM showed a decrease in viscosity after hot rolling, whilst those containing AC and CC exhibited a corresponding increase (Figs. 3 and 4). Hence, the highest increment after hot rolling was achieved with 5 g of AM as well as 10 g of AC, which identifies AM and AC as promising HPHT fluid-loss agents to replace CMC, since the latter degrades under HPHT conditions and its use results in decreased viscosity, also affecting the ability to suspend and remove cuttings.
Plastic viscosity
The plastic viscosity indicates the solid control, whereby an increase in its value indicates an increase in solid volume or a decrease in particle size. It also represents the ability to suspend drilled cuttings and clean the hole under dynamic conditions. An increase in the solid content of drilling mud results in higher μp. Figure 5 shows that, for 5 g of additive, the plastic viscosity (μp) decreased after hot rolling for CMC, whilst AM, CM, AC, and CC exhibited a corresponding increase, indicating that they can withstand HPHT conditions. In particular, AM resulted in the highest increment, in agreement also with the highest absolute value. This suggests that AM had a high solid content, which limited the mudflow and, consequently, increased both the viscosity and the plastic viscosity. Besides, amongst the modified starches tested, AM presented the highest amylopectin percentage (79.83%) after the acetylation. The role of amylopectin in starch, as previously discussed, is to reduce the polymer mobility and orientation in aqueous environments. Also, AM had the highest μp amongst the additives tested.
For the 10 g of additives, Fig. 6 shows increased μp values for all the modified starches after hot rolling. This means that, after being exposed to high pressure and temperature, their polymer linkages were retained. Keeping μp within an acceptable range minimises the risk of differential sticking, avoids low penetration rates and a high equivalent circulating density, and controls the surge and swab pressures.
Yield point
The yield point indicates the ability of the mud to lift the cuttings from the wellbore to the surface. The API specification for high-performance muds is 15-25 cp. In theory, the higher the YP, the better the cutting-lifting ability. However, an excessive YP value would lead to higher pressure losses during the circulation of drilling mud. The results for 5 g show that the sample added with AM exhibited the highest YP value after hot rolling, followed by that with AC, whilst a corresponding reduction was observed in the muds containing CMC, CC, and CM (Fig. 7). Therefore, the ability of AM and AC to lift the cuttings from the wellbore to the surface increased after hot rolling; this was probably due to their gelling properties, improved by their amylose content before the acetylation. However, as regards the samples added with 10 g of different modified starches, only the one with CC exhibited an increased YP after hot rolling, whilst the remaining samples showed a corresponding decrease, as shown in Fig. 8. The reason is that the polymer bonding inside CC, specifically the linear (amylose) and branched (amylopectin) chains, could withstand the HPHT conditions; the retained linkage led to this increment after hot rolling. The CMC, AC, AM, and CM polymers, on the contrary, could not withstand the HPHT conditions, resulting in a YP reduction.
Gel strength
The gel strength indicates the capability of mud to hold cuttings in suspension under static conditions. The API specification for the difference between the 10 s and 10 min gel strengths is 20. When the difference exceeds 20, more pump power is needed to restart the circulation after a static period. As illustrated in Figs. 9 and 10, only the mud sample added with AM exhibited an increase after hot rolling, whilst the others showed a corresponding reduction after 10 s. Nevertheless, the difference between the gel strengths at 10 min and 10 s for AM exceeded 20; hence, more pump power would be required to initiate the circulation. This was similarly observed for CMC and AC. However, the samples added with CM and CC showed a lower difference between the two gel strengths.
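The API check described above can be sketched as a small helper; the function name and the example gel strengths are illustrative, not values from the study.

```python
def needs_extra_pump_power(gel_10s, gel_10min, api_max_diff=20):
    """True when the 10 min - 10 s gel strength difference exceeds the
    API limit, i.e. more pump power is needed to restart circulation."""
    return (gel_10min - gel_10s) > api_max_diff

print(needs_extra_pump_power(8, 32))  # True  (difference of 24 exceeds 20)
print(needs_extra_pump_power(8, 20))  # False (difference of 12 is acceptable)
```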
Filtration loss and mud cake properties
The filtration loss and mud cake properties of the samples were evaluated via HPHT filtration tests at 1000 psi and 302 °F for 30 min. The filtration-loss test measures the quantity of water leaked out from the drilling mud under simulated high pressure. This water represents the fluid that penetrates the permeable formation during circulation or under static conditions. The mud or filter cake is the thin impermeable wall formed at the opening between the wellbore and the reservoir. The cake limits the possible fluid loss into the formation, which could otherwise damage the wellbore and, thus, reduce the effective permeability of the near-wellbore region. The maximum API threshold for filtrate loss in water-based muds is 15 ml. If the filtrate volume exceeds this value, there is a risk of clay swelling at the sensitive zone and consequent formation damage, which could reduce the permeability of the area. The HPHT tests showed that only the AM-added sample exhibited a value (14 ml) close to the API specification. However, amongst the samples added with 10 g of modified starch, the sample with AC showed the lowest filtrate volume (11 ml). The mud cake observation revealed that, when using 5 g of modified starch, CC and AM resulted in, respectively, the smallest and largest cake thickness (Fig. 11). For the samples added with 10 g of modified starch, the largest and smallest thicknesses were obtained with AC and AM, respectively, as shown in Fig. 12. This indicates that mud added with 5 g of AM or 10 g of AC can be compressed despite the strong polymer linkage and the high solid content, which could even reduce the filtrate amount.
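The compliance check against the 15 ml API threshold can be sketched as follows, using the two filtrate volumes reported above (14 ml for 5 g of AM and 11 ml for 10 g of AC); the sample labels are our own shorthand.

```python
API_MAX_FILTRATE_ML = 15.0  # API threshold for water-based muds

# Filtrate volumes reported in the text
filtrate_ml = {"AM_5g": 14.0, "AC_10g": 11.0}

# Flag each sample as within or beyond the API limit
compliant = {name: vol <= API_MAX_FILTRATE_ML for name, vol in filtrate_ml.items()}
print(compliant)  # {'AM_5g': True, 'AC_10g': True}
```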
Environmental assessment of the mud added with modified starch
The results displayed in Figs. 13 and 14 show that the eight mud samples added with the different modified starches (5 and 10 g) were alkaline, with pH values above 7, indicating that the starch reduced the water hardness, which results in the observed pH increase. In other words, the starch polymer precipitated the calcium ions in the water, increasing the water pH from 7 to 8-9. However, despite the addition of modified starch, the pH value remained below 9; this suggests that modified starch can be used to reduce the mud acidity, which could otherwise lead to corrosion of the bottom hole assembly equipment during drilling operations.
Rheological model of the mud treated with modified starch
The plots of shear stress versus shear rate shown in Figs. 15, 16, 17, 18 revealed that all the mud samples added with the different modified starches (AM, CM, AC, and CC) exhibited the characteristics of the Bingham plastic model (a linear curve with a positive intercept), which is preferred for drilling muds, rather than Newtonian fluid behaviour (a straight line through the origin). This proves that the formulated muds had acceptable flow characteristics. The slope and the threshold stress denote, respectively, the μp and YP of the mud. The gradient of the Bingham plastic model, i.e. the ratio of shear stress to shear rate, denotes the level of mud viscosity. The plots showed a gentle gradient throughout the dial speeds, indicating that the mud samples offered the least resistance to flow due to interparticle friction. Besides, the yield point, i.e. the point at which the fluid starts to flow, can be determined from the plots. From the shape of the plots, the fluid shows the characteristics of a Bingham plastic.
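A least-squares fit of the Bingham plastic model τ = YP + μp·γ to viscometer data can be sketched as below; the dial readings are illustrative, and the rpm-to-shear-rate and dial-to-stress conversion factors used are the conventional Fann 35 ones, not values from this study.

```python
import numpy as np

# Illustrative Fann 35 dial readings at the six test speeds
rpm  = np.array([600.0, 300.0, 200.0, 100.0, 6.0, 3.0])
dial = np.array([44.0, 28.0, 22.0, 15.0, 5.0, 4.0])

shear_rate   = 1.7034 * rpm    # s^-1 (conventional Fann 35 factor)
shear_stress = 1.0678 * dial   # lb/100 ft^2

# Bingham plastic model: the slope estimates the plastic viscosity,
# the positive intercept estimates the yield point
mu_p, y_p = np.polyfit(shear_rate, shear_stress, 1)
print(f"slope (plastic viscosity) = {mu_p:.4f}, intercept (yield point) = {y_p:.2f}")
```

A Newtonian fluid would give an intercept near zero; the positive intercept here is what identifies the Bingham plastic behaviour described above.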
Conclusions
This study aimed to reduce the filtration loss and stabilise the rheological properties of water-based mud treated with modified starch and to compare the achieved performance with that of the industrial fluid-loss agent CMC. The factors believed to influence both the filtration loss and the rheological properties were the amylose and amylopectin contents. Results showed that the mud mixed with 5 g of AM met the API requirements in terms of both rheological properties and filtration-loss control. For 10 g of modified starch, the sample with AC exhibited good rheological properties and filtration-loss control in agreement with the API specification. Under HPHT conditions, the mud samples treated with AM and AC showed the best rheology values, close to the API specification 13A 18th edition. Furthermore, the mud samples added with 5 g of AM or 10 g of AC exhibited the lowest filtrate volumes compared to those treated with the same amounts of the other modified starches.
Towards the FAIRification of Scanning Tunneling Microscopy Images
In this paper, we describe the data management practices and services developed to make a scientific archive of Scanning Tunneling Microscopy (STM) images FAIR compliant. As a first step, we extracted the instrument metadata of each image of the dataset to create a structured database. We then enriched these metadata with information on the structure and composition of the surface by means of a pipeline that leverages human annotation, machine learning techniques, and instrument metadata filtering. To visually explore both images and metadata, as well as to improve the accessibility and usability of the dataset, we developed "STM explorer" as a web service integrated within the Trieste Advanced Data services (TriDAS) website. On top of these data services and tools, we propose an implementation of the W3C PROV standard to describe provenance metadata of STM images.
In this paper, we report on the activities carried out on a scientific archive of scanning tunnelling microscopy (STM) images with the objective of organizing it in a more structured and convenient dataset from a FAIR point of view [2]. Since the experimental technique has not substantially changed over the last 20 years, our effort towards the FAIRification of legacy data is relevant both for current research activity in STM and for guiding the FAIR-by-design workflow under active development following current standards [3,4]. To achieve this goal metadata is a key driver; in the following, we will present our approach in collecting metadata for our STM dataset and our initial effort in defining our own metadata schema.
The images were generated using an Omicron Variable Temperature STM (VT-STM) microscope [5] located at the Istituto Officina dei Materiali (CNR-IOM) in Trieste, Italy. In total, researchers generated about 420,000 images over twenty years of research activity, consisting in 228 GB of raw data. From this sample, an initial batch of about 110,000 STM images recorded in constant current mode was selected and curated in an organized dataset along with 59 instrument metadata for each image. These metadata alone provide valuable information about the conditions in which images were obtained and are useful to make data findable and accessible. Unfortunately, the type of materials that compose the sample, the most relevant information associated with STM images, has been historically registered on a paper logbook. In such a state, it is unfeasible to integrate this information into an automated data management system. To improve the scientific value and FAIRness of the dataset, we annotated images with this specific metadata with a pipeline that leverages human annotation, machine learning (ML) techniques, and instrument metadata filtering. After this labelling procedure, the final dataset consists of 7,287 STM images assigned to three categories of materials, with a total size of 4.7 GB and organized with data files and original instrument metadata files for each individual image, along with provenance metadata for the whole dataset.
Another crucial improvement for the accessibility and usability of the dataset consisted in the creation of a metadata explorer, developed as an integrated service within the TriDAS website [6], which allows users to visually explore images' metadata through interactive and downloadable plots. The core logic of the web service is initially designed around a subset of 11 image metadata, carefully selected together with nanoscience researchers to provide significant information about image characteristics and microscope settings relevant to image quality and context. The functionality of the web application, and its relevance as an interactive tool with the dataset, are then further improved to include images visualization on the browser without the need for any additional software. Besides these activities carried out to increase the dataset usability, we present an application of a provenance standard for the case study of STM images. Intended as a type of structured metadata, provenance tracks the origin and all the intermediate procedures applied to produce a data product, thus becoming fundamental for the reproducibility of the scientific experiment and for the analysis and interpretation of the results. During the FAIRification workflow, the W3C PROV standard [7] is applied to describe the provenance of metadata, from the original STM images to the ones curated and available on the TriDAS website.
We made available the dataset containing 7,287 STM images together with their provenance description [8] and all source code used in this paper [9].
STM Dataset
STM images presented in the dataset were recorded, over twenty years of research activity, by the Surface sTructure and Reactivity at the Atomic Scale (STRAS) research group of the CNR-IOM Institute in Trieste, using an Omicron Variable Temperature STM (VT-STM) microscope. Raw data are composed of forward and backward topography scan arrays stored in binary format in files with extension .tf0 and .tb0, and a .par file that contains instrument variables and other information in text format. For some of these topographic images, the related tunneling current images, stored in files with extension .tf1 and .tb1, are also present. By filtering metadata of images, we retrieved a reference dataset of 111,415 constant-current STM images from a vast collection of measurements. The structure and composition of the imaged surface cannot be recorded in an automated way, as such, it has been historically registered on a paper logbook. To obtain this crucial information for STM images, we developed a workflow based on human annotation, machine learning techniques and metadata information. The starting point was to manually label groups of images into different categories according to the sample material. Researchers, within the same day, typically measured samples of the same material, and, considering the typical workflow of the group, it is then reasonable to assume that samples should be of the same category also within a limited time period. Given these assumptions, we created a total of 188 plots composed of at most 100 images sampled from each month of activity. This collection was manually labeled and used to obtain a broad division of the dataset in 18 sample material categories, as shown in Figure 1.
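The metadata-filtering step that produced the reference dataset can be sketched as follows; the key name "Scan Mode" and its value are hypothetical stand-ins, since the actual Omicron .par field names are not reproduced here.

```python
import re

def parse_par(text):
    """Parse 'key : value' lines of a .par-style text file into a dict.

    This assumes a simple colon-separated layout; the real Omicron .par
    format may differ in detail.
    """
    meta = {}
    for line in text.splitlines():
        m = re.match(r"\s*([^:]+?)\s*:\s*(.+?)\s*$", line)
        if m:
            meta[m.group(1)] = m.group(2)
    return meta

def is_constant_current(meta):
    # Hypothetical field name used to filter constant-current images
    return meta.get("Scan Mode", "").lower() == "constant current"

example = "Scan Mode : Constant Current\nGap Voltage : 0.5 V"
meta = parse_par(example)
print(is_constant_current(meta))  # True
```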
Then we selected a subset of 10 images for each of three specific material categories, namely Gr_Ni100, Gr_Ni111 and N_Gr_Ni111 for a total of 30 images, as shown in Figure 2. In particular, Gr_Ni100 includes images taken on monolayer graphene grown by chemical vapour deposition on Ni (100). Due to the square symmetry of the substrate, the resulting layer is composed of patches of aligned graphene (aligned with the substrate crystallographic structure and showing a typical wavy 1D moiré pattern) and rotated graphene (identifiable in the images by a 2D moiré pattern) [10,11,12,13,14]. The Gr_Ni111 category contains STM images taken on monolayer graphene grown by chemical vapour deposition on Ni (111) single crystals. The layer is composed of patches of epitaxial graphene (in register with the substrate lattice), appearing as a triangular arrangement of spots, and rotated graphene identifiable in the images by the presence of a 2D moiré pattern [15,16,17,18,19,20,21]. Finally, the N_Gr_Ni111 category represents images taken on monolayer graphene grown by chemical vapour deposition on a Ni (111) single crystal previously doped with atomic nitrogen. During the growth, some nitrogen atoms present in the Ni bulk are trapped in the graphene mesh, doping the layer and originating characteristic defects, visible as dark triangles and bright clover-like features [22,23].
With the aim of associating the type of material composing the sample to a larger set of images, we developed an approach based on recent developments in representation learning [24] for image recognition. Representation learning techniques leverage only the availability of large datasets to train a model that automatically detects features of the images which are relevant for a detection or classification task. The pioneering work of Le Cun [25], as well as more recent progress in the field [26], led to convolutional neural networks, a family of deep-learning models particularly suited for image feature analysis thanks to their translation equivariance and locality properties.
In absence of a sufficiently large set of STM images in the dataset carrying information on the sample material, we focused on the technique of transfer learning [27]. Transfer learning consists in employing the weights learned by a network trained on a sufficiently generic dataset to target a compatible task on a different set of images. A plethora of theoretical results [28], as well as applications to datasets of microscopy images [29,30,31], show that models trained on ImageNet [32] capture features that are relevant in an extremely heterogeneous set of image classification tasks.
From preliminary analysis, it emerges that a Resnet50 model trained on ImageNet [33] has sufficient expressive power for extracting relevant features in the specific case of STM images. More specifically, the representation extracted from the input of the last-but-one linear layer of the network, consisting of a vector of length 4,096, encodes attributes of the images that are sensitive to their thematic content. Formally, this construction yields a non-linear map f sending each image of 224×224 pixels and 3 color channels to the corresponding representation vector. Since the visual characteristics of an image and the nature of the material composing the sample are strongly correlated, images with similar representations are likely to correspond to the same material category.
Following this line of thought, we started from a set of 30 elements of the STM dataset, composed of 10 manually labeled images for each of the three material categories described above.
Given two images x1 and x2, their similarity in content is well described by the cosine similarity between the corresponding representations, defined in Equation 2.1:

S_cos(x1, x2) = f(x1) · f(x2) / (‖f(x1)‖ ‖f(x2)‖) (2.1)

For each image x, the elements of the dataset on which the function S_cos(x, ·) assumes a higher value correspond to putative images in the same class as x. For each of the 30 labeled images, 24 images were selected with this automatic method and manually verified. A further manual verification was applied to the 720 images obtained following this procedure to avoid the following behaviours: the choice of
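The retrieval step can be sketched in a few lines of NumPy; the random vectors below are stand-ins for the actual ResNet representations, and the function names are ours.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two representation vectors, as in Eq. (2.1)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k(seed_vec, dataset_vecs, k=24):
    """Indices of the k dataset images most similar to the seed image."""
    sims = [cosine_similarity(seed_vec, v) for v in dataset_vecs]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]

# Stand-in representations: 100 images, 4096-dimensional features
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 4096))

# The seed itself (index 0) is trivially the most similar element
print(top_k(feats[0], feats, k=5))
```

In the actual pipeline the seed would be one of the 30 manually labeled images and the candidates returned by `top_k` would then be manually verified, as described above.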
images which are almost identical to the retrieval seed, and the choice of images in different material classes from the seed but visually similar because taken at a different scale. This procedure left us with a final collection of 290 images labeled with the corresponding material category, recorded over 64 days. Using this collection, we selected images recorded on the same days and labeled them correspondingly; after a final manual verification, we obtained the final dataset of 7,287 images.
Although our strategy is tailored to the specific case of the STM dataset, each aspect of the selection process, from the manual annotation to the ML procedure, can be generalized to similar contexts upon slight modification of the code [9], in particular when dealing with annotations of microscopy images required for a FAIRification workflow. A more detailed description of the methodology, the technical specifications, and the validation criteria of the entire pipeline is available in the master thesis of the first author of this article [34].
STM meta data explorer
The STM dataset is enriched with useful metadata that increase the findability of relevant images. However, it is fundamental to provide scientists with a web service to facilitate and simplify the search process. Here, we present STM Metadata Explorer, an easy-to-use and interactive web service developed as an integrated service within Trieste Advanced Data Services (TriDAS) to visually explore images' metadata through interactive and downloadable plots. The core logic of the web service is designed around the metadata that users can select through the platform to find the relevant images. We selected a small subset of metadata that provide significant information about image characteristics and microscope settings, listed and described in Table 1. The web service workflow is summarized in Figure 3 and allows users to visually explore images' metadata through a quantile plot for a single metadata field and a scatter plot that shows the distribution of images between two chosen metadata fields. In both plots, hovering on top of plot objects creates a pop-up that shows information about that object: the number of images and value intervals for the quantile plot, and the number of images and metadata values for each field in the scatter plot. The right toolbar lets users interact with the plots by moving, zooming in and out, and saving plots as images.
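The information behind the quantile view can be sketched as below: one metadata field is binned into quartiles and the number of images per bin is counted, which is what the hover pop-up reports. The values are synthetic stand-ins for a real metadata column.

```python
import numpy as np

# Synthetic values of one metadata field (e.g. a voltage-like quantity)
values = np.array([0.1, 0.2, 0.2, 0.5, 0.7, 1.0, 1.2, 1.5])

# Quartile edges of the distribution
edges = np.quantile(values, [0.0, 0.25, 0.5, 0.75, 1.0])

# Number of images falling in each quartile bin
counts, _ = np.histogram(values, bins=edges)
print(list(edges), list(counts))
```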
These features are useful for a first exploration of the dataset, which should then be downloaded for further analysis and image visualization. To address this issue, the scatter plot features the selection of a specific metadata combination to retrieve a new page containing a table with the metadata fields of each image in that subset. On this page, researchers can select, order, filter, and search images based on their metadata values. Moreover, the ID column contains each image's unique identifier in the database and, by clicking on it, the corresponding STM image is rendered and shown on a new page, where a download feature is included to obtain the data, metadata, plot, and provenance metadata for each image. Figure 3. The web service workflow on the TriDAS website. Users select metadata and, based on the fields selected, can explore the images' metadata through a quantile plot for a single metadata field and a scatter plot that shows the distribution of images between two chosen metadata fields.
TriDAS is implemented in Python and uses Bokeh [35] for data visualization, spym [36] to process and plot images, and the Flask [37] framework as backend. The source code, as well as a list of all software packages used, is publicly available [9] and reported in the provenance metadata.
A pplication of W3C PROV to STM case study
Provenance is a kind of metadata that describes the history of data from the original data sources to data products. Provenance information, that tracks the processes applied to data, from the origin to the final results, is critical to enable reproducibility [38] and reusability in scientific research experiments. In relation to these needs, we present an approach to describe the provenance of our use case on STM images by applying the PROV-DM [39], a generic data model of the W3C PROV standard [40].
As a first step, we designed the workflow of the principal events performed during the FAIRification process of STM images, from the raw data folder generated by VT-STM measurements to the final image that can be visualised on the TriDAS website.
For each of the above activities, we first identified the actors responsible together with the generated outputs, and secondly, mapped them with the W3C PROV core concepts described in Table 2.
Part of the terms we used in the provenance workflow have already been agreed upon among the NFFA-Europe community, as they have been defined in the NFFA-Europe Glossary [41] developed in collaboration with the Joint Lab "Integrated Model and Data Driven Materials Characterization" (MDMC) of the Helmholtz Association of German Research Centers [42]. For the mapping, we considered three components of PROV-DM: entities and activities, derivations, and agents with their responsibilities.
Entities:
In PROV, an Entity is defined as "a physical, digital, conceptual, or other kind of thing with some fixed aspects" [39]. From PROV-DM core descriptions, we identified the following entities: Raw data, Reference dataset, Structured & FAIR dataset and Filtered image. In our case study, Raw data refers to the unorganized collection of 420,000 STM images acquired using the VT-STM microscope. Reference dataset groups together 110,000 images acquired in constant-current mode, while Structured & FAIR dataset includes 7,287 images manually labeled in three sample material categories. Finally, Filtered image corresponds to single images downloadable from the STM Metadata Explorer on the TriDAS website.
Activities: An Activity is "something that occurs over a period of time and acts upon or with entities" [43]. In our case, we mapped as Activities four events represented by: VT-STM measurements, Image selection & retrieval, Image labeling process, and Metadata selection. The first activity, VT-STM measurements, corresponds to image acquisition at CNR-IOM. It is followed by Image selection & retrieval, which describes the actions taken to obtain the Reference dataset from Raw data. The image labeling process is the pipeline used to enrich a subset of the Reference dataset with material composition metadata and finally, Metadata selection represents the workflow of the web APP to find a particular image of interest from the Structured & FAIR dataset.
Agents:
In PROV, an Agent [39] can be a person, an organization, software or other entity that has some responsibility for a given activity or entity. We identified STRAS research group, VT-STM microscope, Data scientist and Research user as prov:Agents and Analysis software as prov:softwareAgent. STRAS research group indicates the researchers of the laboratory where the Raw data were generated. Data scientist is the person responsible for the FAIRification of the dataset while the Research user is the person interested in the data collected from the Structured & FAIR STM dataset.
The roadmap of the FAIRification activities and the subsequent mapping with W3C components leads to the provenance workflow presented in the graphical illustration ( Figure 5).
PROV Activities are represented as lilac rectangles, PROV Agents as light orange pentagons and PROV Entities in light yellow ovals. The responsibility properties are depicted in pink. The workflow starts with VT-STM measurements attributed to STRAS research group and is associated with both STRAS research group and VT-STM microscope that acts on behalf of STRAS research group. VT-STM measurements generated Raw data that were used during Image selection & retrieval to generate the Reference dataset. The Reference dataset, which was derived from Raw data, was attributed both to STRAS research group and Data scientist. Analysis software acts on behalf of Data scientist and Research user. The image labelling process, associated with Data scientist and STRAS research group, used the Reference dataset to generate the Structured & FAIR dataset. Therefore, Structured & FAIR dataset derived from Reference dataset. At last,
Metadata selection associated with Research user used the Structured & FAIR dataset to generate a Filtered image that was attributed to the Research user.
As a final step, we conducted a practical implementation of the above workflow, by using a PROV Python Library for W3C Provenance Data Model [44]. The provenance document created in Python was then exported in a JSON representation for PROV, PROV-JSON, thus providing a compact and accurate representation of PROV that is particularly suitable for interchanging PROV documents, allowing reproducibility.
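The PROV-JSON serialization described above can be illustrated with a minimal, hand-built sketch. The paper uses the PROV Python library to generate its document; here, as an assumption-laden illustration only, plain dictionaries and the standard json module stand in, and all identifiers under the "ex:" prefix are our own invented names mirroring the workflow entities:

```python
import json

# Illustrative PROV-JSON fragment (not the paper's actual document):
# entities, activities, agents, and relations from the workflow above,
# expressed in the PROV-JSON top-level structure.
doc = {
    "prefix": {"ex": "http://example.org/stm#"},  # invented namespace
    "entity": {
        "ex:raw-data": {},
        "ex:reference-dataset": {},
        "ex:structured-fair-dataset": {},
    },
    "activity": {
        "ex:vt-stm-measurements": {},
        "ex:image-selection-retrieval": {},
    },
    "agent": {
        "ex:stras-research-group": {},
        "ex:data-scientist": {},
    },
    "wasGeneratedBy": {
        "_:g1": {"prov:entity": "ex:raw-data",
                 "prov:activity": "ex:vt-stm-measurements"},
    },
    "used": {
        "_:u1": {"prov:activity": "ex:image-selection-retrieval",
                 "prov:entity": "ex:raw-data"},
    },
    "wasDerivedFrom": {
        "_:d1": {"prov:generatedEntity": "ex:reference-dataset",
                 "prov:usedEntity": "ex:raw-data"},
    },
    "wasAttributedTo": {
        "_:a1": {"prov:entity": "ex:raw-data",
                 "prov:agent": "ex:stras-research-group"},
    },
}

# A compact, interchangeable representation suitable for exchange.
prov_json = json.dumps(doc, indent=2)
```

Such a document can be exchanged between tools and parsed back losslessly, which is what makes PROV-JSON suitable for interchange and reproducibility.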
CONCLUSIONS AND OUTLOOK
In this paper, we describe tools and services designed to improve the overall value of a scientific dataset of STM images by implementing different aspects of FAIR principles.
To address findability and accessibility, we used extracted metadata of each image as a filter to create a structured dataset from a raw data folder. We focused then on the annotation of images with sample material composition. As a result, we obtained a final dataset of 7,287 images of the surface of three materials, Gr_Ni100, Gr_Ni111 and N_Gr_Ni111.
We then created a web service to visually explore this information through intuitive graphical representations. The crucial component of this web service is metadata enrichment with information on sample composition, obtained with machine learning techniques. Moreover, we improved the usability of the dataset by including visualization and download functionalities directly in the web browser.
To address reproducibility, as well as interoperability and reusability, we then focused on provenance metadata. The use of provenance standards is fundamental to achieving interoperability and encouraging the reuse of datasets. For these reasons, we applied to our case study the W3C PROV standard, a general, high-level standard for provenance. We used an open-source tool called F-UJI [45], which supports a programmatic FAIR assessment of research data based on a set of core metrics, to verify and assess the level of FAIRness achieved. The resulting FAIR level was 'advanced'. While we obtained a full score on findability and accessibility, the level of interoperability and reusability is moderate, highlighting aspects we should improve in future work.
We foresee several directions for the future development of this case study: generalization of our provenance implementation, development of a domain-specific metadata schema for scanning probe microscopy, implementation of a FAIR-by-design workflow for the newly acquired data, continuous development of the STM Metadata explorer service and, more specifically concerning our case study, label propagation with semi-supervised learning [46,47].
The implementation details of this work, in particular the PROV implementation, are somewhat specific to the present STM case study, but, in principle, they can be easily generalized and applied to a large number of scanning microscopy experiments (SEM, AFM, etc.), with the possibility to include active provenance capture [48].
The FAIRification process described in this work is applied to the legacy data acquired in the past twenty years in an STM laboratory. For newly acquired data, we have started to implement a FAIR-by-design workflow starting from data acquisition. This process includes the use of an Electronic Laboratory Notebook (ELN) for reusability and provenance and the development of an open-source Python package for data reading to improve accessibility and interoperability [36]. A key activity in data management, especially in light of compliance with FAIR principles, is the development and adoption of metadata schemas. Currently, a metadata schema for STM is missing, and no standards are adopted for data and metadata acquired with this technique. Motivated by this gap, we started a coordinated effort to develop a standard STM metadata schema [49] with the final aim, after sharing and approval by the scientific community involved, of making it a de-facto standard in the field, openly available for reuse and for further extension to other scanning (probe) microscopy techniques. In this respect, we plan to continue the work presented in this paper by converting the obtained structured dataset to comply with the new STM metadata schema as soon as it is defined, thus further carrying on its FAIRification. We finally mention that open-source software [50] is already available for loading and performing extensive data processing and analysis on several STM data formats, including those reported in this manuscript.
The STM Metadata explorer presented in this work was developed to improve the usability of the dataset. We plan to add analytics to assess user experience to further develop the service towards user needs.
Finally, we plan to extend the labelling to the whole dataset by label propagation, a powerful semi-supervised learning technique. Currently, the labelled samples are a small fraction of the total, and their collection required extensive human annotation. Extending the labelling to the whole dataset will enable the development of more advanced services (such as advanced queries) and large-scale experimentation.
In conclusion, we believe that this work will inspire and engage a large scientific community in addressing the problems of data provenance, metadata schema development and, more in general, the FAIRification of scientific data. We are sure that this is an essential endeavour for the development of future research.
Mirco Panighel is a postdoctoral fellow at CNR-IOM in Trieste. His current research activity involves variable temperature scanning tunneling microscopy (STM) of graphene nano-structures in ultra-high vacuum (UHV) and the development of Python packages for scientific data analysis. He is part of the Data Management team in the NFFA-Europe Pilot project and is responsible for the Open and FAIR data implementation in the STRAS laboratory. ORCID: 0000-0001-8413-5196
Cristina Africh is a senior research scientist at CNR-IOM and head of the Structure and Reactivity at the Atomic Scale (STRAS) group at CNR-IOM. Her research focuses on the investigation of surfaces at the atomic scale, mainly by scanning tunneling microscopy. Cristina Africh is also coordinator of the NFFA-Europe interoperable distributed research infrastructure (IDRIN), whose interoperability relies on FAIR data management. ORCID: 0000-0002-1922-2557
Stefano Cozzini is presently director of the Institute of Research and Technologies at Area Science Park, where he coordinates several scientific infrastructures and projects at national and international level. He has more than 20 years' experience in the area of scientific computing and HPC/data e-infrastructures. His main scientific interests are scientific computing and machine learning techniques applied to scientific data management. He is presently actively involved in the master's degree in Data Science and Scientific Computing at the University of Trieste, Italy. ORCID: 0000-0001-6049-5242
"year": 2023,
"sha1": "d7b9369a78b6fc943dea9ea2f9f1594c8d337dec",
"oa_license": "CCBY",
"oa_url": "https://direct.mit.edu/dint/article-pdf/doi/10.1162/dint_a_00164/2070155/dint_a_00164.pdf",
"oa_status": "GOLD",
"pdf_src": "MIT",
"pdf_hash": "b6e8aa74d1a31c4f0cc27d15645c39bd6e855f13",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Enhancing business entrepreneurship through open government data
Article history: Received: July 20, 2020 Received in revised format: September 1
Introduction
With the advent of advanced technology and its increased capacity to store, process, and search data (Malkawi, 2016), the concept of open data emerged as a powerful tool through which governments can achieve economic development and reinforce good governance. As a result, data has become an expensive commodity and an important capital asset, so governments have adopted open data approaches and developed strategies and plans to harness the data revolution for sustainable development. Governments, civil society organizations, and the private sector are also aware of the importance of this approach, which is based on publishing government data and making it available to the public via the Internet for use and reuse by all sectors of society. Many studies indicate that most open data initiatives did not achieve their objectives well, because they do not take users' points of view into account and because open government data remains underused. Research indicates that open government data policies have many benefits, including increasing the ability of governments to make new policies, engaging citizens, developing knowledge, and increasing community participation, as well as economic benefits such as increased competitiveness, improved quality of products and services, reduced government spending, and stimulated creativity (Zuiderwijk et al., 2018; Deloitte, 2017; Malkawi, 2018). Any organization or industry, regardless of its size, therefore faces countless internal and external challenges; it either confronts these challenges to survive and keep pace with progress and entrepreneurship, or it exits the market irreversibly.
Against this background, entrepreneurship opens many opportunities for entrepreneurs and innovators. It is one of the most important economic forces humanity has known; it enables individuals to search for opportunities where others see only intractable problems. Entrepreneurship is a symbol of perseverance and achievement, and a vital source of change in all aspects of society (Pahuja & Sanjeev, 2019). Entrepreneurship and innovation are essential and applicable in all aspects of life. They are not exclusive to developed countries, nor are they the prerogative of the business community and a gifted few, and it would be arrogant for the developed world to believe that it is the origin and source of all knowledge or that entrepreneurship is limited to certain social classes (Cowdrey, 2018). Entrepreneurship is an opportunity and the core of many economies in developing countries, where thousands of entrepreneurs establish thousands of progressive small and medium enterprises (SMEs) that need government support. All of this prompted the researchers to conduct this study in an environment that relies heavily on SMEs as a result of the declining government role in public-sector employment, lack of resources, and poor financial conditions.
Importance of the Study
In light of global openness, the ease of transport and communication between countries and peoples, the spread of knowledge, and the potential for trade and industry around the globe, and given the vast resources of rich, industrialized countries and their scarcity in poor ones, business entrepreneurship and SMEs have become a vital strategy for dealing with these difficulties. Jordan is a developing country that suffers from many difficulties, such as a lack of natural resources, successive crises and migrations from neighboring countries, and heavy debt, which have made it one of the poorer countries in the world. The availability of creative human resources (Malkawi et al., 2018) has enabled the establishment of SMEs that make a positive contribution to the national economy and reduce unemployment. Jordan is also going through economically difficult times: the business environment is not conducive to many businesses, not to mention the unstable situation in the Middle East, and as a result many large companies are closing down while others are downsizing. SMEs have therefore become the most important actor in today's national economy, and they need support through legislation, funding, tax relief, data, and other initiatives to encourage this sector. They are considered one of the important sources of innovation and creation, and the government depends on them to reduce unemployment and poverty and to raise citizens' economic level, and thus the economic position of the country as a whole. However, these businesses still lack the experience to compete and grow in developing countries in general, and in Jordan in particular.
These SMEs have no ability to accumulate data and knowledge, and no resources to buy and collect data; the sector is new and needs support. Data is one of the major resources that can support these companies, so the availability of government data gives entrepreneurial businesses and entrepreneurs a key means to compete and succeed through innovation, integration, marketing, and so on. This needs to be studied; therefore, this study examines how open government data (OGD) affects entrepreneurial businesses in Jordan.
Problem Statement
SMEs face many challenges in today's business environment, one of which is the availability of government data. Governments use many tools to publish their data for the benefit of private and public organizations, citizens, entrepreneurial businesses and entrepreneurs, civil society organizations, and so on. The level of open government data in Jordan and its effect on entrepreneurial businesses and entrepreneurs is a significant issue that needs to be studied and measured. The problem of this study is therefore to examine the effect of open government data on entrepreneurial businesses in Jordan from the Jordanian Irbid State entrepreneurs' point of view, by answering the following question:
What is the effect of open government data on business entrepreneurship from the Jordanian Irbid State entrepreneurs' point of view?
Researching this topic and exploring open government data and its effect on entrepreneurial businesses is a very important issue for Jordanian SMEs seeking to sustain themselves and achieve competitive advantage.
Definition and importance of government data
Government institutions, such as ministries, public organizations, and municipalities, produce and collect huge amounts of data in the course of their work, covering various aspects of life such as health, law, transportation, national statistics, financial statements, laws, and regulations. The use of such data for one purpose does not preclude its use for other purposes by non-governmental organizations, civil society organizations, and the private sector. Many countries, especially developed ones such as the United States of America, Britain, and Germany, have realized this and made strides in the area. The importance of open government data stems from its ability to deliver many benefits at the national level, such as economic benefits, citizen participation, and improved quality of government services. In addition, making data available to end-users enhances decision-making, active participation, and the evaluation of development programs. Many economic analyses have shown the benefits the private sector gains from using open government data, whether through the emergence of a large number of entrepreneurial businesses and SMEs, increased efficiency and effectiveness of operations, or the creation of new products and services, which increases the volume of private-sector activity and revitalizes the national economy in general. Governments therefore seek to adopt open data approaches to increase transparency, participation, and cooperation, and thereby encourage innovation and achieve higher economic value. Many studies have indicated that the availability of reusable government data leads entrepreneurs to benefit from it in their businesses, whether in marketing, supply chains, innovation, or dealing with the everyday issues facing their SMEs (World Bank Summit for Open Data 2012-2017). Questions remain about this new concept: what is the optimal government strategy for open data?
Why do some governments succeed in adopting an open data approach while others fail and continue to struggle? How does open data contribute to increasing citizen confidence and participation, and how can it be exploited for economic development? Social networks and digital mobile devices have facilitated the creation of new services through easy access to data and its analysis and use anytime and anywhere (Malkawi & Halasa, 2016). Countries that adopt this approach build the necessary strategies and adopt advanced technologies to make OGD technically available, accessible, and readable, encouraging its use and reuse in SMEs. Still, open government data providers and users face many obstacles that limit continuous innovation, including a closed culture around data sharing, privacy policies, poor data quality, technical constraints, data security policies, and others (Huijboon & Roek, 2011). Open government data (OGD) is defined as machine-readable information, especially government data provided to others for use and for obtaining various benefits from it (Mckinsey Consulting Institute, 2013). It is also defined as digital data made available for public use that is accessible, processable, usable, reusable, and redistributable at no or very low cost (Smith & Sandberg, 2019), within the known conditions and principles of data availability, the most important of which are that the data should be primary, fully accessible, reachable, non-discriminatory, documented and verifiable, timely, non-proprietary, and manageable. Others define OGD as data owned by government institutions and published online (Smith & Sandberg, 2019). Data is thus freely accessible, used, reused, or distributed free of charge and without any legal, technical, or financial restrictions.
The data are provided complete, timely, non-discriminatory, readable, and not restricted by a particular license (https://opengovdata.org/, accessed 25/10/2019). Open government data (OGD) is a recent practice in the development of public services and public administration around the world, excluding personal, confidential, and sensitive data affecting national security. Governments and organizations that adopt an open data approach carry a heavy burden in protecting privacy and intellectual property, as well as in setting standards that facilitate data flow (Mckinsey Consulting Institute, 2013). Despite the gains made by many developed countries through their adoption of open data approaches, this approach has not been exploited, or has been exploited only narrowly, in many third-world countries (World Bank Summit for Open Data 2012-2017). Recently, governmental institutions have taken an interest in launching open data services to improve services for the public and benefit other sectors such as SMEs. Several initiatives to adopt open data approaches have emerged in the Arab world, including those of Jordan, Saudi Arabia, Tunisia, and others. Data is thus considered a valuable national resource and a strategic asset for the government, its partners, and the public; its value increases over time, and managing data as a strategic asset is a vital issue for the government.
Principles of Open Government Data (OGD)
In general, open government data should comply with the following principles (OpenGovData.org) (http://opendatatoolkit.worldbank.org/en/supply.html) (World Bank Summit for Open Data 2012-2017):
1. Public: government agencies must approve the openness of data to the extent permitted by law and subject to privacy, confidentiality, security, or other restrictions.
2. Accessible: OGD is made available in an accessible, editable, and open way so that it can be retrieved, downloaded, indexed, and searched. Open data structures do not discriminate against any person or group of people and are made available to the widest range of users for the widest range of purposes, to the extent permitted by law.
3. Machine processable: to be readable, data should be organized to allow automated processing.
4. Described: OGD is fully described so that data users have sufficient information to understand its strengths and weaknesses, analytical limitations, and security requirements, as well as how to address them. This includes complete metadata, comprehensive documentation of data elements, and data dictionaries.
5. License-free: OGD should be provided under an open license, with no restrictions on its use and reuse.
6. Complete: OGD is published in primary form (i.e. as collected from the source), with the highest possible accuracy.
7. Timely: open data is provided as quickly as possible, to maintain the data's value.
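As a purely illustrative sketch of the "machine processable" principle, the invented CSV below (not a real government dataset) shows how openly published tabular data can be parsed and aggregated automatically, with no manual transcription:

```python
import csv
import io

# Hypothetical machine-readable open dataset (invented figures), as a
# government portal might publish it in plain CSV.
open_csv = """region,registered_smes,year
Irbid,1200,2019
Amman,5400,2019
Irbid,1350,2020
"""

# Because the format is structured and documented, any program can
# consume it directly, e.g. to total SME registrations for one region.
rows = list(csv.DictReader(io.StringIO(open_csv)))
total_irbid = sum(int(r["registered_smes"])
                  for r in rows if r["region"] == "Irbid")
print(total_irbid)  # 2550
```

Data published only as scanned documents or unstructured text would force each reuser to re-enter it by hand, which is exactly what the principle rules out.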
Benefits of open government data (OGD)
As with any public initiative, open data involves cost and effort; government officials therefore often weigh the benefits of open data against the required levels of effort (http://opendatatoolkit.worldbank.org). Like other commodities, data has great potential to provide benefits. Some have referred to data as the new oil, because both data and oil have intrinsic value that must be "refined" or transformed to realize their full benefits. When government data can be accessed and reused, it enables individuals, organizations, and even governments to innovate and collaborate in new ways. In general, the benefits of open government data include:
1. OGD promotes transparency, reduces corruption, and enables monitoring of budgets and government spending.
2. It improves the quality of services provided by the public sector, by providing raw data to individuals, gathering feedback on government activities, or feeding other services.
3. It drives innovation and adds economic value. Reusing these data enhances social and entrepreneurial creativity, serves SMEs, and thus supports economic growth in general; in addition, it enables entrepreneurs to understand the market and provide products and services that meet market requirements.
4. It improves efficiency, as open data reduces the cost of data acquisition for both government and private institutions, especially SMEs.
According to the Mckinsey Consulting Institute (2013), OGD has generated considerable worldwide attention for its capacity to empower citizens, change how government works, improve public service delivery, and create new economic benefits. According to the same report, seven sectors alone (education, transport, consumer products, electricity, oil and gas, health, and finance) can generate an additional $3-5 trillion per year as a result of open government data, which has already led to the emergence of hundreds of entrepreneurs. In addition, OGD helps SMEs in market segmentation, identifying new products and services, and improving the efficiency and effectiveness of operations.
Business entrepreneurship
4.2.1 Business entrepreneurship: concepts and importance
The term "entrepreneur" was used during the seventeenth century to refer to a person entering into a contractual agreement with the government to provide the services and products stipulated and determined by the contract, and any resulting profits or losses bearded by the entrepreneur. During the seventeenth century, French economist Richard Cantillon developed one of the early theories of entrepreneurship, and regarded the entrepreneur as a risk-taking person. Where he noted the contradictions between supply and demand and options like buying at a cheap price and selling at a higher price. He defined the entrepreneur as a trader or farmer who "buys at a certain price and sells at an undetermined price and bear the risk for that" (Pahuja & Sanjeev, 2019). Alani et al. (2010) state that the entrepreneur term refers to a person who has the ability to discover and capture the opportunity, entrepreneur takes the risk to start the project then provides the necessary resources and capabilities to work and add value to the products, services, method and procedures. Therefore, the entrepreneur is working to find what is new and distinctive to meet the needs and desires of customers, where the result is either the acquisition of a tangible and intangible benefits, or exposure to moral and material loss. So, entrepreneurship has evolved from the traditional concept that means explore business opportunities before others do, then adopt informal and modern patterns in management that upgrading the organizational entrepreneurship (Akinyemi & Adejumo, 2018;Zidan, 2007). In the middle of the 20 th Century, Joseph Schumpeter was the first economist focusing on the role of entrepreneurship in economic development through innovation.
In his opinion, "The entrepreneur function is to reform or revolutionize the production process, by exploiting an invention process or a new technological method to produce a new commodity, or production a new one in a new way, finding a new supply source to buy material, or a new outlet for selling products" (Pahuja & Sanjeev, 2019). Hisrich and Petrts (2002) stated that the classical school defines the entrepreneur as a person who accepts risks and dealing with risk in uncertain circumstances, and who employed his administrative capacity to exploit capital to gain profit by increasing productivity. On the other hand, the economic school defines entrepreneurship as an element of production that regulates and coordinates the production process in uncertain circumstances. One of the entrepreneurs of this approach is the economist Adam Smith who pointed out that the entrepreneur is the person who owns or supplies the capital and is a mediator between producers and consumers. After that, the Austrian approach developed the concept of entrepreneurship, and explained its functions and roles, such as creativity, innovation and creative thinking. Which means creating a new product that surpasses the previous one and leads to create new demand in the market, then increasing entrepreneurs' benefits. One of the most prominent leaders of this school also is Schumpeter, who defined the entrepreneurship as innovation, creative thinking and presenting an unprecedented technological innovation (Hisrich & Petrts, 2002). Whereas "Entrepreneurship Network" refers to entrepreneurs, who organized formally or informally to increase the efficiency of their activities. Moreover, networking is considered an activity for entrepreneurs to obtain information about new entrepreneurial ideas and opportunities (Das & Goswami, 2019). 
Although entrepreneur and entrepreneurship mean different things to different people, there is agreement that entrepreneurial behavior involves taking initiative, organizing and reorganizing, deploying available resources in innovative ways, and accepting risk under uncertainty with the potential for success or failure (Mariotti & Glackin, 2010). In addition, entrepreneurs are people who start their own businesses and work for their own benefit; mostly they are both the owners and the employees of their companies (Mubarak, 2009). The entrepreneur's behavior under uncertainty about a possible profit opportunity is often a judgmental decision: entrepreneurs decide to start working even in risky and uncertain circumstances, yet they respond, innovate, and interact through their entrepreneurial work with everything around them, working dynamically and unremittingly to increase their wealth (Mariotti & Glackin, 2010). A related concept is entrepreneurial orientation, which refers to the style, practices, and decision-making model of entrepreneurial organizations. We conclude from the above that entrepreneurship is a key concept for exploiting opportunities that competitors cannot observe. Many scholars agree that entrepreneurial orientation is a combination of creativity, proactiveness, and risk tolerance. On this basis, a number of features distinguish entrepreneurs (Cowdrey, 2018): a clear and achievable vision, self-awareness, confidence, self-motivation, readiness to take calculated risks, willingness to listen to others, lack of fear of failure, and a desire to work hard. Entrepreneurs, the owners of SMEs, are considered the solid foundation of any strong economy.
Countries in East Asia, as well as Brazil, Turkey and even major economies, have reached their current position because they started from scratch: they helped and encouraged small projects such as home-based businesses, provided care to those with the opportunity to develop and grow a small business, and then continued to support those able to become medium-sized companies, until these grew into the giant entities of today's world economies.
Dimensions of entrepreneurship
The most important dimensions of entrepreneurship addressed in the theoretical management literature are (Andriopoulos & Dawson, 2009):
1. Creativity: a process that reflects a consistent tendency to participate in and support new ideas, novelty, experimentation, and processes that may lead to new products, services or new technological processes.
2. Innovation: the beneficial investment of new ideas and the process of transforming them into useful and usable products, processes and services. Innovation is strongly associated with business growth, and most forms of economic growth in recent decades are due to innovation; new ideas create new businesses.
3. Risk-taking: the combination of risk and opportunity, focusing on avoiding or reducing risks and hedging against them when looking for opportunities, by understanding risks and how to deal with them.
4. Proactiveness and initiative: proactive tendencies and self-initiation. Identifying proactive, self-initiating individuals, and recognizing, adopting and supporting them, is very important for the management of organizations (Lowe & Marriott, 2006).
5. Seizing opportunities: entrepreneurial businesses can gain opportunities through creativity and innovation in products and services, which enables them to capture customers and attract their attention and loyalty. Entrepreneurship is the pursuit of opportunity regardless of the resources currently available. The more an organization rearranges its resources, responds to the opportunities available, invests them intelligently, and seizes them before its competitors, the more it progresses and distinguishes itself from those competitors (Lowe & Marriott, 2006).
6. Self-renewal: reflects and translates the state of change, renewal, and transformation in organizations as an ongoing process, by renewing the main ideas on which the organization is based (Guth & Ginsberg, 1990).
Entrepreneurship in Arab countries
SMEs in the third world lag behind competitive and powerful companies because the latter, large ones, are able to own and use technology, share data, and cooperate with regional and global partners. Other reasons relate to their ability to invest in research and development, their use of digital and technological platforms, and their ability to invest in human resources (Hisrich & Peters, 2002). Most Arab countries live under poor economic and unstable political conditions. In addition, the current reality of entrepreneurship in the Arab countries as a whole is still in need of development and support in order to encourage young entrepreneurs to transform their ideas, initiatives, and opportunities into successful and productive projects; they need to be provided with a conducive and encouraging environment (Kuratko & Hodgetts, 2001). Jordan is one of these countries; it has efficient and skilled human resources with high abilities and aspirations to improve their economic conditions, and entrepreneurship is one of the most important means of doing so (Malkawi, 2017; Al-Khasawneh & Malkawi, 2018). The success of entrepreneurial endeavors has a major impact on the economy of any country; they help achieve individual aspirations and goals such as financial gains, self-realization, and social identification. Since we are still living in the age of enterprise development in the Arab world in general and Jordan in particular, entrepreneurship and entrepreneurs play a great role in economic development, so they need to be encouraged by educational institutions, government, and other civil-society organizations. There are a large number of young entrepreneurs with no prior experience who need assistance and facilitated access to finance, as well as administrative and technical support (Malkawi et al., 2017).
For these reasons and others, increasing funding opportunities, improving infrastructure, providing less stringent labor regulations, and opening government data can contribute to improving the competitiveness of SMEs. This requires urgent action, as studies have indicated five factors that hinder the development of entrepreneurship (Moghaddam & Izadi, 2019): financial problems, market orientation, lack of data, a poor and inappropriate business environment, and a lack of supportive government policies. This study addresses one of the most important of these factors, namely the lack of data.
Relationship between open data and business entrepreneurship
Open government data is increasingly linked to the creation of new products and services. One study (2013) aimed to determine the contribution of open government data to the economic value of seven main sectors (education, transport, consumer products, electricity, oil and gas, health, and finance) and concluded that open data contributes 3-5 trillion dollars annually across these sectors. Data liquidity and the adoption of an open data approach encourage entrepreneurs to innovate and create, and serve small, medium and large companies in surveying markets and providing products and services suited to market needs. Malkawi (2017) aimed to investigate how to enhance entrepreneurship through E-Commerce adoption at SMEs in Jordan. The study concluded that SMEs use E-Commerce at high rates, that entrepreneurship is also high, and that there is a statistically significant effect of E-Commerce on entrepreneurship as a whole and on all its components in Jordanian SMEs; it recommended that SMEs and entrepreneurs expand the use of E-Commerce in their work. Walker and Simperl (2018) (2019) indicated that the value added of big data is the ability to identify useful data and transform it into usable information. This work shows the need for new analytical thinking and computational skills among the new generation of entrepreneurs to deal with big data challenges and create new opportunities from these data; it also identifies the role of big data in transforming possibilities into realistic opportunities.
Objective of the Study
The general purpose of this study is to find out the role of open government data on business entrepreneurship at Jordanian (SMEs)-Irbid State.
Hypothesis
The main hypothesis: There is a significant positive effect of open government data on business entrepreneurship at Jordanian (SMEs)-Irbid State.
Minor hypotheses are: P1: There is a significant positive effect of technological issues on business entrepreneurship at Jordanian (SMEs)-Irbid State. P2: There is a significant positive effect of quality of data on business entrepreneurship at Jordanian (SMEs)-Irbid State. Fig. 1 shows the proposed study model (dependent and independent variables). The questionnaire for collecting data from the study sample was developed based on (2015) and other works. Face and content validity were assessed by faculty members whose specializations relate to the subject of the study. The resulting items were then ordered randomly for each construct, and a 1-5 Likert scale was used to measure the responses.
Fig. 1. Study model: open government data (technological issues, quality of the data) as independent variables and business entrepreneurship as the dependent variable.
Study Population and Sample
The study population consisted of all SMEs in Irbid state, Jordan. A non-probability (convenience) sample was selected because of the absence of an integrated database of entrepreneurs and start-ups, so the size of the population could not be determined. Thus, 600 questionnaires were distributed to collect field data from SMEs, of which 536 valid questionnaires were recovered (89%).
Data analysis
The aim of this study is to investigate the role of open government data on business entrepreneurship. We used two sub-variables to reflect this role: the technology used to disseminate open government data and the quality of open government data. This study predicts that the open government data variables have effects on business entrepreneurship. To achieve the main objective of this study, we performed several analyses, as presented in the next sections.
Descriptive statistics
In this section, we present the demographic characteristics of the respondents and descriptive statistics for the research variables. As mentioned above, 536 valid responses were recovered: 498 males and 38 females; 216 respondents were less than 30 years old, 194 between 30 and 40, and the rest over 40. Sixty-two held a diploma or less, 428 held a bachelor's degree, and the rest held postgraduate degrees; 54 had less than 5 years of experience, 346 had 5 to less than 10 years, and 136 had more than 10 years of experience. Table 1 shows the descriptive statistics of the variables investigated in this study as assessed by the respondents. The respondents' perception of each variable is assessed based on its mean. The mean is judged as low if (< 2.33), moderate if (>= 2.33 and <= 3.67), and high if (> 3.67). With this in mind, the respondents generally assessed open government data as moderate, with a total mean of (3.51); they assessed it as high in terms of the technology used and lower in terms of the quality of the data opened. The level of business entrepreneurship was also moderate, with a total mean of (3.59). As shown in Table 1, technology as a sub-variable of open government data scores at a high level with a mean of (3.73) and standard deviation of (0.72), whereas data quality is at a moderate level with a mean of (3.30), and open government data as a whole is at a moderate level (3.51), which needs more attention from government. Business entrepreneurship is also at a moderate level, with an arithmetic mean of (3.59); Appendix 1 shows this in more detail. Before the regression analysis, we examined the normal distribution of the data, potential multicollinearity, and the internal consistency of the variables to confirm the quality of the data examined; we estimated the Skewness and Kurtosis to assess the normality of the data.
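The mean-based judgment rule described above can be written as a small helper; the cut-offs (low < 2.33, high > 3.67) are those stated in the text:

```python
def judge_mean(mean):
    """Classify a 1-5 Likert mean using the study's cut-offs:
    low < 2.33, moderate in [2.33, 3.67], high > 3.67."""
    if mean < 2.33:
        return "low"
    if mean <= 3.67:
        return "moderate"
    return "high"

# Values reported in Table 1:
print(judge_mean(3.73))  # technology -> high
print(judge_mean(3.30))  # data quality -> moderate
print(judge_mean(3.51))  # open government data overall -> moderate
print(judge_mean(3.59))  # business entrepreneurship -> moderate
```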
As shown in Table 2, the values of Skewness and Kurtosis for each variable are below the threshold of an absolute value of (2), suggesting that our data are normally distributed. For multicollinearity, the Variance Inflation Factor (VIF) is frequently used; according to the rule of thumb, a VIF value of (5) or higher indicates a potential multicollinearity problem (Hair & Anderson, 2010). The results presented in Table 2 show that the VIF values vary between (1.176 and 2.251), which is less than the cut-off value of (5); therefore, the proposed path model has no multicollinearity issue. We assessed internal consistency by estimating Cronbach's Alpha for each variable; the values are 0.74 for OGD, 0.83 for business entrepreneurship, and 0.84 for the tool as a whole, all above the threshold value of 0.70. Therefore, our data are credible and can be used safely in the regression analyses.
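The three data-quality checks used above (normality via skewness, multicollinearity via VIF, internal consistency via Cronbach's alpha) can be sketched with plain NumPy. This is an illustrative re-implementation of the standard formulas, not the software used in the study:

```python
import numpy as np

def skewness(x):
    """Sample skewness; |value| < 2 is taken as acceptably normal."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

def vif(X):
    """Variance inflation factor per predictor column of X;
    values >= 5 flag a potential multicollinearity problem."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        # Regress column j on all other columns (plus intercept).
        y, others = X[:, j], np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - ((y - A @ beta) ** 2).mean() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix;
    values above 0.70 indicate acceptable internal consistency."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))
```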
Hypotheses testing
To examine our hypotheses, we ran two models. In the first model, we examine the main hypothesis, which predicts a positive and significant effect of open government data on business entrepreneurship. In the second model, we examine the effect of each dimension of open government data, namely technology and data quality. As shown in Table 3, open government data has a significant effect on business entrepreneurship as a whole. The sign of the standardized coefficient is positive, suggesting that the relationship is positive and significant at p < 0.05. This result offers sufficient evidence to accept the main hypothesis of this study. The result of the second model, which examines the effect of each dimension of open government data on business entrepreneurship, is presented in Table 4. The overall F-test is significant at (p < 0.00), indicating that the dimensions of open government data are jointly significant. The model explains about 42% of the variance, as reflected by the R² value of 0.42. Individually, the results show that the standardized coefficient (Beta) of technology is not significant at p < 0.05, indicating that technology on its own has no effect on business entrepreneurship. Accordingly, we reject the sub-hypothesis (P1) that technology has a positive effect on business entrepreneurship. The same table shows that the standardized coefficient (Beta) of data quality is significant at p < 0.05, indicating that data quality individually has an effect on business entrepreneurship. Accordingly, we accept the sub-hypothesis (P2) that data quality has a positive effect on business entrepreneurship; this suggests that as data quality increases, business entrepreneurship increases.
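The second regression model can be sketched as follows. The data below are synthetic, generated only to mimic the reported pattern (data quality driving entrepreneurship, technology adding little on its own); none of the coefficients are the study's actual estimates:

```python
import numpy as np

def ols(y, X):
    """OLS with intercept; returns coefficients and R-squared."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - ((y - A @ beta) ** 2).mean() / y.var()
    return beta, r2

def standardized_betas(y, X):
    """Z-score all variables so the slopes are standardized betas."""
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    zy = (y - y.mean()) / y.std()
    zX = (X - X.mean(axis=0)) / X.std(axis=0)
    beta, r2 = ols(zy, zX)
    return beta[1:], r2  # drop the intercept

# Synthetic sample of n = 536 (the study's sample size): entrepreneurship
# driven mainly by data quality, with technology adding little on its own.
rng = np.random.default_rng(0)
n = 536
tech = rng.normal(3.73, 0.72, n)
quality = rng.normal(3.30, 0.70, n)
be = 1.2 + 0.65 * quality + 0.05 * tech + rng.normal(0, 0.5, n)

# Model 2: business entrepreneurship ~ technology + data quality
betas, r2 = standardized_betas(be, np.column_stack([tech, quality]))
print(np.round(betas, 2), round(r2, 2))  # data quality's beta dominates
```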
Results
The main results of the study are:
"year": 2021,
"sha1": "3d62229097ee1baf59a04c8b1449c969aa0d5e63",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5267/j.msl.2020.10.013",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "912558611a7e56223a4a1ee0195172a446632982",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Business"
]
} |
A Multi-Criteria Decision Support System for Strategic Planning at the Swiss Forest Enterprise Level: Coping With Climate Change and Shifting Demands in Ecosystem Service Provisioning
Sustainable forest management plays a key role for forest biodiversity and the provisioning of ecosystem services (BES), including the important service of carbon sequestration for climate change mitigation. Forest managers, however, find themselves in the increasingly complex planning situation to balance the often conflicting demands in BES. To cope with this situation, a prototype of a decision support system (DSS) for strategic (long-term) planning at the forest enterprise level was developed in the present project. The DSS was applied at three case study enterprises (CSEs) in Northern Switzerland, two lowland and one higher-elevation enterprise, for a 50-year time horizon (2010 to 2060) under present climate and three climate change scenarios (‘wet’, ‘medium’, ‘dry’). BES provisioning (for biodiversity, timber production, recreation, protection against gravitational hazards and carbon sequestration) was evaluated for four management scenarios (no management, current (BAU), lower and higher management intensity) using a utility-based multi-criteria decision analysis. Additionally, four alternative preference scenarios for BES provisioning were investigated to evaluate the robustness of the results to shifting BES preferences. At all CSEs, synergies between carbon sequestration, biodiversity and protection function as well as trade-offs between carbon sequestration and timber production occurred. The BAU management resulted in the highest overall utility in 2060 for different climate and BES preference scenarios, with the exception of one lowland CSE under current BES preference, where a lower intensity management performed best. Although climate change had a relatively small effect on overall utility, individual BES indicators showed a negative climate change impact for the lowland CSEs and a positive effect for the higher elevation CSE. 
The patterns of overall utility were relatively stable to shifts in BES preferences, with exception of a shift toward a preference for carbon sequestration. Overall, the study demonstrates the potential of the DSS to investigate the development of multiple BES as well as their synergies and trade-offs for a set of lowland and mountainous forest enterprises. The new system incorporates a wide set of BES indicators, a strong empirical foundation and a flexible multi-criteria decision analysis, enabling stakeholders to take scientifically well-founded decisions under changing climatic conditions and political goals.
INTRODUCTION
Forest ecosystems play a key role for biodiversity and ecosystem services (BES) provisioning (UNCCC, 2015; United Nations CBD, 2020). Over the past decades, the portfolio of demands for BES has increased considerably, most recently through the rising awareness of the role of forests in climate change (CC) mitigation due to carbon sequestration (Luyssaert et al., 2010). Forest managers therefore find themselves in the difficult position of balancing the political demands for biodiversity promotion and ecosystem service provisioning with other socio-economic demands. Besides the importance of forests for carbon sequestration, timber production and biodiversity, progressive urbanization leads to an increasing demand for the recreation value of forests (e.g., Hegetschweiler et al., 2020). Furthermore, many forests offer protection functions, e.g., in mountainous areas where forests play a key role in protecting settlements and infrastructure against gravitational hazards, such as rockfall, avalanches and landslides (Frehner et al., 2005). Since these diverse demands for BES are often in conflict with each other (e.g., Mina et al., 2017), forest managers have to cope with significant trade-offs in planning (e.g., Langner et al., 2017; Blattert et al., 2018; Bont et al., 2019). Further complexity is added to the planning situation by the impacts of climate change on forest ecosystems, which will likely induce profound shifts in forest BES provisioning (Mina et al., 2017; Seidl et al., 2019). In this increasingly complex and diverse planning situation, science-based decision support is thus key for planning the sustainable management of multifunctional forests (Kangas et al., 2015).
For this purpose, various decision support systems (DSS) have been developed in forestry across the globe (Vacik and Lexer, 2014; Nordström et al., 2019) and are increasingly used to explore synergies and trade-offs in BES (e.g., Biber et al., 2020). Although a DSS can in principle be any system that aids decision makers, the term typically refers to model-based software systems which provide a user interface, a 'knowledge system' (database, models, etc.) and a 'problem processing system' (e.g., for calculating decision analyses). Over time, DSS have been developed from systems for a single purpose (e.g., evaluation of sustainable timber production) to systems including multiple criteria (e.g., a wide variety of BES) and a modular construction (i.e., providing an integrative and flexible software framework) (Eriksson and Borges, 2014). A particular challenge is to keep the system easy to handle and to provide results in a condensed and transparent way for the decision maker (Vacik and Lexer, 2014). DSS in forestry are therefore mostly developed for a specific region and a particular environmental, social and economic situation at a specific spatial and temporal scale of interest (Eriksson and Borges, 2014). Despite the various DSS existing worldwide, such systems thus need to be tailored toward the specific needs of local forest management and planning tasks.
In Switzerland, forests are characterized by a wide variety of forest types, reflecting the large elevational and environmental gradients from lowland to alpine conditions (Rigling and Schaffer, 2015). As in several European countries, forests in Switzerland are managed according to the principle of 'close-to-nature' forestry (Hanewinkel and Kammerhofer, 2015), which aims at reaching multiple ecological, economic and social goals in a sustainable way by applying management interventions which follow natural processes in forest ecosystems (Messier et al., 2013). The large variety of forest conditions is also reflected in a number of DSS and tools that have already been developed to aid forest practitioners (Heinimann et al., 2014), such as WIS2 for short- to mid-term silvicultural planning (Rosset et al., 2014). In recent years, a DSS has been introduced by Blattert et al. (2018) that allows combining forest ecosystem simulation and multi-criteria decision analysis (MCDA, see also Wolfslehner and Seidl (2010)) and that focuses on long-term planning (i.e., several decades) at the forest enterprise level. The MCDA approach allows to account for multiple, often conflicting criteria, to integrate explicitly stated stakeholder preferences, and to explore the performance of alternative assumptions, which leads to a rational and structured decision process that can be communicated in a justified and transparent way (Wolfslehner and Seidl, 2010; Uhde et al., 2015; Schweier et al., 2019). Moreover, the MCDA approach allows to integrate a wide range of BES indicators as well as Swiss-wide relationships between simulated BES supply and the demands of the society (via so-called value functions) (Blattert et al., 2017). The DSS framework of Blattert et al.
(2018) furthermore goes beyond previous assessments of BES in Switzerland (e.g., Bircher, 2015) by incorporating also social services (i.e., recreation function) and offering a more holistic perspective on carbon sequestration by assessing not only sequestration within forests ('in-situ') but also outside the forest system boundary ('ex-situ'). Particularly assessments of 'ex-situ' carbon sequestrations that are accounting for harvested wood products and substitution effects are highly relevant for future forestry (Nabuurs et al., 2017), but are still rarely included in DSS (Seidl et al., 2007;Blattert et al., 2020). This DSS framework is hence well-suited to explore not only shifts in the long-term BES provisioning, but also to conduct a more detailed analysis of potential shifts in management strategies on carbon sequestration and associated synergies and trade-offs with other BES. However, the DSS of Blattert et al. (2018) was based on a forest growth model developed for Northern Germany and was restricted to applications under present climatic conditions and hence not suited to explore developments under climate change.
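The utility-based MCDA idea described above can be illustrated with a minimal additive aggregation: each BES indicator is mapped onto a 0-1 utility via a value function and combined using preference weights. All indicator values, bounds and weights below are hypothetical illustrations, assuming linear value functions; they are not the actual functions or weights of Blattert et al. (2018):

```python
def value_function(x, worst, best):
    """Map an indicator value linearly onto a 0-1 utility scale."""
    u = (x - worst) / (best - worst)
    return min(max(u, 0.0), 1.0)

def overall_utility(indicators, bounds, weights):
    """Weighted additive aggregation over BES indicators."""
    total_w = sum(weights.values())
    return sum(weights[k] / total_w * value_function(indicators[k], *bounds[k])
               for k in indicators)

# Hypothetical indicator values for one management scenario in one year:
indicators = {"timber": 6.5, "carbon": 120.0, "biodiversity": 0.4,
              "recreation": 0.7, "protection": 0.8}
bounds = {"timber": (0, 10), "carbon": (0, 200), "biodiversity": (0, 1),
          "recreation": (0, 1), "protection": (0, 1)}
weights = {"timber": 0.30, "carbon": 0.20, "biodiversity": 0.20,
           "recreation": 0.15, "protection": 0.15}

print(round(overall_utility(indicators, bounds, weights), 3))  # -> 0.62
```

Shifting the weights (e.g., toward carbon sequestration) re-ranks management scenarios without re-running the forest simulations, which is what makes the alternative preference scenarios in the study cheap to evaluate.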
In recent years, climate change has increasingly impacted forests in Switzerland (e.g., Brun et al., 2020) and is gaining rising importance in the strategic (long-term) planning of forest management (Streit et al., 2017). While climate change impacts on Swiss forests are predominantly negative at lower elevations (e.g., reduced growth, increased mortality due to drought and heat), the trend toward a prolonged growing season has positive effects on forest growth at higher elevations (Bugmann et al., 2014). These large-scale patterns of climate change impacts can however be significantly modified by smaller-scale environmental heterogeneity, e.g., due to the effect of aspect, slope or orographic rainfall (Whiteman, 2000; Zou et al., 2007). Climate change impacts at the forest enterprise level can consequently be complex and site-specific, particularly in mountainous areas (Mina et al., 2017; Thrippleton et al., 2020).
It is therefore important to develop a DSS for Swiss conditions, which: (1) is built on a strong empirical basis reflecting the large gradient of climatic and environmental conditions, (2) is suitable for strategic (long-term) planning at the forest enterprise level under both lowland and mountainous conditions, (3) covers a large variety of BES relevant for Swiss forestry, including the service of carbon sequestration by accounting for 'in-situ' as well as 'ex-situ' sequestration, and which (4) is able to investigate different climate change trajectories to provide scientific decision support in complex planning situations.
Here, we present a prototype for a new DSS for strategic planning at the forest enterprise level that addresses these aspects. The system is based on the revised MCDA framework of Blattert et al. (2018) and a new climate-sensitive forest growth model developed for Switzerland (SwissStandSim, Zell et al., 2020). The DSS was applied to three representative case study enterprises (CSEs) in Northern Switzerland with different priorities in BES provisioning (two lowland and one mountainous enterprise) for a 50-year time-period (2010 to 2060). Particularly, our research objectives were to: (1) identify and quantify synergies and trade-offs between BES for different management strategies accounting also for the effect of increasing carbon sequestration on other BES, (2) identify the management strategy that best provides multiple BES, and (3) analyze shifts in BES provisioning under climate change ('dry' , 'wet' , 'moderate' scenarios). Since the implications of shifting demands (i.e., weighting preferences) in BES provisioning is highly relevant for decision making (Langner et al., 2017), we also analyzed (4) four alternative BES demand preferences.
Case Study Enterprises
Three case study enterprises (CSEs) in Northern Switzerland were selected for the DSS application, with different demands in BES provisioning (Figure 1). Enterprise 1 ('Wagenrain' , abbreviated as WAG) is located in the Swiss lowland plateau and focuses mainly on timber production. Enterprise 2 ('Bülach' , abbreviated BUE) is also located in the lowlands of the Swiss plateau, and focuses more on the recreation service due to its close proximity to urban areas. Enterprise 3 ('Gottschalkenberg' , abbreviated GOT), in contrast, is located in the Northern Pre-Alps at higher elevations and has a specific focus on biodiversity and protection against gravitational hazards (mostly erosion and landslides). The specific environmental conditions (geology, soil, climate, vegetation) of each CSE are summarized in Table 1, an overview of stand structure and tree species composition is provided in the Supplementary Appendix 1.1.
Decision Support System
The DSS aims at providing forest managers with information about BES development in Swiss forest enterprises under changing management and climate conditions, thereby highlighting particularly synergies and trade-offs among BES. At the spatial scale, it is designed for applications at the enterprise level (typically several 100 ha) with the representation of individual stands. At the temporal scale, it provides information at 5-or 10-year intervals and can be used for future projections of several decades. In the present study, a time horizon of 50 years (2010 to 2060) was considered.
The system consists of three core components, which are further described in more detail below: (1) a database (data on climate, soils, stands and management settings), (2) a forest growth model (SwissStandSim, Zell, 2018; Zell et al., 2020) and (3) a multi-criteria decision analysis (MCDA) system (Blattert et al., 2018), which evaluates the BES indicators calculated from the simulation results (see Figure 2 for a conceptual figure of the DSS structure). In its current prototype stage (v1.0), the DSS is fully functional, but does not provide an interactive graphical user interface (GUI) yet.
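The three-component architecture could be wired together roughly as follows. The `Stand` fields, function signatures and the five-year stepping are simplified assumptions made for illustration; they are not the actual SwissStandSim or MCDA interfaces:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Stand:
    stand_id: str
    trees: list   # individual-tree records, e.g. (species, dbh)
    site: dict    # soil, climate and topography attributes

def run_dss(stands: List[Stand],
            grow: Callable[[Stand, int], Stand],
            indicators: Callable[[List[Stand]], Dict[str, float]],
            mcda: Callable[[Dict[str, float]], float],
            years=range(2010, 2065, 5)):
    """Chain the three components: database -> growth model -> MCDA,
    evaluating overall utility at each 5-year time step."""
    utilities = {}
    for year in years:
        stands = [grow(s, 5) for s in stands]  # one 5-year growth step
        utilities[year] = mcda(indicators(stands))
    return utilities

# Toy run: identity 'growth', one constant indicator, trivial MCDA.
toy = run_dss([Stand("s1", trees=[], site={})],
              grow=lambda s, dt: s,
              indicators=lambda st: {"overall": 0.5},
              mcda=lambda ind: ind["overall"])
```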
Database
The datasets required for the application of the forest model comprise (1) environmental data (soil conditions, elevation, slope, aspect, climate, nitrogen deposition), (2) stand-level data of forest structures and composition (diameter, height and species of individual trees) and (3) management settings (defining time, intensity and type of management interventions), see also Zell (2018) and Zell et al. (2020).
For topographic information, a digital terrain model for Switzerland (200 m resolution, Swisstopo, 2010) was used to derive mean elevation, slope and aspects for each stand. Soil conditions (soil depth, water holding capacity, water permeability, nutrient availability) at each stand were derived from the Swiss soil suitability map (FSO, 2012), following the approach of Zell et al. (2020). For the climate data, the Swiss wide climate data of Brunner et al. (2019) were used for the respective location of the forest enterprises, see section 'climate scenarios' below for further information. Nitrogen deposition required by the forest growth model was estimated from the Swiss Nitrogen deposition maps (FOEN, 2015b) for each CSE and assumed to remain constant for the considered timeframe.
For the simulation of the forest growth with SwissStandSim, individual tree data (i.e., species, diameter, height) is required at the stand level (Zell et al., 2020). Since individual tree data is typically not available at the level of the entire enterprise, this information was derived from local-scale forest enterprise inventories for WAG (Bont et al., 2020), BUE ('Kt. Zürich, ALN, Abt. Wald, forest inventory 2016') and GOT ('Kt. Zug, Amt für Wald und Wild, forest inventory 2009'). Forest enterprise inventories were conducted using the method of Schmid-Haas et al. (1993), comprising data of individual tree records (e.g., species, diameter) measured within circular plots, which are located along a dense regular grid (see Table 1 for further details). Due to the 5-year resolution of the DSS, it was assumed that the last inventories of the CSEs (which were measured between 2008 and 2016) provide representative datasets for generating stand structure and composition of the year 2010 (i.e., the starting point of the simulation).
For each forest enterprise, the local-scale enterprise level inventory data and information from the Swiss national forest inventory (NFI) were used to predict locally adapted complete stand descriptions using the approach of Mey et al. (2021). The approach consists of four steps: (1) forest enterprise inventory data was used to calculate stand-level summary statistics, (2 and 3) the summary statistics were adjusted to account for young trees (based on NFI-data, since enterprise-level inventories did not measure trees smaller than 12 cm diameter) and were subsequently used to predict stand-level diameter distributions data, and (4) tree species composition was assigned based on the forest enterprise inventory data. A detailed description and an evaluation of the stand initialization approach is provided in Supplementary Appendix 1.1.
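The spirit of the four-step stand initialization can be shown with a deliberately simplified sketch: an assumed Weibull diameter distribution for the callipered trees, a uniform young-tree layer standing in for the NFI-based adjustment below the 12 cm inventory threshold, and hypothetical species shares. This is not the actual Mey et al. (2021) procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

def initialize_stand(n_trees, weibull_shape, weibull_scale,
                     species_shares, young_share=0.2, min_dbh=12.0):
    """Generate an individual-tree list (species, DBH in cm) for one stand.

    Step 1: draw adult diameters from a Weibull shaped by the enterprise
            inventory's summary statistics (parameters assumed here).
    Steps 2-3: inventories omit trees below 12 cm DBH, so a young-tree
            share (uniform 4-12 cm here) stands in for the NFI adjustment.
    Step 4: assign species according to the inventory's species shares.
    """
    n_young = int(n_trees * young_share)
    dbh_adult = min_dbh + rng.weibull(weibull_shape, n_trees - n_young) * weibull_scale
    dbh_young = rng.uniform(4.0, min_dbh, n_young)
    dbh = np.concatenate([dbh_adult, dbh_young])
    species = rng.choice(list(species_shares), size=len(dbh),
                         p=list(species_shares.values()))
    return list(zip(species, np.round(dbh, 1)))

stand = initialize_stand(400, weibull_shape=1.8, weibull_scale=18.0,
                         species_shares={"Fagus sylvatica": 0.6,
                                         "Picea abies": 0.4})
```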
Forest Growth Model
The forest model SwissStandSim is an empirical, climate-sensitive forest growth model, which was developed for single- and mixed-species forests in Swiss lowland and mountain forests (Zell, 2018; Zell et al., 2020). The calibration dataset of SwissStandSim covers 374 stands located throughout Switzerland with observation timespans of 15 to 112 years from the experimental forest management network (Forrester et al., 2019).
In terms of forest management, the user can select different forest management types (e.g., thinning from below, thinning from above, crop tree selective thinning) and define the beginning, end and time intervals as well as the intensity of interventions (Zell, 2018). SwissStandSim represents processes of individual tree demography explicitly via species-specific statistical models for regeneration (ingrowth), growth and mortality. These demographic models consider climatic factors as explanatory variables, i.e., mean annual temperature, precipitation and a moisture index, which feature non-linear effects and contain interactions (Zell, 2016). Changes in climatic conditions thus have species-specific impacts on growth, ingrowth and mortality, which are described in further detail in Zell (2018) and Zell et al. (2020). The forest model considers 11 species and species groups, covering the main tree species in Switzerland, i.e., Fagus sylvatica, Acer sp., Quercus sp., Picea abies, Abies alba, Pinus sylvestris and Larix decidua. Furthermore, Douglas fir (Pseudotsuga menziesii) as well as the species groups 'long-lived broadleaved', 'short-lived broadleaved' and 'other conifers' were represented (see Zell, 2016; Zell et al., 2020).
Simulations can be carried out for timespans of several decades with a temporal resolution of 5 years. For the present study, all simulations were carried out for the timeframe of 2010 to 2060, i.e., a time span of 50 years, corresponding to a typical timeframe of several decades for long-term strategic forest management planning (Segura et al., 2014).
Management Strategies
Four management strategies were defined at all CSEs: (1) 'no management' (NO), which serves as a baseline scenario, (2) a 'business as usual' (BAU, current management strategy), which aims at a multifunctional use of the forest, (3) a 'higher intensity' (HIGH) management strategy, aiming at an increased timber use as an essential component for future bio-economy, as well as (4) a 'lower intensity' (LOW) management, which can be interpreted as a focus on a more biodiversity conservation-oriented strategy. With respect to carbon sequestration, the management strategies can also be regarded as a gradient from more 'in-situ carbon storage' in the forest ecosystem ('NO' management strategy) to an increased focus on 'ex-situ carbon storage' in wood products and substitution effects for the 'HIGH' intensity management strategy.
The BAU strategy (i.e., management type, intervention times and intensity) was defined in collaboration with the respective forest managers of each enterprise ( Table 2, see also Supplementary Appendix 1.2). The corresponding 'LOW' and 'HIGH' management strategies were defined using the same management type as in BAU, but adapted, so that the amount of basal area removed equals −50% ('LOW' intensity) and +50% ('HIGH' intensity), relative to the BAU strategy (see Table 2). Further details about the management strategies are provided in Supplementary Appendix 1.2.
Climate Scenarios
For the simulation of present and future climate conditions, downscaled climate datasets by Brunner et al. (2019) were used for each CSE and aggregated to 5-year averages (annual mean temperature, precipitation sum and moisture index), as described in Zell (2018). The climate data by Brunner et al. (2019) were based on representative concentration pathways (RCP) and downscaled using a regional downscaling approach based on quantile mapping (Gudmundsson et al., 2012). For simulations under present (historic) climate conditions, climate data (dataset CC22, see Table 3) from the reference period 1981 to 2010 (CH2018, 2018) was used and expanded to a 50-year climate time series by randomly resampling climate data. When compared with data from nearby climate stations for each CSE, the downscaled historic climate datasets showed good agreement with measured climate data of the same reference period. For the climate change scenarios, three scenarios recommended by Brunner et al. (2019) as representing a typical 'dry' (CC1), 'medium' (CC22) and 'wet' (CC7) future climate were used for the period 2010 to 2060 (see Table 3). An overview of the climate change scenarios is provided in Table 3; for technical details about the datasets, cf. Brunner et al. (2019).
FIGURE 2 | Structure of the decision support system (DSS), consisting of three main components: a database (1), a forest growth model (2) and a problem-processing system, comprising a set of biodiversity and ecosystem service (BES) indicators calculated from the simulated forest characteristics (3.1) and a multi-criteria decision analysis (MCDA, 3.2).
BES Indicators
The effects of different management and climate scenarios were analyzed for five selected groups of BES indicators: timber production, biodiversity, recreation value (visual attractiveness), protection against gravitational hazards and carbon sequestration, using the indicator-based analysis framework of Blattert et al. (2017). The framework comprises 21 individual BES indicators applicable throughout Switzerland, which were calculated based on the simulated forest stand structures for each timestep. Below, a short summary of all BES indicators is given; a detailed description can be found in Supplementary Appendix 1.3.
Timber production indicators included the amount of timber volume harvested (m³ ha⁻¹ year⁻¹), the productivity of the stand (annual net volume increment), the sustainability of timber use (expressed by the ratio between harvest and productivity) and the growing stock of the stands.
Biodiversity indicators comprised species diversity (expressed by the Shannon index, Shannon and Weaver, 1949) and structural diversity (PostHoc index, Staudhammer and LeMay, 2001) at the level of alpha diversity (representing diversity within each stand) and gamma diversity (representing diversity at the enterprise level, i.e., the landscape scale), see also Jost (2007) and Sebald et al. (2021). Furthermore, the amount of deadwood as well as the number of habitat trees (defined as the number of trees per ha with a diameter of > 70 cm) were considered, which are particularly important structural attributes for biodiversity conservation (e.g., Bütler et al., 2020; Haeler et al., 2021). Recreation value was considered in terms of forest visual attractiveness, which can be linked to stand structural attributes (Edwards et al., 2012). Based on the framework of Edwards et al. (2012), the considered indicators were: size of largest trees (m), variation in tree size (PostHoc index), extent of canopy cover (i.e., percentage of ground covered by canopy), visual permeation through the stand (expressed via the stand density index, Daniel and Sterba, 1980), variation in tree species (Shannon index), residues from harvest and thinning as well as deadwood from natural mortality (deadwood volume in m³ ha⁻¹).
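As an illustration of the species diversity indicator, the Shannon index can be computed from species abundance shares. The following minimal sketch is in Python (the DSS itself was implemented in R), and the basal-area shares are hypothetical:

```python
import math

def shannon_index(abundances):
    """Shannon diversity H = -sum(p_i * ln(p_i)), computed over the
    relative shares p_i of each species (zero shares are skipped)."""
    total = sum(abundances)
    shares = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in shares)

# Hypothetical basal-area shares (m^2/ha) per species
mixed = shannon_index([12.0, 8.0, 5.0, 3.0])  # mixed-broadleaved stand
pure = shannon_index([28.0])                  # single-species stand -> 0.0
```

For a stand with four species, H is bounded above by ln(4) ≈ 1.39, reached only when all species contribute equal shares.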
For carbon sequestration, the amount of carbon in living aboveground and belowground tree biomass was calculated using species-, region- and elevation-specific allometric equations from the Swiss National Forest Inventory (Herold et al., 2019), as well as deadwood originating from natural mortality and harvest residues. Using an adapted approach of Blattert et al. (2018), carbon stored in three harvested wood product pools (sawnwood, wood-based panels, paper and paperboard; classification by UNFCCC, see IPCC, 2014a) as well as substitution effects were furthermore considered, with energy wood substituting for fossil fuel emissions and construction wood substituting for emissions by fossil-fuel-intensive construction materials (e.g., concrete, steel) (Taverna et al., 2007). Since the developed DSS is intended to be a tool for forest enterprises aiming at the comparison of potential impacts of different forest management strategies on the provision of BES, we define the system boundary at the level of the respective forest enterprises. A detailed description of the carbon sequestration is provided in Supplementary Appendix 1.4.
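To illustrate the idea of 'ex-situ' carbon storage in harvested wood product (HWP) pools, the following sketch applies the generic first-order decay bookkeeping of the IPCC HWP approach (in Python for illustration; the half-lives are the common IPCC defaults, not necessarily the parameters used in this DSS, and the inflow values are hypothetical):

```python
import math

# Common IPCC default half-lives for HWP pools (years); assumption, not
# necessarily the values used in the DSS described in the text.
HALF_LIFE = {"sawnwood": 35.0, "panels": 25.0, "paper": 2.0}

def hwp_stock(inflow_per_step, pool, steps, dt=5.0):
    """Carbon (t C) remaining in one HWP pool after `steps` 5-year steps,
    with a constant carbon inflow per step and first-order (exponential)
    decay between steps."""
    k = math.log(2.0) / HALF_LIFE[pool]
    stock = 0.0
    for _ in range(steps):
        stock = stock * math.exp(-k * dt) + inflow_per_step
    return stock

# 50 years (ten 5-year steps) of a hypothetical 10 t C inflow per step
saw = hwp_stock(10.0, "sawnwood", steps=10)
pap = hwp_stock(10.0, "paper", steps=10)
```

The long-lived sawnwood pool retains substantially more carbon after 50 years than the short-lived paper pool, which is why allocating harvests to durable products matters for 'ex-situ' storage.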
MCDA Approach
To evaluate the effect of changing management strategies and climatic scenarios, a multi-criteria decision analysis (MCDA) approach was employed in the DSS. It is based on multi-attribute value theory (MAVT), which is particularly suitable for a small number of criteria (in our case five BES) and well-defined decision makers (in our case one decision maker per enterprise, i.e., the forest manager). MAVT aims at quantifying a partial utility value for each individual criterion and helps to identify the best performing management strategy that maximizes multiple BES simultaneously (overall utility) (Kangas et al., 2015; Uhde et al., 2015). The relationship between the simulated BES indicators and the societal demands they address was therefore quantified via value functions, which represent human judgements of the supply and benefit of BES (Ananda and Herath, 2009). In the present study, the value functions of Blattert et al. (2017) were used to convert the results of the BES indicators into utility scores between 0 and 1 (with 0 indicating the lowest and 1 indicating the best provision of an indicator). MAVT is a compensatory MCDA approach (i.e., a utility decrease of one indicator can be compensated by an increase of another indicator), making it particularly suitable for analyzing trade-off situations (Blagojević et al., 2019).
To obtain the partial utilities for each BES group and overall utility at the enterprise level for each scenario (climate and management), an additive function was applied. In this approach, decision makers can express their preferences by assigning weights (λ) to the different indicators and the BES groups (Kangas et al., 2015). The compensatory nature of the MCDA framework therefore applies to the level of individual indicators as well as to the level of BES groups. Notably, the framework is flexible and allows integrating additional indicators and BES groups, which can become important in further DSS applications.
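The additive aggregation amounts to U = Σa λa · Σi λa,i · va,i, with weights summing to 1 at each level. A minimal sketch (in Python for illustration, while the DSS itself was implemented in R; all weights and utility values are hypothetical):

```python
def overall_utility(bes_weights, indicator_weights, indicator_utilities):
    """Additive MAVT aggregation: U = sum_a lambda_a * sum_i lambda_ai * v_ai.
    All utilities v are assumed pre-scaled to [0, 1] via value functions."""
    total = 0.0
    for bes, w_a in bes_weights.items():
        partial = sum(indicator_weights[bes][i] * v
                      for i, v in indicator_utilities[bes].items())
        total += w_a * partial
    return total

# Hypothetical two-BES example (weights sum to 1 at each level)
bes_w = {"timber": 0.6, "biodiversity": 0.4}
ind_w = {"timber": {"harvest": 0.5, "stock": 0.5},
         "biodiversity": {"shannon": 1.0}}
ind_v = {"timber": {"harvest": 0.8, "stock": 0.4},
         "biodiversity": {"shannon": 0.9}}
u = overall_utility(bes_w, ind_w, ind_v)  # 0.6*0.6 + 0.4*0.9 = 0.72
```

The two-level structure makes the compensatory behavior explicit: a low indicator utility within one BES group can be offset either by another indicator in the same group or by a strongly performing BES group.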
All weights (λa, λa,i) were defined by the forest managers of each CSE using the simple multi-attribute rating technique (Kangas et al., 2015) and are shown in Table 4 (BES group weights) and Supplementary Appendix 1.5 (individual indicator weights). Further details about the stakeholder preference elicitation approach are provided in Supplementary Appendix 1.6.
Robustness of MCDA to Shifts in Weighting Preferences
Four alternative weighting scenarios were applied for the BES groups (λa) to explore the robustness of the MCDA results to shifting demands in BES provisioning (see Table 4). The weighting scenarios were (1) 'Current weight', with stakeholder-defined BES weights representing the current management priorities in the CSE, (2) 'Equal weight', a default scenario for comparability, where each BES receives the same weight, (3) 'Focus weight', an increase in the weight of the priority BES to 0.5 at each CSE (with the priority at GOT focusing on biodiversity, see Table 4), and (4) 'Carbon weight', an increase in the weight of carbon sequestration to 0.5. For 'Focus weight' and 'Carbon weight', all non-focus BES weights were adjusted to maintain their relative importance as defined in 'Current weight' by the stakeholders. All individual indicator weights within the BES groups (λa,i) were maintained as defined by the stakeholders (see Supplementary Appendix 1.5). The detailed BES weights for each scenario are provided in Table 4.
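The rescaling of the non-focus weights can be sketched as follows (in Python for illustration; the 'Current weight' values below are hypothetical, not those of Table 4):

```python
def focus_weights(current, focus_bes, focus_weight=0.5):
    """Raise one BES group weight to `focus_weight` and rescale the remaining
    groups so that they keep their relative importance and all weights
    still sum to 1."""
    rest = {k: v for k, v in current.items() if k != focus_bes}
    scale = (1.0 - focus_weight) / sum(rest.values())
    out = {k: v * scale for k, v in rest.items()}
    out[focus_bes] = focus_weight
    return out

# Hypothetical 'Current weight' scheme for the five BES groups
current = {"timber": 0.30, "biodiversity": 0.20, "recreation": 0.15,
           "protection": 0.15, "carbon": 0.20}
carbon_scenario = focus_weights(current, "carbon", 0.5)
```

After rescaling, carbon sequestration carries weight 0.5 while, for example, the timber-to-biodiversity weight ratio remains 1.5, as in the stakeholder-defined scheme.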
Analysis of BES Development and Trade-Offs
For further analyzing the development of the BES as well as their synergies and trade-offs, partial utilities of each BES group (i.e., timber, biodiversity, recreation, carbon sequestration, protection) were compared for each CSE, management strategy and climate change scenario. To quantify the direction and the strength of trade-offs and synergies between different BES, Spearman's rank correlation was used, due to its robustness against outliers. The trade-off analysis was based on the partial utilities of the respective BES (see Figure 2), including results of all management strategies under present climate conditions (results from future climate scenarios yielded the same patterns and are therefore not shown).
FIGURE 3 | Partial utilities of the BES groups for the four management strategies (no management: 'NO'; low intensity: 'LOW'; business as usual: 'BAU'; high intensity: 'HIGH') at the three case study enterprises (WAG, BUE, GOT) under present climate conditions. Bars display temporal development and colors indicate different BES. Partial utility is scaled between 0 (lowest utility) and 1 (highest utility) for comparability. Note that the absence of bars indicates a very low partial utility.
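A rank correlation near +1 then indicates a synergy and one near −1 a trade-off between two BES. A self-contained sketch of Spearman's rho (in Python for illustration, while the analyses themselves were conducted in R; the utility series are hypothetical):

```python
def _ranks(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical partial utilities across four management intensities
carbon_u = [0.9, 0.7, 0.5, 0.3]  # decreasing with intensity
timber_u = [0.2, 0.4, 0.6, 0.8]  # increasing with intensity
rho = spearman(carbon_u, timber_u)  # perfectly opposed ranks -> -1.0
```

Because only ranks enter the statistic, a single outlying utility value cannot dominate the correlation, which motivated the choice of Spearman over Pearson.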
The DSS framework, as well as all analyses and visualizations were conducted with R version 4.0.0 (R Core Team, 2020).
Synergies and Trade-Offs Between BES Under Present Climate Conditions
Under present climate conditions, changing management intensities affected the development of all BES from 2010 to 2060, with strongest effects occurring for the BES carbon sequestration and timber production (Figure 3). An increasing management intensity led to a generally higher timber utility, but decreased the utility of carbon sequestration. Utilities of biodiversity, recreation and protection were less affected by altered management intensities and remained relatively stable over the simulation timespan. Only a minor negative effect of increasing management intensity was evident for biodiversity, as well as on recreation and protection function.
When comparing partial utilities of BES with each other, a consistent positive relationship between biodiversity and protection, biodiversity and recreation as well as between recreation and protection was found across all CSEs, indicating a synergy among these ecosystem services (Figure 4 and Supplementary Appendix Figure 2.1). Furthermore, a positive relationship between carbon sequestration and biodiversity occurred (Figure 4). In contrast, negative relationships between carbon sequestration and timber, as well as between timber and protection function (mainly against landslides) were found. Moreover, biodiversity and timber utility were negatively correlated, but the degree varied among the CSEs (Supplementary Appendix Figure 2.1). Other relationships between BES were either weak or inconsistent.
Utility of Management Strategies
Under present climate conditions ('PC'), the highest overall utility for all BES (based on 'Current weights' , Table 4) at the year 2060 was found for the 'BAU' management strategy for the sites BUE and GOT (Figure 5). For site WAG, the 'LOW' strategy resulted in the highest overall utility. The lowest overall utility occurred under the 'NO' management strategy for all CSEs.
Effects of Climate Change on BES Provisioning
Under future climate change scenarios, relatively few changes occurred until 2060 at the level of overall utility (Figure 5, based on 'Current weight' of BES, see Table 4). Hence, no marked changes in overall and partial utilities were found for the three CSEs with increasing climate change intensity (i.e., between the 'wet' CC7 and the 'dry' CC1 scenario). Altogether, highest overall utilities were achieved by the same management strategies for the three climate change scenarios as under present climate conditions. Notably, these patterns remained relatively stable over time, with exception of the CSE WAG, where the BAU strategy performed best for a shorter time horizon (i.e., 2030, see Supplementary Appendix Figure 2.2).
At the level of individual indicators, the effect of climate change was more apparent, leading particularly to changes of basal area (up to −6% for WAG, −10% for BUE and +6% for GOT, Supplementary Appendix Figure 2.4) and productivity (up to −7% for WAG, −19% for BUE and +15% for GOT, Supplementary Appendix Figure 2.4). Furthermore, slight changes occurred for deadwood and visual permeation under the 'dry' CC scenario (more mortality at BUE, less mortality at GOT; Supplementary Appendix Figure 2.3.3).
Robustness of MCDA to Shifting Weights
Shifting the BES priorities in the MCDA (i.e., BES group weights, see weighting scenarios in Table 4) resulted in changes in overall utility until 2060 (Figure 6). Nevertheless, for the 'Equal weight' and 'Focus weight' scenarios, the best-performing management strategy remained the same as under 'Current weight', i.e., 'BAU' for BUE and GOT as well as 'LOW' for WAG (Figure 6), with the exception of 'Focus weight' for WAG, where 'BAU' performed best. For the 'Carbon weight' scenario, substantial changes occurred for all CSEs, showing an increasing overall utility with decreasing management intensity.
DISCUSSION
Our study demonstrated the capacity of the new DSS prototype to provide enterprise-specific scenario estimates under changing climate and alternative management strategies, enabling stakeholders to take scientifically well-founded decisions for sustainable forest management. In particular, the new DSS offers: (1) a stronger link to inventory data and forest growth conditions in Switzerland by incorporating the new forest model SwissStandSim (Zell et al., 2020), the stand initialization approach by Mey et al. (2021) and the allometric functions for tree biomass and carbon content based on the latest national forest inventory (Herold et al., 2019), (2) a climate-sensitive framework, which is of increasing importance for strategic planning (Mina et al., 2017), (3) a widened portfolio of BES indicators, integrating also the socially important recreation function (e.g., Hegetschweiler et al., 2020), as well as an updated framework for harvested wood products and substitution effects for carbon sequestration (FOEN, 2020), and (4) a revised MCDA framework allowing stakeholders to perform trade-off analyses under different management strategies and weighting preferences. Altogether, the new DSS provides a flexible and dynamic tool for strategic (i.e., long-term) planning of sustainable forest management under changing climatic conditions and political/strategic goals. In the following sections, we discuss the DSS results in view of our research objectives and conclude with an overall discussion of limitations and potentials of the new DSS prototype.
FIGURE 5 | Overall utility, shown as the sum of weighted partial utilities (based on 'Current weight', see Table 4) for all biodiversity and ecosystem service (BES) indicators for four management strategies at the three case study enterprises (WAG, BUE, GOT) under present climate (PC) as well as three future climate change scenarios (CC1: dry, CC7: wet, CC22: moderate) for the year 2060.
Synergies and Trade-Offs Between BES Under Present Climate Conditions
We found on the one side synergies among BES for carbon sequestration and biodiversity as well as between biodiversity, recreation and protection function, and on the other side trade-offs of timber production with carbon sequestration, biodiversity and protection function for the considered time horizon until 2060. Previous studies found similar synergies and trade-offs in European mountain forests, for instance Mina et al. (2017) and Irauschek et al. (2017), reporting a positive relationship between increasing carbon sequestration and biodiversity indicators (e.g., for the number of habitat trees and deadwood amount), as well as a negative relationship of carbon sequestration with timber use. Similarly, a larger-scale study by Gutsch et al. (2018) found a synergy between carbon sequestration and biodiversity, and a trade-off between timber use and biodiversity conservation in forests of Central Europe. We furthermore found a synergy between biodiversity and recreation, which varied between the CSEs. These case-study specific effects are related to different tree species settings: while the lower-elevation forest enterprises (WAG, BUE) have a relatively high species diversity (mixed broadleaved stands), the higher-elevation enterprise GOT is characterized by a lower species diversity (dominated by Picea abies). Furthermore, the weighting for biodiversity at GOT focused more on the number of habitat trees and deadwood availability, which was overall high and thus led to a high biodiversity utility. These aspects explained the relatively little variation in biodiversity utility at GOT, and hence the absence of a synergy between biodiversity and recreation at this CSE. At both other CSEs, the positive relationship between those services was due to the beneficial effect of a higher natural deadwood amount as well as a high species and structural diversity for both biodiversity (e.g., Müller and Bütler, 2010; Haeler et al., 2021) and recreation (Edwards et al., 2012). It has to be noted, however, that in the case of biodiversity, including a wider range of indicators with a focus on light-demanding species with habitats in more open forests (e.g., herbaceous and insect species, Hilmers et al., 2018) could significantly change the relationship or even turn a trade-off into a synergy. The reported trade-offs should therefore be considered with care, keeping the case-study specific nature of the results in mind.
FIGURE 6 | Overall utility for four alternative biodiversity and ecosystem service (BES) weighting preferences (Table 4), for four management strategies at three case study enterprises (WAG, BUE, GOT) under present climate (year 2060). For the underlying weighting scheme, see Table 4.
A noticeable trade-off between utility of timber use and protection function occurred in our CSEs, although to a relatively small degree. A similar negative effect was found by Mina et al. (2017) in a case study in the western Alps, where protection against rockfall and avalanches are of primary importance. In contrast to their study, protection function in our lowland and pre-Alpine CSEs mainly focused on protection from landslides and erosion. Since the landslide indicator is directly linked to canopy cover (Blattert et al., 2017;Irauschek et al., 2017), an increasing timber use led to a slight decrease in average canopy cover, which explained the negative relationship. Studies focusing on the longer-term relationship between forest development and protection against rockfall or avalanches however emphasize the importance of natural regeneration fostered by close-to-nature management in Swiss mountain forests (Frehner et al., 2005;Thrippleton et al., 2020).
Altogether, our DSS results are in good accordance with other studies investigating BES synergies and trade-offs in Central European and Alpine regions. The case-study specific results and utility-based framework of the DSS can help the forest managers to evaluate particularly the strength of trade-offs occurring in their enterprise under their specifically defined BES weightings.
Carbon Sequestration
In recent years, the role of forests in carbon sequestration and their contribution to complying with the Kyoto Protocol targets have received considerable political interest (Rogiers et al., 2015; Nabuurs et al., 2017). The Swiss forest strategy 2020 defined the goal of mobilizing the sustainable harvest potentials and promoted the use of wood for construction purposes or for substituting non-wood products and fossil energy carriers, accounting for its greenhouse gas mitigation potential (Taverna et al., 2007; FOEN, 2013).
Over the considered time horizon of 50 years, our results implied an increasing utility of carbon sequestration by reducing management intensity, i.e., by storing the carbon 'in-situ' in the above- and belowground biomass of the forest ecosystem. Previous studies considering only 'in-situ' carbon sequestration (e.g., Mina et al., 2017; Gutsch et al., 2018) or 'in-situ' as well as 'ex-situ' carbon sequestration (Seidl et al., 2007) reported similar results in other regions of Central Europe. However, larger-scale assessments (with time spans ≥ 100 years) found that storing carbon in wood products and substitution effects may in fact be the preferable strategy in the long term (e.g., Werner et al., 2010; Weiss et al., 2020). A lower management intensity may lead to a large buildup of carbon in the standing stock in the short to mid-term (e.g., a few decades), which however becomes progressively unstable and prone to turning into a large carbon source when stands disintegrate (e.g., due to disturbance impacts, Seidl et al., 2017). Furthermore, the substitution effect accumulates and becomes increasingly important over longer time horizons (Werner et al., 2010). However, it is also important for wood products to consider efficient resource utilization and long life-spans to improve their greenhouse gas footprint (Weiss et al., 2020). In view of the increasing disturbance frequency and intensity under future climate conditions as well as the rising importance of wood as a sustainable resource for the bio-economy (Nabuurs et al., 2017), a sustainable use of the harvest potential accounting for trade-offs with other BES appears to be a prudent long-term strategy.
Our study demonstrated how the DSS can be used to evaluate alternative management strategies for carbon sequestration from the perspective of a forest enterprise. Further extensions are however necessary for a more comprehensive evaluation of the soil carbon pool, which is currently not considered, and which on average stores an equal amount of carbon as the living biomass in Switzerland. Although forest management effects on changes in the soil carbon pools are typically rather low compared to aboveground biomass (Jandl et al., 2007), some studies emphasize the importance of considering soil carbon pools in response to management (e.g., Pukkala, 2014). The DSS could be extended in this respect by coupling it to a dynamic soil carbon model, such as Yasso07 (e.g., Didion et al., 2014) or other soil biochemical models, which could also allow addressing the important topic of nutrient removal by forest management (Wilpert et al., 2018).
To upscale the contribution of single forest enterprises to Switzerland and to evaluate the Swiss potential of forest carbon sequestration utility, system boundaries have to be adapted. A more sophisticated approach for the initialization of harvested wood product pool would for example be required (see e.g., Thürig and Kaufmann, 2010;Blattert et al., 2020). For the single forest enterprise, however, the current approach represents an important step toward considering the entire value chain from the forest ecosystem to wood products, energy and substitution effects (Nabuurs et al., 2017). Moreover, our DSS provides a helpful tool to reach the goals of the Swiss Forest Strategy (FOEN, 2013), i.e., assessing the sustainability of different management alternatives and accounting for their carbon sequestration potentials.
Utility of Management Strategies and Robustness to Shifting BES Priorities
The results of our study showed that the BAU management strategy performed best for the lowland site BUE and for the pre-Alpine site GOT, while the strategy 'LOW' was best for WAG. Notably, these results were relatively robust to shifts in the BES weighting priorities, with the exception of the 'Carbon focus' preference. Studies using a similar MCDA approach to evaluate management strategies in other regions of Europe found partly larger differences in overall utilities between alternative management strategies (e.g., Fürstenau et al., 2007) and in response to shifting weights (e.g., Langner et al., 2017). In comparison to most other European countries, forest conditions and silviculture in Switzerland are special due to the large proportion of mountainous forests and their protection function (FOEN, 2015a) as well as the long history of sustainable forestry and focus on multifunctionality (Forest laws of 1874, 1902 and 1965) via close-to-nature management (Ott et al., 1997). This focus on multifunctional management is also reflected in the close similarity of the 'Current weight' and the 'Equal weight' scenarios (giving all BES equal importance) in our analysis (see Table 4).
Another aspect emphasized by our results is the importance of management for BES provisioning. Although less evident at the level of overall utility, this was important for the social service of recreation, which we assessed in terms of visual attractiveness (Edwards et al., 2012). While a high degree of naturalness in forests (high species and size diversity, natural deadwood) is generally perceived as positive for visual attractiveness, a good visual permeation into the forest is frequently noted as very important as well (e.g., Gundersen and Frivold, 2008). In our study, an intermediate management intensity ('BAU') was found most suitable to prevent stands from becoming too dense (negative for visual permeation), which caused a decrease in recreation value, e.g., for the 'NO' management at WAG over time. On the other hand, an excessive increase in management intensity can again impact the recreation function negatively, due to large amounts of harvest residues and unnatural canopy openings (e.g., Kangas and Niemelainen, 1996). Our study therefore underlines the importance of a continuous management of forests in a sustainable, close-to-nature way, which best promoted multifunctionality in our case study enterprises.
Climate Change Effects on BES Provisioning
We found generally small effects of climate change scenarios on the simulated partial and overall BES utilities. This is at first glance surprising, given that the scenarios included a significantly drier 'high impact' (RCP 8.5) scenario (CC1, Brunner et al., 2019), which could be expected to cause substantial changes in forest structure and composition (Bugmann et al., 2014;IPCC, 2014b). However, it is important to differentiate between the level of individual indicators and the more aggregated level of partial and overall utility when considering climate change impacts.
At the level of individual BES indicators, climate change caused an increase in mortality at the low-elevation enterprise BUE for the 'dry' (CC1) scenario, which increased deadwood availability and decreased the number of habitat trees. In contrast, the higher-elevation enterprise GOT responded with higher growth rates and less mortality to the CC1 scenario. These elevation-specific results are in line with other studies on climate change impacts on forests in Switzerland (Rigling et al., 2013; Etzold et al., 2019; Huber et al., 2021) and in other European countries (e.g., Mina et al., 2017). Notably, the elevational patterns of climate change in this study are also related to the prevalent tree species, which show a distinctively different response to changes in climate (Zell, 2016). In the longer term (i.e., time scales exceeding the focus of the present study), it is furthermore possible that enterprises like the higher-elevation, currently spruce-dominated GOT experience a profound compositional shift from coniferous-dominated to broadleaved-dominated stands, with considerable impacts on forest structure and ecosystem service provisioning (see also Albrich et al., 2020). The aspect of species shifts therefore represents an additional uncertainty which warrants further investigation in long-term applications of the DSS under climate change.
At the level of overall utility, the number of individual indicators and the hierarchical structure of the MCDA approach can however have a buffering effect on the overall results (see also Fürstenau et al., 2007). Our study included a wide set of indicators, many of which were less climate-sensitive at the considered timescale (e.g., tree species diversity), leading to a diminished climate change impact signal. Further, the MCDA approach is compensatory, which means that indicators with opposite developmental trends can cancel each other out (see e.g., Blagojević et al., 2019). This was for instance the case for the CC1 scenario on biodiversity in BUE, where climate change induced mortality led to a decrease of alive habitat trees (e.g., negative for species inhabiting the crown area, Bütler et al., 2020) and an increase in natural deadwood (e.g., positive for saproxylic species, Haeler et al., 2021). Further reasons for the relatively small climate change impacts were: (1) a relatively short time horizon (50 years), whereas most severe climate change impacts are expected by the end of the 21st century for Switzerland (CH2018, 2018); (2) the climate data used in the forest modeling, which was based on 5 year averages (Zell et al., 2020) and does not represent climatic extreme events; and (3) the problem of a lack of observation data regarding warmer and drier climatic conditions in the calibration range of the forest model, which can lead to underestimated climate change impacts under extreme scenarios (Adams et al., 2013). A similarly small effect of climate change was found in a recent DSS study by Lundholm et al. (2020), which was also based on an empirical growth and yield model, while studies using process-based dynamic models typically report more severe climate change impacts on BES provisioning (e.g., Fürstenau et al., 2007;Irauschek et al., 2017;Mina et al., 2017). 
Nevertheless, the simulated basal area changes found in our study are in good agreement with a recent study on climate change impacts on Swiss forests by Huber et al. (2021) using a process-based model and reporting basal area changes of a similar magnitude for 2070. Furthermore, it has to be noted, that despite recent progress, large uncertainties still surround the modeling of tree mortality under climate change (Adams et al., 2013;Hartmann et al., 2018). This is especially the case, when different disturbance impacts are considered, e.g., by windthrows, wildfires and bark beetles (e.g., Temperli et al., 2013;Thom et al., 2017). The aspect of vulnerability of stands to disturbances is therefore another key point for further developments of the DSS, and approaches like the windthrow and bark beetle vulnerability index by Temperli et al. (2020) could be integrated into the MCDA framework.
Importance, Limitations and Future Potentials of the New DSS

DSS are becoming increasingly essential tools for forest managers to cope with the complexity of the planning situation (Vacik and Lexer, 2014). In recent years, a large number of DSS for forest management worldwide have been reviewed (e.g., Borges et al., 2014). According to a review by Segura et al. (2014), key issues to improve the practical relevance of DSS are to strengthen the link to empirical data underlying the system and to include multiple criteria for a comprehensive evaluation of forest BES. Our DSS fulfills these claims by providing a strong link to data from forest inventories at the national and enterprise level (Mey et al., 2021) and forest growth in Switzerland (Zell et al., 2020), as well as by integrating a wide portfolio of BES indicators and value functions for Switzerland (Blattert et al., 2017), thereby providing a tool for a holistic evaluation of sustainable forest management. With its focus on strategic planning at the enterprise level, it complements other DSS in Switzerland (Heinimann et al., 2014) focusing more on operational planning (e.g., WIS2, Rosset et al., 2014) and model applications at the local to national scale (e.g., Huber et al., 2021). By integrating a utility-based MCDA framework, our DSS provides condensed results to the forest manager, thereby balancing the problem of increasing complexity and the need for simplicity in communication (Vacik and Lexer, 2014).
A limitation of the current prototype version of the DSS is the software implementation, which does not yet feature a simple user-friendly design and can require adaptations for applications to different regions of Switzerland. The main barriers to a simple applicability of the system are in particular: (1) the differences in input data which partly exist between the cantons, especially in respect to local forest inventory datasets; (2) the rapidly increasing computational time with increasing enterprise size and increasing scenario assessments, which currently prevents a real-time application; (3) the incorporation of additional BES indicators and value functions, which requires programming experience. A further development of the DSS prototype is therefore recommended. Improvements comprise an automatic handling of input data (e.g., inventory datasets), a more efficient software framework for faster computation, and an extension of the indicator framework. Additional indicators could for instance address further aspects of timber production (e.g., sustainability indicators for forest operations, Schweier et al., 2019), forest infrastructure (Bont et al., 2019), biodiversity (e.g., species associated with different successional stages, Hilmers et al., 2018), recreation (e.g., Hegetschweiler et al., 2020) as well as soil- and water-related indicators (e.g., groundwater recharge, Schwaiger et al., 2018). Further aspects of key importance for forest managers are information about the uncertainty of the DSS results (Knoke et al., 2016), as well as the monetarization of BES (Gret-Regamey et al., 2017). Fast progress in other scientific fields also offers ample opportunities for DSS (Vacik and Lexer, 2014), such as remote sensing, which provides high-resolution data at the level of forest enterprises (e.g., Bont et al., 2020).
The procedure outlined in this study is well suited for evaluating different silvicultural strategies for an entire enterprise. If different management strategies are to be used at the same time, the question arises at which exact locations the respective strategies are best implemented. To solve such challenging combinatorial problems, optimization models can be used, as described for example by Knoke et al. (2016) for robust optimization or Pohjanmies et al. (2019) for reconciling economic and conservation objectives. For communication with stakeholders, visualization systems at the stand and enterprise scale are furthermore a highly promising development (e.g., Pretzsch et al., 2008). Due to the increasing level of immersivity of visualization systems, results of DSS can become more intuitive for forest managers (Fabrika et al., 2019), thus helping to shape their vision for strategic planning.
CONCLUSION
In this study, we presented a new multi-criteria DSS prototype for strategic (long-term) planning at the forest enterprise level for Switzerland, which offers various possibilities to explore the effect of alternative management strategies, climate change scenarios and shifts in political focus on the provisioning of multiple BES. The DSS provides an MCDA evaluation framework, which enables forest managers to assess the consequences of different management strategies and to evaluate synergies and trade-offs among management objectives. The DSS furthermore provides the scientific foundation for transparent decision making and communication, which is of increasing relevance for forest stakeholder interactions. Due to its strong empirical foundation and flexible architecture, the DSS can be applied in lowland as well as mountain forest enterprises. Consequently, next steps will be the application of the DSS to further enterprises in Switzerland, as well as extending the indicator and MCDA framework and improving its user-friendliness for forest planners in Switzerland.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
TT, JS, and ET: conceptualization of the study. TT, RM, JZ, and LB: data and DSS preparation for the CSEs. TT: simulation and data analysis and writing of the first manuscript draft. TT, CB, and LB: MCDA analysis. ET: funding acquisition. JS and ET: project administration. All authors contributed substantially to the writing of the article and approved the submitted version.
FUNDING
This study was conducted in the framework of the NRP 73 'Sustainable Economy' project SessFor funded by the Swiss National Science Foundation (SNF), 407340_172372.
ACKNOWLEDGMENTS
We thank Laura Ramstein, Julian Muhmenthaler, and Anton Bürgi from the WSL group Sustainable forestry, as well as Golo Stadelmann, Christian Temperli and Markus Didion from the WSL group resource analysis for valuable discussions about the development of the DSS. Furthermore, Raphaela Tinner and Sabrina Maurer (Canton Zug, forest management GOT) as well as Denise Lüthy, Anja Bader (Canton Zürich), Thomas Kuhn (forest manager BUE), and Anton Bürgi (forest manager WAG) are gratefully acknowledged for providing the forest inventory data, defining the management strategies and BES weights for the simulations and providing helpful feedback to the results. We would like to thank Marjo Kunnala and Nele Rogiers (FOEN) as well as Frank Werner for their help with the harmonization of the carbon sequestration approach with the framework applied by the Swiss Federal Office for the Environment. We also thank two reviewers for their helpful comments on an earlier version of the manuscript.
"year": 2021,
"sha1": "362fa45f2db720e4ff569082f3f38bd22115bafb",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/ffgc.2021.693020/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "362fa45f2db720e4ff569082f3f38bd22115bafb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Designing a new fast solution to control isolation rooms in hospitals depending on artificial intelligence decision
Decreasing the spread of COVID-19 infection among patients in physical isolation hospitals during the coronavirus pandemic was the main aim of all governments in the world. Hospitals were required to increase the number of isolation places to prevent the spread of infection. Quick solutions must be explored to deal with influxes of infected COVID-19 patients. The presented paper studies converting natural rooms in hospitals into isolation sections and constructing new isolation cabinets from prefabricated components as alternative, quick solutions. Artificial Intelligence (AI) helps in selecting and deciding which type of solution will be used. A Multi-Layer Perceptron Neural Network (MLPNN) model, a type of artificial intelligence technique, was designed and implemented with time, cost, available facilities, area, and spaces as input parameters. The MLPNN result was to select the prefabricated approach, since it saves 43% of the time while the cost is roughly the same for the two approaches. Forty-five hospitals have implemented the prefabricated solution, which gave excellent results in a short period of time at reduced costs based on the available facilities and spaces. Prefabricated solutions provide a shorter time and lower cost by 43% and 78% in average values, respectively, as compared to retrofitting existing natural ventilation rooms.
Introduction
The existing COVID-19 crisis in the world is like the severe acute respiratory syndrome (SARS) epidemic of 2003. This newly identified problem has demonstrated the requirement for institutional and hospital preparedness. This preparedness should determine healthcare facilities where patient care can be provided with an appropriate standard of biosafety for other patients, healthcare workers, and the whole community [1,2]. Because COVID-19 can be transferred from person to person, cause life-threatening illness, and may introduce severe hazards in health care settings and the community, it requires specific control measures and can be categorized as a highly infectious disease (HID) [3].
Keeping patients in an adequate hospital environment is a critical issue to limit disease spread in the hospital and/or community. Hospitals should include units with clinical facilities specifically designed to minimize the risk of nosocomial spread. Some countries have proposed a particular type of negative pressure plastic isolator containing one or two beds [4]. Others place patients in negative isolation rooms that lack facilities for highly infectious diseases [5]. Also, during the severe acute respiratory syndrome outbreak, temporary isolation wards were created [6,7]. The number of critical care beds, including surgical and specialty unit beds, should be at least doubled to face the increasing number of patients of the COVID-19 pandemic. Hospitals should manage all provided services, beginning with triaging patients and progressing to highly critical care services such as the ICU and/or OT. Best-case estimates suggest that the COVID-19 situation will stress bed capacity, hospital equipment, and healthcare providers. Some countries have decided on quarantine, which refers to the separation of non-infected individuals who have been exposed to COVID-19 and therefore have the potential to become ill, whereas isolation refers to the separation of individuals who are suspected and/or confirmed of having COVID-19. All suspect cases should be observed in isolation zones in hospitals with specially designated facilities. People testing positive for COVID-19 will remain isolated until their second samples test negative, at which point they can be discharged.
Transmission of airborne infections has been implicated in nosocomial outbreaks [8-13]. The high attack rates during norovirus outbreaks [14-19] are thought to be due to dispersion via aerosols. About 10-20% of all nosocomial infections are spread by this route [20], which equates to a high cost. Recently, many methods have been developed by researchers to improve mechanical ventilation [21], use personal protective equipment [22], and use Ultraviolet Germicidal Irradiation (UVGI) devices both within rooms [23,24] and in air-conditioning ducts [25]. Using UVGI lamps has reduced the levels of infectious material present in the air. Infectious diseases such as influenza viruses and norovirus are problematic in institutions such as hospitals and nursing homes and occur as distinct outbreaks over a short time scale. For example, the incubation period for influenza is 1-3 days, after which patients may be infectious for 4-6 days [26].
Based on what has been observed with SARS, patients have been divided into 15% who develop pneumonia and 5% who require ventilator management. So, to reduce the risk of spreading disease, hospital rooms should have adequate technical facilities. Hence, dedicated intensive care beds should be identified for cases progressing to multi-organ failure. Critical care facilities for dialysis, salvage therapy [Extra Corporeal Membrane Oxygenator (ECMO)], and respiratory, renal, and multi-organ failure should be provided.
One way to do this is with a negative pressure room, in which a lower air pressure allows outside air into the room; any air that flows out of the room must pass through a filter. By contrast, a positive pressure room maintains a higher pressure inside the treated area than outside it. Clean, filtered air is pumped in; if there is a leak, the air is forced out of the room. Positive pressure rooms are usually used for patients with compromised immune systems, while negative pressure rooms are common in infection control, to ensure infectious germs do not spread via the heating, ventilation, and air conditioning (HVAC) system. In this paper, the main objective is to use the scientific basis and academic background, together with hospital top management, to select the optimum solution from the available resources. The main contribution is the use of AI in deciding which type of solution will be used. Adopting these intelligent solutions facilitates hospitalization and decision-making.
Data and methods
Owing to the uncertainty of the spread of the new coronavirus, it passes through close contact with infected people via the viral droplets expelled when they cough or sneeze, or through the air, like airborne diseases such as tuberculosis, measles, and chickenpox. And because most of the dedicated hospitals (infectious disease hospitals and/or fever hospitals) are not prepared for such circumstances and epidemic situations, a fast-implemented solution together with a general strategy and plan should be executed in a very short time.
In this paper, new fast solutions and scenarios were investigated to maintain isolated zones while taking into account the available location, space, and facilities and saving time. The required locations are the infectious and/or fever hospitals, which are the first line of protection and have medical staff trained for such cases and in dealing with infected patients. Based on the geographic distribution of hospitals, the city's population, and the spread of the epidemic, the number of isolated zones can be estimated, together with the training of new staff. If there are no such hospitals in the containment zone, the closest tertiary care facility among government, private, club, or university hospitals should be found. For space and facilities, a short survey/questionnaire should be filled out by the hospitals, as indicated in Table 1.
This questionnaire should be distributed to infectious disease and fever hospitals. After the questionnaire was filled out by forty-five fever hospitals, the results indicated in Table 2 were collected.
Based on the various allocations of patient isolation, there are the following techniques:

1. Natural ventilation or isolation facilities (rooms) that have a large window on the wall opposite the door, allowing a natural unidirectional flow of air and air changes. The natural ventilation principle is to allow the flow of outdoor air by natural forces such as wind from one opening to another to achieve the desired air changes per hour.
2. Individual isolation rooms with good ventilation.
3. Negative pressure rooms with 12 or more air changes per hour (ACH).
Positive cases (COVID-19 cases) should be isolated in a ward with good ventilation or a negative-pressure area. Suspect cases should also be kept in another separate ward. However, under no circumstances should these cases be mixed up. The isolation ward should have a separate toilet with proper cleaning supplies. Because the isolation facility aims to control the airflow into and out of the room, reducing airborne infectious particles to a level that prevents cross-infection of other people, the third type of isolation is the most effective solution for dealing with infected and airborne cases.
According to a quick survey of the hospitals studied, there are no isolation rooms with anti-rooms, the number of isolated areas (without anti-rooms) is small in comparison to total hospital capacities (5%), and they are kept in separate buildings (10% of isolated rooms). There are no sealing rooms (capsules), no isolation rooms with air exhausted directly to the outside, and no controlled isolation rooms (+ve/-ve, i.e., invertible as required via automatic/manual dampers). Less than 5% of the isolation rooms include medical gases, although 15% of rooms can provide intensive care capability. A chart in Fig. 1 shows the percentages for the studied items. Although all hospitals have administrative control (controlling the entrance of patients and their relatives manually via security staff and a CCTV system), they do not have medical waste management or restricted access [27].

Table 1 (questionnaire items 4-15): negative isolation room without anti-room; negative isolation room with anti-room; isolation rooms in the same building; isolation rooms in separate buildings; air exhausted directly to the outside, without HEPA filtration; having a sealing room; controlled isolation rooms (+ve/-ve); restricted access; having medical gases; ability to provide information on intensive care capability; waste management and treatment; administrative control.

Table 2. Filled questionnaire results for the fever hospitals.
Experimental setup
Due to COVID-19, patients should be kept in single rooms, and no such rooms were available for this crisis, so increasing the number of isolation rooms in a very short time is a critical issue. To overcome this situation, it is required to proceed in multiple directions:
1. Convert all natural ventilation rooms (if found) into negative isolation rooms by sealing the windows.
2. Remove all non-essential furniture and ensure the remaining furniture is easy to clean.
3. Every room may have its own standalone air-conditioning (not part of the central air-conditioning, just an inlet for incoming air from outside).
4. The room may have its own medical gas system (O2, MA, and VAC if applicable) to provide intensive care capability with a patient monitor connection.
5. Video conference calls may be linked to nurse calls to monitor and control the patients.
6. Negative pressure can also be created by exhaust fans driving the air out of the room, sized according to its air volume (5 Pascal minimum pressure difference).
7. The isolation ward may be accessed through dedicated stairs/lift.
8. The isolated ward should have a separate entry/exit.
9. The isolated ward should be in a segregated area not frequently accessed by outsiders.
10. A higher rate of air exchange per hour, typically 12 air changes per hour.
11. A HEPA filter for the exhaust air.
12. An anti-room can be added where space is available.
13. A mobile UV system should be used before and after patients occupy the rooms.
14. UV lamps may be added to AC ducts to sterilize the inlet air.
15. UV lamps may be added to AC ducts to sterilize the outlet air if it will be recirculated or does not pass through a HEPA filter (the dose per unit area and time should be considered).
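The exhaust-fan sizing implied by the requirements above (12 ACH, at least a 5 Pa negative pressure difference) can be sketched as follows. This is an illustrative calculation, not the sizing method used in the paper: the leakage flow through door and window gaps is estimated with the standard orifice equation, and the leakage area and discharge coefficient are assumed values.

```python
import math

def required_exhaust_m3h(room_volume_m3, ach=12, leak_area_m2=0.01,
                         dp_pa=5.0, air_density=1.2, discharge_coeff=0.6):
    """Exhaust airflow needed for the target air-change rate plus the extra
    flow drawn through door/window leaks to hold the room at -dp_pa pascals.
    Leakage flow uses the orifice equation Q = Cd * A * sqrt(2 * dP / rho)."""
    supply_m3h = ach * room_volume_m3                   # fresh-air changes per hour
    leak_m3s = discharge_coeff * leak_area_m2 * math.sqrt(2 * dp_pa / air_density)
    return supply_m3h + leak_m3s * 3600                 # exhaust must exceed supply

# Illustrative 4 m x 5 m x 3 m patient room (60 m3):
q = required_exhaust_m3h(60)
print(round(q))  # ~782 m3/h
```

The fan must exhaust more air than is supplied; the surplus is what is drawn in through the leakage paths and maintains the negative pressure relative to the corridor.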
Based on the available data, one of the following two directions should be selected:
1. Converting the natural isolation rooms into isolation areas.
2. Constructing prefabricated units to be used as an isolation area.
To improve the analytical and predictive capacities of decision support systems (DSS), artificial neural networks (ANN) are being used more and more frequently. This is especially true for model-based and data-driven approaches. In complicated situations demanding quick decisions, ANN-based simulation models linked with DSS greatly improve decision-making [28]. Due to their inherent algorithmic learning, fault tolerance, rapid prototyping, and parallel processing capabilities, among other features, ANNs are among the intelligent systems driving the performance of DSSs. ANN has demonstrated its ability to be an effective tool in DSS in manufacturing, especially for classification and pattern recognition problems [29,30]. ANN models can present accurate solutions for poorly understood structured and unstructured decision problems [29], and artificial intelligence is projected to remain a permanent research field featuring applications such as intelligent DSS [31].
In the presented paper, a neural network model, as a type of artificial intelligence, has been studied before implementation to guide managers in making the decision. It is implemented to find the optimal strategic situation without requiring expert knowledge. A supervised learning rule using backpropagation is used to train the neural network. Many parameters can be considered in making the decision: minimum damage to the current building; warranty of working after implementation; geographic location; available spaces; connection to service tie-ins; available facilities; required bed capacity; hospital equipment; city population; implementation time; and cost. In the Multi-Layer Perceptron Neural Network (MLPNN), only four parameters have been considered, which are the most important parameters after the filtration step (parameters with a strong effect after evaluation). The first parameter is time, the most critical item, measured in days up to a maximum of one month to have a complete solution. The second parameter is cost, which should be considered even though it can be covered within a crisis to maintain respectable healthcare services. The third parameter is the facilities, comprising medical gases (O2, Air, Vac), electricity source (normal, UPS), emergency lighting, air conditioning with negative pressure, and total exhaust; it is evaluated as an absolute number based on the facilities present, within the range 0-100 in steps of 10. The fourth parameter is the area/location and available space, which must satisfy the minimum space requirements of the regulations, either in the hospital building or externally (available spaces beside the hospital). This parameter has been assigned classes A (0-100 m2), B (100-200 m2), C (200-300 m2), and D (300-400 m2).
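A minimal sketch of how the four inputs might be encoded numerically is given below. The mapping of the area classes to integers and the normalization scales are assumptions for illustration; the paper does not specify its exact encoding.

```python
AREA_CLASS = {"A": 1, "B": 2, "C": 3, "D": 4}   # 0-100, 100-200, 200-300, 300-400 m2

def encode_sample(time_days, cost_k_usd, facility_score, area_class):
    """Turn one hospital record into a numeric feature vector for the MLPNN."""
    if not (0 <= facility_score <= 100 and facility_score % 10 == 0):
        raise ValueError("facility score is 0-100 in steps of 10")
    if not (1 <= time_days <= 30):
        raise ValueError("time is measured in days, up to one month")
    return [
        time_days / 30.0,             # scale time to 0..1
        cost_k_usd / 100.0,           # assumed cost scale, for illustration only
        facility_score / 100.0,       # facility score is already on a 0..100 scale
        AREA_CLASS[area_class] / 4.0  # ordinal encoding of the area category
    ]

print(encode_sample(21, 97.2, 60, "C"))
```

Such a vector, one per surveyed hospital, would then form a row of the training data fed to the network.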
The inputs fed to the model were converted into numbers reflecting the actual values of each parameter. Fig. 2 shows the flowchart of the presented study. Although many machine learning techniques based on multilayer extreme/probabilistic extreme learning machines can be used [32,33], MLPNNs are used in this study.
Multi-Layer Perceptron neural networks (MLPNNs)
MLPNN is widely used by medical researchers to classify brain signals due to its ability to learn and generalize from a small training set, its quick operation, and its easy implementation [34,35]. Although MLP is considered one of the deep learning techniques, it may sometimes suffer from vanishing or exploding gradients, which does not happen in the proposed study. As a directed graph, an MLPNN is made up of three layers of nodes, each of which is connected to the next layer. Artificial neurons, or nodes, are the main processing elements in an MLPNN. Each neuron j in the hidden layer multiplies its input signals by the strengths of the connection weights, adds them, and computes its output as a function of the sum, as shown in equation (1):

y_j = f( Σ_i w_ji x_i )    (1)
where f is the activation function, which can be a radial basis function, sigmoid, signum, or hyperbolic tangent. This function is used to convert the weighted sum of all signals affecting a node. Some approaches have been used to evaluate the extracted errors. The first is the sum of squared differences between the desired and actual values of the output neurons, E, defined in equation (2):

E = Σ_j (y_dj - y_j)^2    (2)

where y_dj is the desired value of output neuron j and y_j is the actual output of the neuron. Each weight w_ji is adjusted to reduce E as rapidly as possible, based on the training algorithm [35,36]. Another approach is the loss function (cross-entropy), which is used in this study to evaluate how well the model fits the data distribution. Using cross-entropy, the error (or difference) is measured as defined in equation (3):

loss = -[ y log(P) + (1 - y) log(1 - P) ]    (3)
where P is the predicted probability and y is the indicator (0 or 1 in the case of binary classes). As indicated in Fig. 3, the neural network consists of an input layer having four inputs (Time, Cost, Facilities, Space), a hidden layer, and an output layer (two states) to make the decision: either prefab or converting a natural room into an isolation room. The multi-layer perceptron neural network, as shown in Fig. 3, is one of the most widely used neural network models. MLP has been selected because it can learn linear and non-linear models and deal with online models, and it makes no assumptions regarding probabilistic information. As shown in Table 3, the available data and parameters of the studied hospitals indicate the required implementation time for every site/hospital, together with the cost dedicated to this implementation, the calculated score of the available facilities, and the area class used. The neural network was used to classify the recorded and estimated data into two target categories: converting the natural isolation rooms into isolation areas or constructing prefabricated units to be used as isolation areas. The activation function in the hidden layer was the sigmoid function. The samples were randomly divided into three subsets: 70% for training, 15% for validation, and 15% for testing. Training samples (31 samples) were used to train the network, which is adjusted according to its error. Validation samples (7 samples) were used to assess network generalization and to halt training if generalization stopped improving, based on the number of iterations and performance indicators; validation is only used during training.

Fig. 5. Conversion of natural rooms into sealed negative isolation rooms, prepared to be ICUs with a standalone HVAC system.
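The sigmoid neuron output, the sum-of-squared-errors measure, and the binary cross-entropy loss named above can be sketched in plain Python (the weight values shown are arbitrary illustrations, not the trained weights of the study):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def neuron_output(inputs, weights):
    """Eq. (1): y_j = f(sum_i w_ji * x_i), with a sigmoid activation f."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

def sse(desired, actual):
    """Eq. (2): E = sum_j (y_dj - y_j)^2 over the output neurons."""
    return sum((yd - y) ** 2 for yd, y in zip(desired, actual))

def binary_cross_entropy(y, p):
    """Eq. (3): loss = -[y*log(p) + (1-y)*log(1-p)] for a 0/1 indicator y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(neuron_output([0.7, 0.97, 0.6, 0.75], [0.0, 0.0, 0.0, 0.0]))  # 0.5
print(round(binary_cross_entropy(1, 0.5), 4))
```

With all-zero weights the sigmoid receives a sum of zero and outputs 0.5, i.e., maximal uncertainty between the two classes; training adjusts the weights to reduce the chosen loss.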
Testing samples (7 samples) were used to provide an independent measure of network performance during and after training. Although neural networks generally perform better with larger training datasets, the available data can be used, given the pandemic situation, to obtain an indicator of the required direction. A research team examined the impact of training set size on NN classification accuracy in the early 1990s [37]; their conclusion demonstrates that NNs only require data samples that can describe the general shape or picture of the case to achieve improved classification accuracy [38,39]. The accuracy of the model during training does not increase as the training set size increases [40].
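The 70/15/15 split of the 45 hospital records into 31 training, 7 validation, and 7 testing samples can be reproduced with a small helper (the random seed is an arbitrary choice for reproducibility):

```python
import random

def split_indices(n_samples, train_frac=0.70, val_frac=0.15, seed=42):
    """Randomly partition sample indices into training/validation/test sets."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)         # reproducible shuffle
    n_train = int(n_samples * train_frac)    # 45 -> 31
    n_val = round(n_samples * val_frac)      # 45 -> 7
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train, val, test = split_indices(45)
print(len(train), len(val), len(test))  # 31 7 7
```

The remaining 7 indices form the test set, giving exactly the 31/7/7 partition used in the study.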
The back-propagation training algorithm is the most widely used: the artificial neurons are organized into layers and send their signals forward, and the errors are then propagated backward. A 4-10-1 MLPNN was the optimum model for the classification of the available features. The NN aims to determine the correct direction to follow. After running the NN, the output is to take the prefab direction instead of converting the current natural rooms into isolated areas. The result of the model is to go with the prefabricated method, saving 43% of the time, while the cost is roughly the same (a 12% reduction), with a computation time of less than one second. The two scenarios have been applied and compared to obtain the maximum number of isolation rooms to overcome the current crisis.

Tables 4 and 5 report COVID-19 case statistics (total cases, deaths, recovered, and cases per 1M population) for the USA, Italy, China, Spain, Germany, France, and Egypt during the data-collection week.

Fig. 9. Confusion matrix of training, validation, test, and the whole system.
Converting the natural isolation rooms into isolation areas
In this scenario, twenty-four natural isolation rooms, as indicated in Fig. 4 (ten rooms attached to toilets), will be converted into a negative isolation zone for hemodialysis in the same building. The building has 20 rooms attached to their toilets and one bay area, together with services: a doctor's room, staff room, store, dirty utility, and staff change area. As indicated in Fig. 4-b, a zoom-in on the two upper-right corner rooms shows the existing patient bed positions, door, and window openings.
Regarding the pathogens, and to facilitate the implementation, isolation rooms may not require an anteroom, although it serves as a controlled area for the transfer of supplies, equipment, and persons, acts as a barrier against the potential loss of pressurization, and provides a place where people can gown before entering or exiting the isolated area. Negative pressure rooms prevent air from the bathroom or patient area from escaping into the corridor; this is achieved by exhausting a greater quantity of air than the inlet air. A well-designed exhaust system is necessary for the negative pressure isolation room. This system has been implemented, and after addressing the main problem of obtaining a large number of isolation rooms in a very tight time frame, it can be supplied with a pressure gauge and alarm system to signal when pressurization has not been achieved. All walls are continuous PVC without separation, antibacterial metallic false ceilings have been selected, and antibacterial vinyl is used for all floors except the toilets. As indicated in Fig. 5, the door opening has been inverted, the patient bed rotated by 90 degrees, the window sealed, and standalone mechanical ventilation implemented, with a centralized inlet-air supply (the inlet can be a standalone system) and dedicated fans creating the negative pressure drop, exhausting from the patient space and toilet.
Prefabricated sections
On the other hand, a prefabricated section, as indicated in Fig. 6, is a fast solution to respond with urgency to the pressing need for hospital isolation rooms during the COVID-19 pandemic. It includes eight isolated patient rooms together with a reception area, doctor's room, staff change, services, toilet, and stores. The patient area has a restricted door, and the main building has stairs and a ramp with an admin/control area. This solution can easily be configured either as airborne infection isolation (AII) rooms with negative pressure or, if needed, as protective environment (PE) rooms with positive pressure, via automatic dampers on the AC inlet and outlet.
As indicated in Fig. 7, (a) is the design for an isolated room in the prefabricated scenario, while (b) is the final result after implementation at the fever hospital. As indicated in Fig. 8 and Tables 4 and 5, data were collected over one week to show the increasing number of cases and the need to respond to this need in very limited time [41].
Results and discussions
According to Fig. 8, the total number of cases doubled within eight days, while the ordering of countries by total number of cases also changed. Quick deployment is thus essential: prefab solutions should be built with minimal labor and be pre-wired inside the walls, with self-closing doors, 100% fresh air intake, and HEPA-filtered exhaust air. The walls should have a smooth antimicrobial coating with chemical and disinfectant resistance, noise abatement, and scratch resistance. In this study, two approaches were pursued: the first transformed natural isolation areas into negative-pressure isolation areas, while the other created prefab sections housing isolated patients and their services. To decide which approach to select, an NN model was applied, considering four main parameters: time, cost, facilities, and space. The result of this model favors the prefabricated method, as indicated in the Fig. 9 confusion matrices (for training, validation, test, and the whole dataset), leading to the conclusion that the prefabricated solution is the best one. An extra step fixed three parameters (at their average values) and varied only one. When only time was varied, the result was that prefab saves time by 43%; varying only cost showed a 12% reduction with prefab.
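The scenario-selection step can be illustrated with a minimal single-neuron stand-in for the NN model, trained over the four decision parameters (time, cost, facilities, space). The feature vectors and labels below are synthetic and purely illustrative; the paper's actual dataset and network architecture are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative training data: each row is (time, cost, facilities, space),
# scaled to [0, 1]; label 1 = "prefabricated" preferred, 0 = "conversion".
X = rng.uniform(size=(200, 4))
y = (0.4 * X[:, 0] + 0.4 * X[:, 1] - 0.1 * X[:, 2] - 0.1 * X[:, 3] > 0.3).astype(float)

w = np.zeros(4)
b = 0.0
for _ in range(2000):                        # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # single-neuron "network" output
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = ((X @ w + b) > 0.0).astype(float)
accuracy = float((pred == y).mean())         # training accuracy of the sketch
```

A real study would use a multilayer network with separate training/validation/test splits, which is what the confusion matrices in Fig. 9 summarize.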
The two approaches have been implemented in practice. The NTI conversion produced negative-pressure isolation wards in 21 working days at 97.2 k$, while the prefab technique delivered eight isolated patient rooms and their services in 12 working days at 22.7 k$. A comparison between the two studied scenarios highlights the results of the estimation model and the implementation: the prefabricated solution provides a shorter time and lower cost, by 43% and 78% respectively in average values, as indicated in Fig. 10. Thanks to these strategies, the fever hospital was able to expand its number of isolation rooms in a short period. In terms of price, the NTI conversion is predicted to be roughly five times more expensive than the prefab.
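The reported savings can be checked directly from the implementation figures; the cost saving computes to about 77%, close to the rounded 78% quoted, and the cost ratio comes out near 4.3×, which the text rounds to "five times":

```python
# Reported implementation figures: NTI conversion vs prefabricated sections.
nti_days, nti_cost_k = 21, 97.2            # working days, k$
prefab_days, prefab_cost_k = 12, 22.7

time_saving_pct = 100.0 * (nti_days - prefab_days) / nti_days        # ~42.9%
cost_saving_pct = 100.0 * (nti_cost_k - prefab_cost_k) / nti_cost_k  # ~76.6%
cost_ratio = nti_cost_k / prefab_cost_k                              # ~4.3x
```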
Conclusions
This paper presented two quick solutions for isolating patient beds during the COVID-19 pandemic. The first converts natural isolation areas in hospitals into negative-pressure isolation wards in only 21 working days. The second prefabricates building sections housing eight isolated patients and associated services within 12 working days. Both methods were successful and helped the fever hospital to increase its number of isolation rooms in a very short time. The neural network is used in the presented paper as a type of artificial intelligence technique. Regarding cost, the conversion approach is predicted to be five times more expensive than prefabrication, owing to all the essential utilities required compared with the temporary prefab solution. The analyzed neural network model helped the hospital make the best possible choice. Although both methods were designed to provide as many isolation rooms as possible, the prefabricated method is the most cost-effective (22 k$ instead of 97 k$ for converting natural isolation areas) and time-efficient option, based on both the research model and the execution.
"year": 2022,
"sha1": "6fd83674a7593b6a28e30c8aa7e8abffafca037d",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.bspc.2022.104100",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "65ece8583f6a0edfdecd40371666d0ac99aae36b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
233699784 | pes2o/s2orc | v3-fos-license | Prospective study on the clinico-hematological profile of dengue fever patients in Navi Mumbai
Dengue is currently regarded globally as the most important mosquito-borne viral disease, presenting with varied symptomatology. Dengue virus causes a spectrum of illness ranging from inapparent, self-limiting classical dengue fever (DF) to life-threatening dengue haemorrhagic fever (DHF) and dengue shock syndrome (DSS). Recently, it has emerged as an important public health threat in urban areas. This is attributable to population migration to cities, resulting in urban overcrowding, and to infrastructure construction in these areas providing unhindered opportunities for breeding of the vector. There is a seasonal rise in the number of cases, especially during the months of May to September, presenting to the emergency and outpatient departments, which imposes an additional load on an already overburdened system, especially for staffing, laboratory services and acute ward admission. Doke et al mentioned that the epidemiology and clinical presentation of dengue infection differ significantly across geographical areas in India and that there is a need to systematically collect data from various regions and study the nature and course of dengue infections. This present study was done at a tertiary care hospital in Navi Mumbai, Maharashtra, to study the clinico-hematological profile of patients with dengue fever.
INTRODUCTION
Dengue is currently regarded globally as the most important mosquito-borne viral disease, presenting with varied symptomatology. 1 Dengue virus causes a spectrum of illness ranging from inapparent, self-limiting classical dengue fever (DF) to life-threatening dengue haemorrhagic fever (DHF) and dengue shock syndrome (DSS). Recently, it has emerged as an important public health threat in urban areas. This is attributable to population migration to cities, resulting in urban overcrowding, and to infrastructure construction in these areas providing unhindered opportunities for breeding of the vector. 2 There is a seasonal rise in the number of cases, especially during the months of May to September, presenting to the emergency and outpatient departments, which imposes an additional load on an already overburdened system, especially for staffing, laboratory services and acute ward admission. Doke et al mentioned that the epidemiology and clinical presentation of dengue infection differ significantly across geographical areas in India and that there is a need to systematically collect data from various regions and study the nature and course of dengue infections. 3 This present study was done at a tertiary care hospital in Navi Mumbai, Maharashtra, to study the clinico-hematological profile of patients with dengue fever.
Study design and sampling
This prospective observational study was conducted at DY Patil Hospital and Medical College, Navi Mumbai. The study was initiated after obtaining permission from the Institutional Ethics Committee and was conducted for a period of one year (January 2019 to December 2019) from that date. All patients were observed over the entire duration of their hospital stay (up to 7 days). We included adult patients of both genders who were admitted with clinically and serologically diagnosed dengue fever and consented to participate in the study. Patients with other concomitant febrile illnesses, including malaria, enteric fever, chikungunya fever, etc., were excluded from the study. We also excluded patients with a known history of haematological disorders or drugs modifying haematological parameters, and those with comorbid severe systemic disease or any other terminal illness.
Data collection and data analysis
Once identified as a study participant based on the inclusion and exclusion criteria, a detailed history of every patient was taken after obtaining written consent. Patients were then subjected to a detailed clinical examination and the observations were carefully noted. Patients' data were collected in the Case Record/Report Form (CRF) over a period of 7 days, considering that most patients recovered within a week. For serological investigations, 2 ml of patient blood was collected in a red-colored vacutainer for IgM and IgG testing by the Enzyme Linked Immunosorbent Assay (ELISA) method and for the NS1-Ag (non-structural protein-1 antigen) test. For routine investigations, 2 ml of venous blood was collected in EDTA tubes from the cubital vein of all patients. Laboratory investigations like haemoglobin (Hb), total and differential leucocyte counts (TLC and DLC), platelet count, haematocrit (HCT) and liver function tests (LFT) were sent for all patients. Ultrasonography of the abdomen was done for all patients.
Descriptive analysis was performed in the open-source Epi-Info software. 4 Qualitative data were presented as frequency and percentage, and quantitative data were described as mean and standard deviation.
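The descriptive summaries above (mean ± SD for quantitative variables, frequency and percentage for qualitative ones) can be reproduced with the Python standard library; a small sketch on made-up records, not the study data:

```python
import statistics
from collections import Counter

# Illustrative patient records (age in years, diagnosis); not the study data.
ages = [25, 33, 41, 29, 52, 33, 38, 27, 45, 31]
diagnoses = ["DF", "DF", "DHF", "DF", "DSS", "DF", "DF", "DHF", "DF", "DF"]

mean_age = statistics.mean(ages)     # quantitative: mean
sd_age = statistics.stdev(ages)      # quantitative: standard deviation

counts = Counter(diagnoses)          # qualitative: frequency
percentages = {dx: 100.0 * n / len(diagnoses) for dx, n in counts.items()}
```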
RESULTS
A total of 80 dengue patients aged 18-65 years were observed during the study; the mean age of the study population was 33±12 years, and 66% were male (Table 1). All patients presented with fever while 71.25% had myalgia. Retro-orbital pain, rash and vomiting were observed in 38%, 26% and 26% respectively, whereas 23.75% of patients had cough and bleeding from any site. Abdominal pain, joint pain, breathlessness and loose stools were present in 20%, 6.25%, 5% and 3.75% of the patients respectively. Three fourths of the patients were diagnosed with dengue fever, while 18.75% and 6.25% were diagnosed with DHF and DSS, respectively. Figure 1 describes the vitals of the patients as observed over the first seven days of admission. On day 1, mean temperature was 37.55±0.61°C, which reduced to 37°C by day 6 and was steady on day 7. Mean respiratory rate declined from day 1 to day 7. Pulse rate gradually decreased from day 1 (84.31±9.57 bpm) to day 5 (82.00±5.44 bpm). A rise in pulse was observed on day 6 (85.20±8.20 bpm), which reduced to 79.60±3.85 bpm on day 7. The blood pressure steadily increased over these seven days. As shown in Table 2, the highest incidence of hepatomegaly was 55%, on day 5. Hepatosplenomegaly increased from day 1 (9%) to days 6 and 7 (60%). Crepitations decreased during the observational period: on the first day, 23% of cases showed crepitations, while from day 5 onwards no crepitations were observed. Figure 2 describes the haematological parameters of the patients. Mean haemoglobin levels and haematocrit started increasing from the second day onwards, while WBC count and platelet count increased gradually from the first day onwards. SGPT levels increased to 68 IU on day 7 from 59.21 IU on day 1, while SGOT rose to 135 IU on day 7 from 90.75 IU on day 1. On day 7, a rise in urea (28 mg/dL) and serum creatinine (1.17 mg/dL) levels was detected.
Initially, urea and creatinine levels were 21.51 mg/dL and 1.09 mg/dL, respectively. Chest X-ray did not show any pathological changes in 76% of the patients. Ultrasound abdomen on day 1 found 18.75% and 17.5% of patients with ascites and hepatomegaly, respectively (Table 3). Splenomegaly was diagnosed in 3.75% of patients, while hepatosplenomegaly and fatty liver were observed in 8.75% and 2.5%, respectively. A simple renal cyst was seen in one case. There were two deaths, both cases of DSS.
DISCUSSION
A rising incidence of dengue fever outbreaks has been reported over the past few years from various states of India, which constantly threatens the health care system with respect to associated morbidity and mortality, loss of work and out-of-pocket expenditure. Dengue is endemic in India and we conducted this study to investigate the clinical and haematological profile of patients presenting to our hospital. We observed that 21 to 40 years was the most common age group and the mean age of the patients was 33 years. Males comprised 66% of the study population. Oza et al reported the mean age of dengue patients to be 24 years with 62% being males. 5 Prasad and Kumari found 61% of their patients to be between the age of 18 and 30 years and 76% were males. 6 Very few studies from India have reported a higher proportion of female dengue patients. Nair et al reported 53% of their study population of dengue patients to be females. 7 The most frequent symptoms in the present study were fever (100%) and myalgia (71%). Loose stools are an uncommon symptom in dengue fever and were observed in 4% of the patients. Among systemic manifestations, we observed that hepatomegaly was the commonest on day 1 of admission. This continued till day 5, after which hepatosplenomegaly became the most common systemic finding. Nair et al also reported fever and body ache to be the most common symptoms; the authors reported symptoms of diarrhoea in 10% of the patients. Similar to our findings, Oza and colleagues reported fever and myalgia to be the most common presenting symptoms. A high incidence of gastrointestinal symptoms like nausea and vomiting was also reported in a study from Kerala and is attributed to hepatomegaly and serosal inflammation. 8 We observed that mean haemoglobin levels and haematocrit started increasing from the second day onwards, while WBC count and platelet count increased gradually from the first day onwards. None of the patients had haemoconcentration.
Prasad and Kumari observed haemoconcentration in 50/120 (41.6%) of patients with DHF. Khatroth et al observed raised haematocrit in 16.6% of patients at presentation. 10 Dengue fever and DHF are associated with the capillary leak syndrome that results in haemoconcentration.
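The haemoconcentration criterion discussed above is conventionally a rise in haematocrit of 20% or more over the baseline (or recovery) value, as in the WHO DHF case definition; a small helper illustrating the check, with made-up haematocrit values:

```python
def hemoconcentration(hct_peak: float, hct_baseline: float,
                      threshold_pct: float = 20.0) -> bool:
    """Flag haemoconcentration: peak haematocrit exceeding the
    baseline/recovery value by at least threshold_pct percent."""
    rise_pct = 100.0 * (hct_peak - hct_baseline) / hct_baseline
    return rise_pct >= threshold_pct

hemoconcentration(50.0, 40.0)   # 25% rise -> True
hemoconcentration(44.0, 40.0)   # 10% rise -> False
```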
CONCLUSION
Dengue continues to pose a serious challenge to clinicians, microbiologists and health care workers. Almost all the patients included in our study showed both haematological and biochemical abnormalities. This study has revealed a varied clinical profile of dengue fever: along with the typical symptoms, some atypical symptoms were also observed. | 2021-04-17T15:44:20.204Z | 2021-03-23T00:00:00.000 | {
"year": 2021,
"sha1": "63915523837841baf478fd247303efe394fed920",
"oa_license": null,
"oa_url": "https://www.ijmedicine.com/index.php/ijam/article/download/2834/1965",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "63915523837841baf478fd247303efe394fed920",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263310390 | pes2o/s2orc | v3-fos-license | Secondary Whistler and Ion-cyclotron Instabilities Driven by Mirror Modes in Galaxy Clusters
Electron cyclotron waves (whistlers) are commonly observed in plasmas near Earth and in the solar wind. In the presence of nonlinear mirror modes, bursts of whistlers, usually called lion roars, have been observed within the low magnetic field regions associated with these modes. In the intracluster medium (ICM) of galaxy clusters, the excitation of the mirror instability is expected, but it is not yet clear whether electron and ion cyclotron (IC) waves can also be present under conditions where gas pressure dominates over magnetic pressure (high β). In this work, we perform fully kinetic particle-in-cell simulations of a plasma subject to a continuous amplification of the mean magnetic field B(t) to study the nonlinear stages of the mirror instability and the ensuing excitation of whistler and IC waves under ICM conditions. Once mirror modes reach nonlinear amplitudes, both whistler and IC waves start to emerge simultaneously, with subdominant amplitudes, propagating in low-B regions, quasi-parallel to B(t). We show that the underlying source of excitation is the pressure anisotropy of electrons and ions trapped in mirror modes with loss-cone-type distributions. We also observe that IC waves play an essential role in regulating the ion pressure anisotropy at nonlinear stages. We argue that whistler and IC waves are a concomitant feature of late stages of the mirror instability even at high β, and are therefore expected to be present in astrophysical environments like the ICM. We discuss the implications of our results for collisionless heating and dissipation of turbulence in the ICM.
INTRODUCTION
Several classes of astrophysical plasmas display fully developed turbulent states and a weak collisionality, in the sense that the particles' mean free path is several orders of magnitude larger than the typical radius at which they gyrate around the ambient magnetic field. These two characteristics alone can make the transport properties and global evolution of the astrophysical environment in question challenging and dependent on the local evolution at particles' scales. Therefore a detailed study of the behavior of these plasmas at the kinetic level becomes a necessity.
That is the case of the intracluster medium of galaxy clusters (ICM). The ICM is a hot, magnetized (Bonafede, A. et al. (2010)), weakly collisional and turbulent (Schuecker, P. et al. (2004); Zhuravleva et al. (2014); Hitomi Collaboration et al. (2016)) gas in the plasma state where the thermal pressure greatly exceeds the magnetic pressure (β ≡ 8πP/B² ∼ 10–100, where P is the isotropic thermal pressure and B the magnetic field strength). In these conditions, departures from thermodynamic equilibrium, such as pressure anisotropies, are easy to achieve. For example, slow compression of the magnetic field increases particle kinetic energy perpendicular to the magnetic field such that the magnetic moment (or, the magnetic flux through the particle gyro-orbit) remains constant, leading to an excess of perpendicular pressure P_⊥ over parallel pressure P_∥. However, pressure anisotropy cannot grow unchecked. Pressure anisotropies can easily destabilize microinstabilities such as the mirror, firehose, ion-cyclotron and whistler instabilities (Schekochihin et al. (2005); Schekochihin & Cowley (2006)). The back reaction of these instabilities on the particles can maintain the pressure anisotropy near its marginally unstable value, and they are thought to play an important role in several aspects of ICM transport and heating (Kunz et al. (2011); Berlok et al. (2021); Drake et al. (2021); Perrone & Latter (2022a,b); Ley et al. (2023); Tran et al. (2023)).
In a similar vein, the solar wind and some regions of the Earth's magnetosheath and magnetosphere host plasmas that are also collisionless and turbulent. Even though the plasma β is lower than in the ICM (β_i ∼ 1–10, β_e ∼ 1), we can encounter some similarities. In particular, the plasma is also pressure anisotropic, and the same microinstabilities mentioned above are found to be present, usually in their fully developed, nonlinear stage (Bale et al. (2009)). Particularly important to this work is the presence of the mirror instability (Chandrasekhar et al. (1958); Rudakov & Sagdeev (1961); Hasegawa (1969); Southwood & Kivelson (1993); Kivelson & Southwood (1996); Pokhotelov et al. (2002, 2004)) and its interplay with the whistler and (potentially) ion-cyclotron instabilities (Gary (1992); Gary & Wang (1996)). An example of this has been observed in these space plasmas and termed whistler lion roars.
Whistler lion roars are short bursts of right-hand polarized waves, with frequencies below the electron cyclotron frequency (ω c,e ) commonly observed in the Earth's magnetosheath and magnetosphere (Smith et al. (1969); Tsurutani et al. (1982); Baumjohann et al. (1999); Breuillard et al. (2018); Giagkiozis et al. (2018); Kitamura et al. (2020); Zhang et al. (2021)), therefore identified as whistler waves. They have also been observed in Saturn's magnetosheath (Píša et al. (2018)) and the solar wind. They are observed in regions of locally low magnetic field strength (magnetic troughs, or magnetic holes) of magnetic fluctuations. These magnetic troughs are usually identified as structures produced by mirror instability modes, which are able to trap electrons with low parallel velocity within these regions due to the aforementioned invariance of magnetic moment (Southwood & Kivelson (1993)).
Several mechanisms have been proposed to explain the excitation of whistler lion roars. They usually invoke the pressure anisotropy P ⊥,e > P ∥,e that electrons generate while trapped inside the magnetic troughs (P ⊥,e and P ∥,e are, respectively, the electron pressure perpendicular and parallel with respect to the local magnetic field B). Other mechanisms have also been proposed involving counterpropagating electron beams inside these regions, and butterfly distributions in pitch-angle (Zhang et al. (2021); Jiang et al. (2022)). As the waves propagate out from the magnetic troughs, they are thought to interact with electrons, regulating the number of trapped electron inside magnetic troughs and also the global anisotropy of electrons in the magnetosheath. This way, there would be a causal connection between an ion-scale mirror instability with an electron scale whistler instability at nonlinear stages, providing valuable insight into the interaction of mirror modes with electrons.
The question arises as to whether a similar interplay can be expected in the ICM. Such behavior would imply a more complex scenario in which several microinstabilities would be causally connected and coexisting with each other, and several channels of turbulent energy dissipation would open, leading to a much richer dynamics.
Mirror instability and its consequences have been extensively studied using particle-in-cell (PIC) simulations of moderately and high-β plasmas, both hybrid (Kunz et al. (2014); Melville et al. (2016); Arzamasskiy et al. (2023)) and fully kinetic (Sironi & Narayan (2015); Riquelme et al. (2015, 2016); Ley et al. (2023)), up to nonlinear stages. Consistent with early theoretical works (Southwood & Kivelson (1993); Kivelson & Southwood (1996)), it has been demonstrated that mirror modes are efficient in trapping ions inside regions of low magnetic field strength during their secular growth (Kunz et al. (2014)). When mirror modes reach amplitudes of order δB/B ∼ 1, they reach a saturated stage and the ions eventually undergo scattering, allowing them to escape. This trapping process is similar for electrons, and it has been shown to have important consequences for the electron viscosity and thermal conduction of the plasma (Riquelme et al. (2016); Roberg-Clark et al. (2016)). Interestingly, Riquelme et al. (2016) reported the observation of whistler waves in the nonlinear, saturated stages of mirror modes in their simulations, along with ion-cyclotron (IC) waves, although they did not pinpoint the cause of the excitation.
In this work, we use PIC simulations to investigate the nonlinear stages of the mirror instability at moderate and high-β, focusing on the abovementioned excitation of whistler and IC waves. We observe that, indeed, both right hand and left hand polarized, quasi parallel-propagating waves are excited at the end of mirror's secular growth and during its saturated stage, and provide evidence for their excitation mechanism associated to the pressure anisotropy electrons and ions within magnetic troughs of mirror modes. The right-and left-handed circular polarization of these waves lead to their identification as electron-cyclotron (i.e. whistlers) and ioncyclotron (IC) waves. We also provide some additional discussion about their nature. We describe the interaction of these waves with electrons and ions, and their effect on the regulation of the pressure anisotropy at late stages.
This paper is organized as follows. Section §2 describes our simulation setup and the runs we perform. Section §3 shows our simulation results starting from the excitation of the mirror instability, an early whistler burst and then the late excitation of the electron and ion cyclotron waves at nonlinear stages of the mirror instability. We also detail the mechanism by which these cyclotron waves are excited during the saturated stage of mirror modes, by tracking ions and electrons throughout the simulations. We also describe the subsequent interaction of these waves with the ions and electrons at late stages. In section §4 we discuss the dependence of our results on the mass ratio used in our simulations and show that they are fairly insensitive to it. In section §5 we present results of simulations at different initial ion plasma beta, and show these cyclotron waves are also present at lower and higher betas as well. Finally, we discuss the implication of our work in the context of galaxy clusters and present our conclusions in section §6.
SIMULATION SETUP
We perform fully kinetic, 2.5D particle-in-cell (PIC) simulations using TRISTAN-MP (Buneman (1993); Spitkovsky (2005)), in which we continuously shear a collisionless, magnetized plasma composed of ions and electrons (Riquelme et al. (2012)). The magnetic field is initially spatially uniform and starts pointing along the x-axis. A shear velocity field is imposed with v = −sx ŷ (red arrows in Fig. 1), where x is the distance along the x-axis and s is a constant shear rate. We solve the PIC system of equations using shearing coordinates, as implemented in Riquelme et al. (2012) (the suitability of this approach to studying ion Larmor scale phenomena is also discussed in Riquelme et al. (2015)). The conservation of magnetic flux implies that the y-component of the magnetic field B evolves as dB_y/dt = −sB_0, whereas dB_x/dt = 0 and dB_z/dt = 0. The action of the shear then continuously amplifies the magnetic field strength such that its magnitude evolves as B(t) = B_0 √(1 + s²t²). In our simulations, ions and electrons are initialized with Maxwell-Jüttner distributions (the relativistic generalization of the Maxwell-Boltzmann distribution, Jüttner (1911)) with equal initial temperatures T_i^init = T_e^init, and k_B T_i^init/(m_i c²) between 0.01 and 0.02.

Figure 1. The evolution of the simulation domain. Panel a: Initially, the box is straight, the magnetic field is initialized pointing in the x̂ direction, and a shear velocity field v = −sx ŷ is imposed in the y-direction (red arrows). Panel b: The velocity field shears the box continuously throughout the simulation, amplifying the magnetic field and changing its direction in the process due to magnetic flux conservation.
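The closed-form amplification B(t) = B_0 √(1 + s²t²) follows from integrating dB_y/dt = −sB_0 with B_x = B_0 held fixed; a quick numerical cross-check (numpy assumed, units arbitrary):

```python
import numpy as np

s, B0 = 1.0, 1.0                     # shear rate and initial field (arbitrary units)
t = np.linspace(0.0, 2.0, 2001)

Bx = np.full_like(t, B0)             # dBx/dt = 0
By = -s * t * B0                     # integral of dBy/dt = -s*B0
B_numeric = np.hypot(Bx, By)         # |B| from the two components
B_closed = B0 * np.sqrt(1.0 + (s * t) ** 2)

max_err = float(np.max(np.abs(B_numeric - B_closed)))   # ~ machine precision
```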
The physical parameters of our simulations are the initial temperature of ions and electrons (T_i^init = T_e^init), the initial ion plasma beta, β_i^init, the mass ratio between ions and electrons, m_i/m_e, and the ratio between the initial ion cyclotron frequency and the shear frequency, ω_c,i^init/s, which we call the "scale-separation ratio". The numerical parameters in our simulations are the number of macroparticles per cell, N_ppc, the plasma skin depth in terms of grid point spacing, c/√(ω_p,e² + ω_p,i²)/Δx, and the domain size in terms of the initial ion Larmor radius, L/R_L,i^init, where R_L,i^init = v_th,i/ω_c,i^init and v_th,i² = k_B T_i/m_i. These physical and numerical parameters are listed in Table 1. We fix c/√(ω_p,e² + ω_p,i²)/Δx = 3.5 in the simulations presented in Table 1.
In the bulk of the paper we discuss a representative, fiducial simulation with m_i/m_e = 8, β_i^init = 20 (thus β^init = β_i^init + β_e^init = 40) and ω_c,i^init/s = 800 (simulation b20m8w800 in Table 1, highlighted in boldface). We vary the above parameters in a series of simulations, all listed in Table 1. Importantly, given the available computational capabilities, performing a simulation with the realistic mass ratio m_i/m_e = 1836 becomes prohibitively expensive. Therefore, a range of values of the ion-to-electron mass ratio is presented in order to ensure that our results do not strongly depend on this parameter. The effects of varying these parameters are discussed in §§4 & 5.
In the absence of a scattering mechanism and/or collisions, the ion and electron magnetic moments μ_j ≡ p_⊥,j²/(2m_j B) and longitudinal actions J_j ≡ ∮ p_∥,j dℓ are adiabatic invariants (p_⊥,j and p_∥,j are the components of the momentum of a particle of species j perpendicular and parallel to the local magnetic field, respectively, and j = i, e), and therefore are conserved as the system evolves, provided that the variation of B is sufficiently slow compared to the particle cyclotron frequencies; in our case, s ≪ ω_c,j, where ω_c,j = eB/(m_j c) is the cyclotron frequency of particles of species j, c is the speed of light, and e is the magnitude of the electric charge.
The continuous amplification of the magnetic field B implies that the particles' adiabatic invariance drives a pressure anisotropy in the plasma such that P_⊥,j > P_∥,j. In the very early stages of the simulation, we expect the evolution of P_⊥,j and P_∥,j to be dictated by the double-adiabatic scalings (Chew et al. (1956)). Soon after this stage, however, the pressure anisotropy acts as a free energy source in the plasma and is able to excite several kinetic microinstabilities after surpassing their excitation thresholds, which are proportional to β^(−α) (α ∼ 0.5–1) (Hasegawa (1969); Gary & Lee (1994); Gary & Wang (1996)). These microinstabilities break the adiabatic invariants and act upon the pressure anisotropy to regulate the anisotropy growth in the nonlinear stages.
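Under the double-adiabatic (CGL) scalings at constant density, P_⊥ ∝ B and P_∥ ∝ B⁻², so the anisotropy grows as P_⊥/P_∥ − 1 = (B/B_0)³ − 1 while β_⊥ decreases as 1/(B/B_0). A sketch of when this CGL anisotropy first exceeds a mirror-type threshold ∼ 1/β_⊥; the order-unity threshold prefactor is set to 1 and β_⊥^init = 20 is taken purely for illustration:

```python
import numpy as np

s, beta_perp0 = 1.0, 20.0            # shear rate; initial beta_perp (illustrative)
t = np.linspace(0.0, 1.0, 100001)
b = np.sqrt(1.0 + (s * t) ** 2)      # B(t)/B0 under the shear

# CGL at constant density: P_perp ~ B, P_par ~ B^-2  =>  anisotropy = b^3 - 1
anisotropy = b ** 3 - 1.0
beta_perp = beta_perp0 / b           # beta_perp = 8*pi*P_perp/B^2 scales as 1/b
threshold = 1.0 / beta_perp          # mirror-type threshold, prefactor set to 1

t_cross = float(t[np.argmax(anisotropy >= threshold)])   # first threshold crossing
```

With these assumptions the crossing occurs at t·s ≈ 0.18, i.e. early in a shear time, qualitatively consistent with the early threshold crossing the simulations show.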
In our simulations, and given our initial physical parameters (namely, β_i^init ≡ 8πP_i^init/B_init² = 20), we expect the dominant instability to be the mirror instability. Mirror modes are purely growing (i.e. they have zero real frequency), with the fastest growing modes propagating highly obliquely with respect to the mean magnetic field. Their most unstable wavenumbers satisfy k_⊥ R_L,i ∼ 1, where R_L,i is the ion Larmor radius. This instability presents Landau resonances with particles of very small parallel momentum, p_∥ ≈ 0, that become trapped in between mirror modes and contribute to regulating the pressure anisotropy.
In addition to the mirror instability, we also observe wave activity that we associate with the ion-cyclotron (Gary (1992)) and whistler (Gary & Wang (1996)) instabilities at ion and electron scales, respectively, during the late stages of our simulations. Ion-cyclotron (IC) modes are left circularly polarized and have real frequency below the ion-cyclotron frequency ω_c,i, with the modes of maximum growth rate propagating parallel to the mean magnetic field B. Similarly, whistler modes are right circularly polarized and have real frequency below the electron cyclotron frequency ω_c,e, with the modes of maximum growth rate also propagating parallel to B. As we will see, this wave activity is associated with the ion and electron trapping processes that mirror modes generate.
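The statement that whistler frequencies stay below ω_c,e can be seen from the standard cold-plasma whistler dispersion for parallel propagation, ω/ω_c,e = (k d_e)²/(1 + (k d_e)²) with d_e = c/ω_p,e; this is a textbook approximation, not a result of the paper's simulations:

```python
import numpy as np

# Cold-plasma whistler branch for parallel propagation (electron response only,
# omega_pe >> omega regime):  omega/omega_ce = (k d_e)^2 / (1 + (k d_e)^2),
# where d_e = c/omega_pe is the electron skin depth.
kde = np.logspace(-2, 2, 500)                  # k d_e
omega_over_wce = kde ** 2 / (1.0 + kde ** 2)   # real frequency in units of omega_ce
# The branch stays below omega_ce for all k, approaching it only as k d_e -> inf.
```

The analogous parallel IC branch is bounded above by ω_c,i in the same way.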
3. RESULTS

Figures 2 and 3 summarize the evolution of magnetic field fluctuations and particle pressure anisotropy over time. Figure 2 shows the fluctuations in the magnetic field δB ≡ B − ⟨B⟩ (where ⟨·⟩ denotes a volume average over the entire simulation domain) in its three different components at two different times: t·s = 0.4 (first row, panels a, b and c) and t·s = 1.4 (second row, panels d, e and f). The black arrows in panels a-f denote the direction of the mean magnetic field ⟨B⟩ at those particular times. The components of δB are defined as parallel with respect to the mean field ⟨B⟩ (δB_∥, panels b and e), perpendicular to ⟨B⟩ in the plane of the simulation (δB_⊥,xy, panels a and d), and perpendicular to ⟨B⟩ in the direction out of the simulation plane (δB_z, panels c and f). Additionally, figure 2g shows the evolution of the energy in each of the three components of δB, normalized by B(t)²: δB_∥² (blue line), δB_⊥,xy² (red line), and δB_z² (green line). Figure 3a shows the evolution of the ion pressure anisotropy ΔP_i ≡ P_⊥,i − P_∥,i for run b20m8w800, and the dashed gray line shows the approximate instability threshold for the mirror instability (Hasegawa (1969); Hellinger (2007)). We can see that the ion anisotropy surpasses the mirror threshold very early in the simulation, and reaches its maximum value at t·s ≈ 0.5 (we will call this stage the anisotropy overshoot hereafter). We will show that this is consistent with the beginning of the secular growth of mirror modes (Kunz et al. (2014); Riquelme et al. (2016)). Figure 3b shows the same for the electron pressure anisotropy, which we will show relaxes by efficient scattering.
3.1. Mirror Instability Evolution
Since mirror modes are highly oblique, their evolution is well represented by the time trace of δB∥² shown in fig. 2g. We identify both a linear, exponentially growing phase until t·s ≈ 0.45, and a subsequent nonlinear, slower-growing secular phase, consistent with the different evolutionary phases of the ion and electron pressure anisotropies described above. Besides the break in the mirror modes' evolution at t·s ≈ 0.45, a second break in the secular growth occurs around t·s = 0.6, followed by a shallower growth slope. We will show that this break coincides with the excitation of both whistler and IC waves in δB⊥,xy² and δBz², implying that whistler and IC waves, albeit smaller in amplitude, modulate the evolution of mirror modes during the nonlinear stages.
3.1.1. Linear, exponentially growing mirror phase

After an early CGL phase of the pressure anisotropy ∆Pj (j = i, e; see fig. 3), fig. 2g shows the excitation of the mirror instability starting at t·s ≈ 0.35, mainly in the parallel component of the magnetic fluctuations, δB∥ (blue line), consistent with theoretical expectations (Southwood & Kivelson (1993); Pokhotelov et al. (2004)). Figure 2g also shows that δB∥ grows first and has the largest amplitude throughout the entire simulation, meaning that the mirror instability is indeed the dominant instability. Figure 2b (i.e. δB∥) shows the linear, exponentially growing phase of mirror modes at t·s = 0.4, where small filamentary structures of high local magnetic field amplitude start to emerge and slowly grow, in between wider regions of low local magnetic field amplitude. The obliqueness of the modes is readily apparent, as is the fact that the mirror-generated magnetic fluctuations lie mainly in the (k, B) plane (they can be seen in δB⊥,xy too, but not in δBz, as expected from linear theory (Pokhotelov et al. (2004))). The oblique nature of mirror modes can also be seen in fig. 4a, where we show the spatial power spectrum of δB∥ at t·s = 0.4. The solid and dashed lines represent the directions parallel and perpendicular to the mean magnetic field ⟨B⟩, respectively. At t·s = 0.4, the power is mostly concentrated between wavevectors 0.44 ≲ kR^init_L,i ≲ 1.35 and angles 52° ≲ θk ≲ 77°, where θk ≡ cos⁻¹(k·⟨B⟩/kB) is the angle between the mirror modes' wavevector and the mean magnetic field ⟨B⟩.
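The obliquity measurement θk ≡ cos⁻¹(k·⟨B⟩/kB) from the peak of the spatial power spectrum can be sketched as below; the grid, mean-field direction, and mode wavevector are illustrative assumptions, not simulation values.

```python
import numpy as np

# Synthetic oblique mode on a periodic grid.
N, L = 128, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

b_hat = np.array([1.0, 0.0])   # direction of the mean field <B> (illustrative)
k_vec = np.array([3.0, 4.0])   # mode wavevector -> theta_k = arccos(3/5)
dB_par = np.cos(k_vec[0] * X + k_vec[1] * Y)

# Spatial power spectrum and its peak wavevector.
P = np.abs(np.fft.fft2(dB_par))**2
kgrid = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # integer wavenumbers here
ix, iy = np.unravel_index(np.argmax(P), P.shape)
k_peak = np.array([kgrid[ix], kgrid[iy]])

# theta_k = arccos(|k . b| / |k|), folding the +/-k ambiguity into [0, 90] deg.
theta_k = np.degrees(np.arccos(abs(k_peak @ b_hat) / np.linalg.norm(k_peak)))
print(round(theta_k, 2))  # ~53.13 degrees
```

In practice one would sum the power over all modes in a wavenumber band rather than use a single peak, but the angle computation is the same.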
It should be emphasized that the ion-cyclotron wave activity only starts at t · s = 0.6, and not before. There is no sign of an early excitation of the ion-cyclotron instability competing with the mirror instability for the available free energy in ∆P i . Instead, at earlier stages, only the mirror instability is excited, consistent with our initial conditions of high-beta (β init i = 20), where the mirror instability is expected to dominate (e.g. Riquelme et al. (2015)).
The absence of ion-cyclotron waves early in the simulation (0 < t · s < 0.6) is clearly seen in fig. 5a, where we show the power spectrum in time and space of δB z (ω, k ∥ ) + iδB ⊥,xy (ω, k ∥ ) at early stages: 0.3 < t·s < 0.5. This particular combination of the two perpendicular components of δB allows us to disentangle the parallel-propagating waves (with respect to the main magnetic field ⟨B⟩, e.g. ion-cyclotron and whistlers), and also their left-handed and right-handed circular polarizations (Ley et al. (2019); Tran et al. (2023)). In this case, the left-hand circularly polarized wave activity is shown for ω > 0, whereas right-hand circularly polarized wave activity is shown for ω < 0. We readily see that, apart from the ω ≈ 0 power consistent with mirror modes appearing in δB ⊥,xy , there is no left-handed polarized wave activity throughout 0.3 < t · s < 0.5, only right-handed polarized waves, which corresponds to an early excitation of the whistler instability, as we will see in section 3.2.
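The polarization diagnostic built from the complex combination δBz + iδB⊥,xy can be sketched on a synthetic time series. The sign convention (left-handed at ω > 0) is an assumption chosen here to match the text, and all parameters are illustrative.

```python
import numpy as np

# A left-hand circularly polarized test wave: the field rotates such that
# S(t) = dB_z + i * dB_perp = exp(+i * w0 * t).
T, dt, w0 = 1024, 0.1, 1.7
t = np.arange(T) * dt
dB_z, dB_perp = np.cos(w0 * t), np.sin(w0 * t)
S = dB_z + 1j * dB_perp

# The Fourier transform of the complex signal separates the two handednesses:
# left-handed power lands at omega > 0, right-handed at omega < 0.
spec = np.abs(np.fft.fft(S))**2
omega = 2 * np.pi * np.fft.fftfreq(T, d=dt)

w_peak = omega[np.argmax(spec)]
print(w_peak > 0)  # power appears only at omega > 0: left-handed
```

A right-handed wave (dB_perp with the opposite sign) would put all its power at ω < 0 instead, which is how the mirror-band and whistler-band activity are disentangled in fig. 5a.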
3.1.2. Nonlinear, secular mirror phase
At t·s ≈ 0.45, we can clearly see the beginning of the secular growth of the mirror instability, where the modes reach nonlinear amplitudes and keep growing, but at a slower rate. This evolution is consistent with previous works (Kunz et al. (2014); Riquelme et al. (2016)).

Figure 3. Panel a: The evolution of the ion pressure anisotropy ∆Pi/P∥,i for run b20m8w800 is shown as a solid green line. The dashed green line shows the double-adiabatic evolution of ∆Pi/P∥,i (Chew et al. (1956)). The dashed gray line shows the approximate threshold for the mirror instability, 1/β∥,i (Hasegawa (1969)). The dotted-dashed orange line shows the threshold for the IC instability from Gary & Lee (1994) for γIC/ωc,i = 10^−2 (γIC is the IC growth rate). The red dashed line shows the best fit to ∆Pi/P∥,i = Ai β∥,i^αi from t·s = 0.7 to t·s = 2.0, with Ai = 0.544 ± 0.003 and αi = −0.445 ± 0.003. Panel b: The evolution of the electron pressure anisotropy ∆Pe/P∥,e is shown as a solid orange line. The dashed orange line shows the double-adiabatic evolution of ∆Pe/P∥,e. The dashed blue line shows the best fit to ∆Pe/P∥,e = Ae β∥,e^αe from t·s = 0.7 to t·s = 2.0, with Ae = 0.036 ± 0.0002 and αe = −0.341 ± 0.003. The dashed gray line shows the linear threshold for the anisotropic whistler instability from Gary & Wang (1996) for growth rate γW/ωc,e = 0.01 (γW is the whistler growth rate).
Interestingly, the mirror secular growth is interrupted at t·s ≈ 0.6, where the slope of δB∥² breaks. This is also approximately when the ion pressure anisotropy experiences its fastest decline (fig. 3). Mirror modes continue to grow, but at a much slower rate. This is consistent with the saturation of energy in the subdominant components δB⊥,xy² and δBz² (solid red and green lines in fig. 2g, respectively), which also show a distinct pattern of oscillations. This activity is clear evidence of a new burst of waves with components mainly in the direction perpendicular to δB, and we will see that they are consistent with both electron cyclotron waves (whistlers) and ion cyclotron waves excited by electron and ion populations, respectively, that become trapped within mirror modes (see sec. 3.3). Figure 2e shows a late, nonlinear stage of the mirror instability, at t·s = 1.4. At this time, the high-magnetic-field regions of mirror modes (e.g. the red filamentary structures seen in fig. 2b) have grown significantly and merged with neighboring structures to form wider and sharper regions of high local amplitude (δB∥/B ∼ 0.9), whose sizes are comparable to the regions of low magnetic field. At this stage, most of the power is concentrated in wavevectors 0.2 ≲ kR^init_L,i ≲ 1.1 and angles 57° ≲ θk ≲ 85° (see fig. 4b).
After reaching its overshoot, the ion anisotropy starts to decrease towards marginal stability. This decrease stops around t·s ≈ 0.65 at ∆Pi/P∥,i ≈ 0.18, well above the approximate mirror threshold (dashed gray line; Hasegawa (1969); Hellinger (2007)). The anisotropy thus settles at a marginal-stability level above the mirror threshold, similar to some previous works using both hybrid and fully kinetic simulations (Sironi & Narayan (2015); Melville et al. (2016); Ley et al. (2023)).
In order to better characterize the evolution of ∆Pi, we fit a relation ∆Pi/P∥,i = Ai β∥,i^αi over 0.7 ≤ t·s ≤ 2 (in our simulations, the shear motion continuously amplifies B, so β∥,i also evolves). As shown in fig. 3a, the best-fit parameters are Ai = 0.544 ± 0.003 and αi = −0.445 ± 0.003. The obtained exponent is consistent with the marginal-stability threshold given by the ion-cyclotron instability at lower βi (Gary & Lee (1994)). Indeed, the threshold for the IC instability, ∆Pi/P∥,i = 0.53 β∥,i^−0.4, is plotted as a dotted-dashed orange line in fig. 3a for γIC/ωc,i = 10^−2 (Gary & Lee (1994)), and we can clearly see its similarity with our best-fit threshold, even at this higher value of initial β^init_∥,i. This observation was also reported in Sironi & Narayan (2015), and we will see that we indeed observe ion-cyclotron waves as part of the saturated phase of the mirror instability that starts at t·s = 0.6. The presence of ion and electron cyclotron waves coexisting with mirror modes at late, nonlinear stages of the mirror instability has been reported in previous works (Riquelme et al. (2016); Sironi & Narayan (2015); Ahmadi et al. (2018)). In §3.3, we argue that a natural explanation for the source of these cyclotron waves is the pressure anisotropy of particles trapped within nonlinear mirror modes.

3.2. First Whistler Burst

Figure 3b shows the evolution of the electron pressure anisotropy ∆Pe ≡ P⊥,e − P∥,e for run b20m8w800. Initially, the electrons develop their own pressure anisotropy alongside the ions and for the same reasons. The anisotropy follows the double-adiabatic (CGL) scaling (dashed orange line) until t·s ≈ 0.4, by which time it has already reached a value significantly larger than the theoretical threshold for the growth of whistler modes, marked by the gray dashed line (Gary & Wang (1996)). Around this time, the whistler instability starts to grow, as seen in the time trace of δBz² in fig. 2g, which is a rough proxy for whistler waves (and also because there are no left-handed IC waves, as shown in fig. 5a). At t·s ≈ 0.45 the whistler modes saturate and enter a regime of quasi-steady amplitude, which lasts until t·s ≈ 0.53. During this t·s ≈ 0.4−0.53 period, ∆Pe is rapidly drawn down by frequent scattering, reaching a more slowly decreasing regime between t·s ≈ 0.53 and 0.6. This drawdown of the electron anisotropy happens while the ion anisotropy is still growing, and lasts until mirror modes reach amplitudes high enough to start trapping the electrons (t·s = 0.6).

Figure 5. Panel a: The power spectrum of δBz(ω, k∥) + iδB⊥,xy(ω, k∥) in the entire simulation domain and between 0.3 < t·s < 0.5. The frequency is normalized by the initial electron cyclotron frequency ωc,e, and the wavevector is normalized by the plasma frequency ωp,e over the speed of light c. The solid black line shows the linear dispersion relation ωr(k) for the whistler instability according to our linear dispersion solver, whereas the dashed black line shows its growth rate γ. Panel b: The spatial power spectrum of δBz(kx, ky) at t·s = 0.4. The wavenumbers kx, ky are normalized to the initial Larmor radius of the electrons, R^init_L,e. The solid and dashed black lines represent the directions parallel and perpendicular to the main magnetic field at that time.
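The power-law characterization ∆P/P∥ = A β∥^α used in this section can be sketched as a linear fit in log-log space. The mock data below are generated from the ion best-fit values quoted above, purely for illustration of the fitting step.

```python
import numpy as np

# Mock marginal-stability data: Delta_P/P_par = A * beta_par**alpha plus
# small multiplicative noise (A = 0.544, alpha = -0.445 are the ion
# best-fit values from the text, used only to generate the points).
rng = np.random.default_rng(1)
beta = np.linspace(5.0, 25.0, 200)
dP = 0.544 * beta**(-0.445) * rng.normal(1.0, 0.01, beta.size)

# A power law is linear in log-log space: log dP = alpha * log beta + log A.
alpha_fit, logA_fit = np.polyfit(np.log(beta), np.log(dP), 1)
A_fit = np.exp(logA_fit)
print(round(A_fit, 2), round(alpha_fit, 2))
```

The same procedure applied to the electron anisotropy trace yields the electron fit values quoted in the fig. 3 caption.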
The presence of whistler modes at t·s = 0.4 can be seen mainly in the perpendicular components of δB, namely δB⊥,xy and δBz (figures 2a and 2c, respectively). They propagate quasi-parallel to the main magnetic field B in a fairly homogeneous way inside the simulation domain. This quasi-parallel propagation can also be seen in fig. 5b, where we show the spatial power spectrum of δBz(kx, ky) at t·s = 0.4 for run b20m8w800; the solid and dashed black lines indicate the directions parallel and perpendicular to the main magnetic field ⟨B⟩ at t·s = 0.4. The power of δBz(kx, ky) is concentrated at parallel propagation and wavevectors 0.6 < kR^init_L,e < 1. We show the whistler wave frequencies in the power spectrum of fig. 5a. The power is localized in the region ω < 0, i.e. right-handed circularly polarized waves, consistent with the whistler polarization, and within frequencies 0.02 < |ω|/ωc,e < 0.05. As mentioned above, no IC activity is present during this time period.
We also calculated the theoretical dispersion relation of the anisotropic whistler instability using a linear dispersion solver, assuming an initial bi-Maxwellian distribution of electrons (Tran et al. (2023)) and using the initial parameters and the values of T⊥,e and T∥,e directly from the simulations. The dispersion relation ω(k) is shown as a solid black line in fig. 5a, whereas the instability growth rate is shown as a dashed black line. We can see that the power in right-hand circularly polarized waves is consistent with the whistler dispersion relation.
In this way, the early evolution of the electrons is determined by an early burst of whistler modes associated with the initial growth of the electron pressure anisotropy. We will see that, once electrons start to become trapped in between mirror modes at t·s ≈ 0.6, another burst of whistler activity occurs, this time associated with the trapping process within mirror modes during their secular and saturated phases.
3.3. Whistler and Ion-cyclotron Excitations
At the end of its secular growth, when mirror modes have reached sufficiently high amplitudes, we simultaneously observe right-hand and left-hand circularly polarized wave activity, which we identify as whistler and ion-cyclotron waves, respectively. We will see below (§3.3) that these whistler and ion-cyclotron waves propagate mainly in regions of locally low magnetic field (magnetic troughs). The source of this wave activity is identified as the pressure-anisotropic populations of ions and electrons, produced mainly by particles trapped inside the magnetic troughs. The whistler and ion cyclotron waves then pitch-angle scatter both the trapped and untrapped particles, contributing to the regulation of the global anisotropy. Figure 6 shows different spectral properties of the late burst of waves excited from t·s ≈ 0.6 onwards. Figure 6a shows the power spectrum in time of δBz(ω) + iδB⊥,xy(ω) between 0.5 < t·s < 1.1, so that we can see both the left-hand (solid blue line) and right-hand (solid orange line) circular polarizations. The power spectrum peaks at low frequencies, consistent with the nature of the dominant mirror modes (mainly appearing in δB⊥,xy). Additionally, we can clearly see a secondary peak at around ω ∼ 0.2ωc,i, with a spread from ω ∼ 0.1ωc,i to ω ∼ 0.3ωc,i, in both left- and right-hand circular polarizations. This peak is the characteristic signature of the late burst of wave activity, and resembles observations of whistler lion roars in the Earth's magnetosheath (see e.g. figs. 1 and 2 of Giagkiozis et al. (2018), and fig. 3 of Zhang et al. (2021) for right-hand polarized waves). Figure 6b shows the spectrogram of δBz(ω) + iδB⊥,xy(ω) in frequency and time, ranging over 0.4 < t·s < 1.3, with positive frequencies representing left-hand circularly polarized waves and negative frequencies denoting right-hand circularly polarized waves.
Here we can also see the early burst of whistler waves starting at t·s ≈ 0.4 and peaking at t·s ≈ 0.45 (see section §3.2), followed by the burst of both left-hand and right-hand circularly polarized waves starting at t·s ≈ 0.53 and peaking at t·s ≈ 0.65. This coincides with the rise in amplitude of δBz² and δB⊥,xy² (see fig. 2g), and the waves are continuously maintained throughout the simulation at around the same frequencies.
Finally, figure 6c shows the power spectrum of δB z (ω, k ∥ ) + iδB ⊥,xy (ω, k ∥ ) in time and space, at 0.5 < t · s < 1.1. Frequencies and wavenumbers are normalized by ω c,i and ω p,i /c, respectively. Here we can also see the power at low frequencies consistent with the dominance of mirror modes appearing in δB ⊥,xy . The burst of left and right hand circularly polarized waves can be seen concentrated around frequencies ω ≈ 0.2ω c,i and ω ≈ −0.15ω c,i , respectively. Their range in wavenumbers is 0.2 ≲ ck ∥ /ω p,i ≲ 0.5. Overall, the power spectra of both left and right hand polarized waves are very similar to those of ion-cyclotron and electron cyclotron whistlers, and we will identify these waves as such from now on. In the next section, we will confirm that the population of particles that excites these waves have anisotropic distributions that are IC and whistler unstable.
The morphology of IC and whistler waves can also be seen in figures 2d and 2f. The short-wavelength, wavepacket-like structures are identified with whistler modes, which propagate mainly through regions of low magnetic field strength of mirror modes, as we can see from δB⊥,xy (blue shaded regions in fig. 2d). The IC modes, on the other hand, are identified as the longer-wavelength, extended modes that can be seen in δBz. The IC modes seem to propagate through the entire simulation box, given their ion-scale wavelength, whereas whistler modes clearly propagate within the mirrors' magnetic troughs. This also resembles magnetosheath observations of whistler waves within magnetic troughs (e.g. Kitamura et al. (2020)).

Figure 6. Panel a: The power spectrum of δBz(ω) + iδB⊥,xy(ω) as a function of frequency. The frequencies are normalized by the initial ion-cyclotron frequency. The power spectrum of left-handed circularly polarized waves (ω > 0) is shown as a solid blue line, whereas that of right-handed circularly polarized waves (ω < 0) is shown as an orange line folded into positive frequencies. Panel b: Spectrogram of δBz(ω) + iδB⊥,xy(ω) in frequency and time, at 0.4 < t·s < 1.3. The frequency is normalized by the initial ion-cyclotron frequency. Positive and negative frequencies correspond to left-hand and right-hand circularly polarized waves, respectively. Panel c: The power spectrum of δBz(ω, k∥) + iδB⊥,xy(ω, k∥) at 0.5 < t·s < 1.1. Frequencies are normalized by the initial ion gyrofrequency, and wavenumbers are normalized by the initial ion skin depth. Here also, positive and negative frequencies show left-hand and right-hand polarized waves, respectively.

Figure 7. The spatial power spectrum of δB⊥,xy(kx, ky) at t·s = 0.9. The wavenumbers kx, ky are normalized by the initial ion Larmor radius R^init_L,i. The solid and dashed white lines represent, respectively, the directions parallel and perpendicular to the main magnetic field at that time.
The peak frequencies observed in figure 6 for both ion-cyclotron and whistler waves can be understood in terms of their dispersion relations. At high β and kR_L,e ∼ 1, and for quasi-parallel propagation, the whistler dispersion relation can be approximated as (Stix (1992); Drake et al. (2021))

ω_W ≈ ωc,e k_W² d_e² = ωc,i k_W² d_i²,

where d_e = c/ωp,e and d_i = c/ωp,i are the electron and ion skin depths, respectively. Knowing that d_i² = R_L,i²/β_i, we can also write

ω_W/ωc,i ≈ k_W² R_L,i²/β_i.

Similarly, at high β and kR_L,i ∼ 1, and for quasi-parallel propagation, the ion-cyclotron wave dispersion relation is approximately (Stix (1992))

ω_IC ≈ k_IC v_A = ωc,i k_IC d_i,

where v_A is the Alfvén speed, and we can also write

ω_IC/ωc,i ≈ k_IC R_L,i/√β_i.

We can estimate k_W and k_IC by looking at the power spectrum of any of the perpendicular components of the magnetic field fluctuations. Figure 7 shows the power spectrum of δB⊥,xy(kx, ky) at t·s = 0.9, where the solid and dashed white lines denote the directions parallel and perpendicular to the mean magnetic field B at that time, respectively. Apart from the power in the perpendicular direction corresponding to the mirror modes, in the power parallel to B (i.e. along the solid white line in fig. 7) we can distinguish large wavenumbers centered at (kx R^init_L,i, ky R^init_L,i) ≈ (−1.5, 0.75) (and also at (0.75, −1.5)), corresponding to whistlers, and also smaller wavenumbers centered at (kx R^init_L,i, ky R^init_L,i) ≈ (0.5, 0.7), corresponding to ion-cyclotron waves.
The large-wavenumber extent in kx, ky observed in fig. 7 gives an approximate range 1.5 ≲ k_W R^init_L,i ≲ 3.2 for whistlers, implying frequencies consistent with those observed in the negative half of fig. 6c, corresponding to right-hand polarized waves. Similarly, the small-wavenumber extent in kx, ky gives a range 0.4 ≲ k_IC R^init_L,i ≲ 1.1, implying frequencies 0.1 ≲ ω_IC/ω^init_c,i ≲ 0.25, also consistent with the frequencies in the positive half of fig. 6c, corresponding to left-hand polarized waves.
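The wavenumber-to-frequency conversion above can be checked numerically, assuming the approximate quasi-parallel scalings ω_W/ωc,i ≈ (k_W R_L,i)²/β_i and ω_IC/ωc,i ≈ k_IC R_L,i/√β_i and the run's initial β_i = 20. This is an illustrative estimate, not the paper's exact calculation.

```python
import numpy as np

beta_i = 20.0  # the run's initial ion beta (assumption for this estimate)

kR_W = np.array([1.5, 3.2])    # whistler wavenumber range read off fig. 7
kR_IC = np.array([0.4, 1.1])   # ion-cyclotron wavenumber range from fig. 7

# Frequencies in units of the ion-cyclotron frequency omega_c,i.
w_W = kR_W**2 / beta_i          # whistler branch: quadratic in k
w_IC = kR_IC / np.sqrt(beta_i)  # ion-cyclotron branch: linear in k (Alfvenic)
print(np.round(w_W, 2), np.round(w_IC, 2))
```

The IC estimate reproduces the 0.1-0.25 ωc,i range quoted in the text, and the whistler estimate brackets the ω ≈ 0.15 ωc,i peak seen in fig. 6c.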
3.4. 2D Particle Distributions
The specific time at which ion and electron cyclotron wave activity saturates, which coincides with the end of mirror instability's secular growth (t · s ≈ 0.6), and the propagation of whistler waves within regions of low-magnetic field strength, give a hint towards uncovering the mechanism by which the whistler and IC waves are excited.
As a first step, we explore the evolution of the pressure anisotropy of ions and electrons at the time at which the IC and whistler waves are excited. At this time, mirror modes have achieved high amplitudes and created sharp regions of high and low magnetic field strength, making the plasma spatially inhomogeneous. This implies that, in general, the plasma β of ions and electrons will not be the same at different locations in the simulation domain, making the anisotropy thresholds for the growth of the modes different in different regions. For this reason, a more appropriate method is to measure the 2D distributions of pressure anisotropy, β∥ and δB∥/B in the simulation domain. Figure 8 shows the distribution of ion and electron pressure anisotropy as a function of ion β∥,i (panels a, b, c) and electron β∥,e (panels g, h, i), respectively, and the distribution of δB∥/B versus ion β∥,i (panels d, e, f) and electron β∥,e (panels j, k, l), respectively. These distributions are shown at three different times: the beginning of the simulation (t·s ≈ 0, left column); the end of the mirror's secular growth and beginning of ion and electron cyclotron wave activity (t·s = 0.6, middle column); and a late stage well into the saturated regime of the mirror instability (t·s = 1.4, right column). In the top row of fig. 8 (i.e. panels a, b, and c), the dashed gray line corresponds to the approximate mirror instability threshold 1/β∥,i (Hasegawa (1969)), the dashed-dotted orange line corresponds to the theoretical IC threshold 0.53/β∥,i^0.4 from Gary & Lee (1994) for γIC/ωc,i = 10^−2, and the solid black line is the best fit to the global ion anisotropy derived in section 3.1 (see fig. 3a). In the third row of fig. 8 (panels g, h, i), the dotted-dashed black line shows the whistler instability threshold 0.36/β∥,e^0.55 from Gary & Wang (1996), for γW/ωc,e = 10^−2.
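The 2D-distribution diagnostic can be sketched as a histogram of local anisotropy versus local β∥, with each cell tested against the quoted thresholds. The per-cell mock data below are illustrative only; only the threshold expressions come from the text.

```python
import numpy as np

# Mock per-cell values of local beta_par and local P_perp/P_par.
rng = np.random.default_rng(2)
beta_par = rng.uniform(3.0, 25.0, 10_000)
anis = 1.0 + rng.uniform(0.0, 0.5, beta_par.size)

# Instability conditions on Delta_P/P_par = P_perp/P_par - 1, using the
# thresholds quoted in the text (mirror: 1/beta; IC: 0.53*beta**-0.4).
mirror_unstable = (anis - 1.0) > 1.0 / beta_par
ic_unstable = (anis - 1.0) > 0.53 * beta_par**-0.4

# The 2D distribution itself (what fig. 8 visualizes as a color map).
H, bx_edges, by_edges = np.histogram2d(beta_par, anis, bins=(40, 40))

frac_mirror = mirror_unstable.mean()
frac_ic = ic_unstable.mean()
print(int(H.sum()), frac_mirror >= frac_ic)
```

For β∥ ≳ 3 the mirror threshold lies below the IC one, so every IC-unstable cell is also mirror unstable, which the fractions above reflect.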
Starting with the ions, we can see that, from a stable, isotropic distribution at the very beginning of the simulation (fig. 8a), the ions become anisotropic enough to surpass both the mirror threshold and the theoretical IC threshold from Gary & Lee (1994), as well as our best-fit instability threshold, as shown in fig. 8b. At this point (t·s = 0.6), we start to observe the excitation of ion-cyclotron waves, which interact with the ions and start driving them towards a marginally stable state. This can be seen in fig. 8c, where the distribution becomes bimodal, with one population of ions under both the IC threshold and our best-fit threshold (centered at β∥,i ∼ 5 and P⊥,i/P∥,i ∼ 1.2), meaning that they are driven towards marginal stability with respect to the IC threshold. Interestingly, there exists another ion population that is still unstable (centered at β∥,i ∼ 18 and P⊥,i/P∥,i ∼ 1.4), so IC waves can continue to be excited even at these late stages. This could explain the sustained amplitude observed in δBz² and δB⊥,xy² in figure 2g. We can thus see that the unstable population has a higher β∥,i, while the marginally stable population moves to lower β∥,i.
For a similar value of P ∥,i , the difference in the values of β ∥,i between the unstable and marginally stable populations should imply a difference in the local magnetic field strength (recall β ∥,i = 8πP ∥,i /B 2 ). This gives us a hint on the location of the unstable and marginally stable populations in the domain, as mirror modes generate distinct regions of low and high magnetic field strength.
As we can see in figs. 8d, 8e, and 8f , the ions also separate into two populations now in δB ∥ /B. Starting from zero magnetic field fluctuations at the beginning (t · s ≈ 0, fig. 8d), we see how δB ∥ /B starts to grow at t · s = 0.6 ( fig. 8e), until we clearly see the bimodal distribution at t · s = 1.4, separating the two ion populations: the high-β ∥,i population located in regions of δB ∥ /B < 0 (i.e. low-B strength), and the low-β ∥,i population located in regions of δB ∥ /B > 0 (i.e. high-B strength).
We can therefore conclude that, after mirror modes develop and the IC waves are excited (t·s ≳ 0.6), the ions separate into two populations: one of low β∥,i, located mainly in high-B-strength regions and marginally stable to IC waves, and a second population of high β∥,i, located in low-B-strength regions and still unstable to IC waves. This suggests that the IC waves are excited by the unstable ion population in regions of low magnetic field strength, and then interact with the ions in such a way that the ions move to regions of high B strength and low β∥,i. In sections 3.5 and 3.6 we will see that the ions that contribute most to the anisotropy that destabilizes the IC waves are the ones that become trapped within mirror troughs.

Figure 8. Top row: The distribution of ion P⊥,i/P∥,i versus β∥,i in the simulation domain at t·s ≈ 0 (left column), t·s = 0.6 (middle column), and t·s = 1.4 (right column). The dashed gray line represents the approximate mirror instability threshold 1/β∥,i (Hasegawa (1969)), the dotted-dashed orange line represents the IC instability threshold from Gary & Lee (1994) for γIC/ωc,i = 10^−2 (γIC is the IC instability growth rate), and the solid black line represents our best-fit threshold from section 3.1 (see fig. 3a). Second row: The distribution of δB∥/B versus ion β∥,i for the same three times as in the top row. Third row: The distribution of electron P⊥,e/P∥,e versus β∥,e in the simulation domain at the same three times as in the top row. The dotted-dashed black line represents the whistler instability threshold from Gary & Wang (1996). Fourth row: The distribution of δB∥/B versus electron β∥,e for the same three times as in the top row. An animated version of this plot is available in the online version.

In the case of the electrons, we see a similar evolution. From a stable, isotropic distribution at t·s ≈ 0 (fig. 8g), part of the distribution becomes whistler unstable by t·s = 0.6 (fig. 8h), after which the excited whistler waves interact with the electrons, again driving part of the distribution gradually towards marginal stability and generating a bimodal distribution similar to that of the ions. At t·s = 1.4 (fig. 8i), we can see that the electron population with low β∥,e (centered at β∥,e ∼ 5 and P⊥,e/P∥,e ∼ 1) is marginally stable with respect to the whistler threshold, whereas the electron population with high β∥,e (centered at β∥,e ∼ 18 and P⊥,e/P∥,e ∼ 1.2) is still unstable with respect to the whistler threshold. This also implies that whistler waves can still be excited at late stages of the simulation.
Analogously, the electrons also separate into two populations with respect to δB∥/B. As for the ions, the population with low β∥,e is located in regions of δB∥/B > 0 (high B strength), whereas the high-β∥,e population is located in regions of δB∥/B < 0 (low B strength). In this sense, we also conclude that, in the case of electrons, the unstable population is located mainly in regions of low B strength and high β∥,e, where whistler waves are being excited, and the marginally stable population is located mainly in regions of high B strength and low β∥,e. This also suggests that whistler waves interact with electrons such that they move to regions of high B strength. We will also see in sections 3.5 and 3.6 that the electrons that contribute the most to the pressure anisotropy destabilizing whistler waves are the ones that become trapped within mirror modes.
3.5. Physical Mechanism of Secondary IC/Whistler Excitation: Trapped and Passing Particles
In this section, we study the evolution of the ions and electrons that become trapped within mirror modes as part of the mirror instability's interaction with the particles. We characterize the pressure anisotropy and distribution functions of these populations at the moment of trapping, and provide evidence that they are able to destabilize parallel propagating modes that ultimately allow them to escape the mirrors and regulate the overall anisotropy.
As part of their evolution, and after reaching secular growth, mirror modes start to trap particles of low parallel momentum p∥,j (j = i, e) in regions of low local magnetic field strength. The trapped particles bounce between these regions and conserve their magnetic moment in the process (Southwood & Kivelson (1993); Kunz et al. (2014)). In order to investigate the relation between this trapping process and the excitation of these late IC and whistler waves, we select and track a population of ions and electrons throughout the evolution of the simulation, and study the trapped and passing (i.e. untrapped) subpopulations separately.
We select and track two populations of ions and two populations of electrons having relatively small and large parallel momentum at a particular time in the simulation. This way, we make sure that we capture particles that eventually become trapped and others that remain passing. In our fiducial simulation b20m8w800, the two populations of ions that we track have parallel momentum −0.12 < p∥,i/mic < 0.12 and 0.3395 < p∥,i/mic < 0.3405 at t·s = 0.4. Similarly, the two populations of electrons have −0.2 < p∥,e/mec < 0.2 and 0.4599 < p∥,e/mec < 0.4601 at t·s = 0.4.
In order to study the behavior of the tracked particles when the IC and whistler activity starts, we ask how many particles become trapped and how many remain passing during the interval of time in which this activity happens, which we denote by ∆τLR. To answer this, we look at fig. 2g and define ∆τLR as the interval 0.52 < t·s < 0.62, which covers the exponential growth that δBz² and δB⊥,xy² undergo before saturating. This interval also covers the majority of the secular growth of mirror modes (see δB∥²). Having this time interval well defined, we must now define the criterion by which we consider a particle to become trapped or passing during ∆τLR, and for this we look at the evolution of its parallel momentum. Similarly to Ley et al. (2023), we define a particle as trapped during ∆τLR if the median of its parallel momentum over ∆τLR is smaller in absolute value than its standard deviation over ∆τLR, and as passing if the median is greater than or equal to the standard deviation. This is a statement that p∥,j oscillates about a small mean value over ∆τLR, which in turn is a proxy for the oscillatory behavior of p∥,j characteristic of a particle bouncing between mirror points. We confirm that this simple criterion gives excellent results in separating trapped from passing particles. Figure 9 shows the evolution of the parallel momentum of a trapped and a passing ion (panel a) and of a trapped and a passing electron (panel b), where the dashed vertical gray lines indicate ∆τLR. We can see the oscillation pattern in the evolution of the parallel momentum of the trapped ion during ∆τLR and until t·s ≈ 0.7, when it escapes. The parallel momentum of the passing ion evolves without major changes as the ion streams through the simulation box. This behavior is consistent with previous works using hybrid and fully kinetic simulations (Kunz et al. (2014); Riquelme et al. (2016)).
In figure 9d we can also see the oscillating pattern of the parallel momentum of the trapped electron, indicating bouncing inside mirror modes, which ends at t·s ≈ 1.1, when it escapes. The parallel momentum of the passing electron does not vary significantly during ∆τLR, confirming that it was streaming along field lines at least during that interval.
It is worth noting, however, what happens after ∆τLR. Our criterion identifies particles as trapped or passing only within ∆τLR; after that period, particles continue evolving into the saturated stage of mirror modes, where they can escape, be trapped again, or continue streaming unperturbed. Indeed, by looking at its parallel momentum, we can see that after escaping and streaming for a while, the trapped ion shown in figure 9a gets trapped again at t·s ≈ 1.1, bounces inside a mirror mode, and escapes again at t·s ≈ 1.4. Similarly, the trapped electron shown in figure 9b gets trapped again at t·s ≈ 1.2 and seems to stay trapped until the end of the simulation. Interestingly, judging by its parallel momentum, the passing electron also gets trapped at around t·s ≈ 0.7, and then escapes again at t·s ≈ 1.2. Therefore, in a statistical sense, we consider the particles as trapped or passing only over the particular period ∆τLR that we chose, after which they can become passing or trapped again, as long as the mirror saturation persists in the simulation.
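The median-versus-spread trapping criterion described above can be sketched as follows, on synthetic trapped (oscillating) and passing (near-constant) momentum histories; the window and amplitudes are illustrative, not simulation values.

```python
import numpy as np

# Synthetic parallel-momentum histories over the window Delta-tau_LR.
t = np.linspace(0.52, 0.62, 200)

p_trapped = 0.1 * np.sin(2 * np.pi * 40 * t)            # bouncing about zero
p_passing = 0.34 + 0.005 * np.sin(2 * np.pi * 40 * t)   # streaming, ~constant

def is_trapped(p_par):
    """Trapped if the median of p_par over the window is smaller in absolute
    value than its standard deviation, i.e. p_par oscillates about ~zero."""
    return abs(np.median(p_par)) < np.std(p_par)

print(is_trapped(p_trapped), is_trapped(p_passing))  # True False
```

Applied per tracked particle over ∆τLR, this yields the trapped/passing split used in the statistics of the following section.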
Physical Mechanism of Secondary IC/Whistler Excitation: Distribution Functions
In this section, we look at the evolution of the pressure anisotropy and distribution functions of trapped and passing ions and electrons defined according to the criterion described in section §3.5. We see that during ∆τ LR , both trapped ions and trapped electrons contribute most of the pressure anisotropy necessary to destabilize IC and whistler modes. We show that these IC and whistler waves interact in a quasilinear fashion with ions and electrons, respectively, and quickly regulate their pressure anisotropy such that their distributions evolve to a more isotropic state. Figure 10a shows the evolution of the pressure anisotropy of trapped and passing ions. We can see that the anisotropy of trapped ions initially follows a double-adiabatic (CGL, dotted blue line) evolution until t · s ≈ 0.5 (i.e., just starting ∆τ LR ), when the mirror modes start to trap them. We can readily see that during ∆τ LR , the trapped ions develop a significant anisotropy, peaking at around t · s ≈ 0.55. The anisotropy is quickly regulated and converges to the best-fit threshold that we derived in section 3.1 and show in figure 3a. Similarly, the pressure anisotropy of passing ions evolves in a relatively unperturbed fashion following CGL evolution (dotted red line) through the majority of ∆τ LR , until t · s ≈ 0.6, where it passes from negative values (consistent with passing ions having preferentially large parallel momentum) to a positive but more isotropic state consistent with the best-fit threshold from fig. 3a.
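The double-adiabatic baseline can be made concrete with a minimal sketch, assuming constant density and an initially isotropic plasma (an illustrative simplification: CGL conserves p ⊥ /(nB) and p ∥ B 2 /n 3 , so at fixed n, p ⊥ ∝ B and p ∥ ∝ B −2 as the mean field is amplified):

```python
def cgl_anisotropy(b_ratio):
    """CGL prediction for Delta P / P_par = p_perp/p_par - 1 after the mean
    field grows by a factor b_ratio = B/B0, assuming constant density and an
    initially isotropic pressure: p_perp ~ B, p_par ~ B**-2, so the pressure
    ratio grows as (B/B0)**3."""
    return b_ratio**3 - 1.0

# A 10% field amplification already drives a ~33% positive anisotropy,
# illustrating why the shear-driven field growth keeps pushing the plasma
# against the instability thresholds.
print(cgl_anisotropy(1.1))
```

This is only the unregulated baseline; in the simulations the anisotropy detaches from it once mirror trapping and wave scattering set in.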
The behavior of the pressure anisotropy of trapped and passing particles can be understood as follows. Mirror modes interact resonantly with ions and electrons according to the resonance condition ω M − k ∥,M v ∥ = 0, where ω M and k ∥,M are the frequency and parallel wavenumber of mirror modes, respectively, and v ∥ is the parallel velocity of the particle. The very low frequency of mirror modes, ω M ∼ 0, implies that the resonant particles are the ones having very low v ∥ (v ∥ < γ M /k ∥,M , where γ M is the mirror growth rate, Southwood & Kivelson (1993); Pokhotelov et al. (2002)). These are the particles that become trapped within mirror modes (Kivelson & Southwood (1996)). Consequently, all trapped particles have very low parallel velocity and, as a whole, they should naturally have a pressure anisotropy P ⊥,j > P ∥,j (j = i, e). Similarly, all passing particles have large v ∥ , and therefore they have a pressure anisotropy P ∥,j > P ⊥,j . In this sense, fig. 10 is consistent with the trapping argument described in Kivelson & Southwood (1996) (see their fig. 1).
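The trapping argument can be illustrated numerically with a toy calculation (hypothetical Maxwellian subpopulations, not simulation data): selecting particles by parallel velocity alone already produces the expected signs of the anisotropy for trapped and passing populations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def anisotropy(v_par, v_x, v_y, mass=1.0):
    """Delta P / P_par = P_perp/P_par - 1, with P_perp = m<v_perp^2>/2
    (v_perp^2 = v_x^2 + v_y^2 spans two degrees of freedom) and
    P_par = m<v_par^2>."""
    p_perp = 0.5 * mass * np.mean(v_x**2 + v_y**2)
    p_par = mass * np.mean(v_par**2)
    return p_perp / p_par - 1.0

# Trapped particles: small v_par (resonant with omega_M ~ 0), thermal v_perp.
trapped = anisotropy(rng.normal(0, 0.2, n), rng.normal(0, 1, n), rng.normal(0, 1, n))
# Passing particles: preferentially large v_par, same v_perp spread.
passing = anisotropy(rng.normal(0, 1.5, n), rng.normal(0, 1, n), rng.normal(0, 1, n))
```

The trapped subsample comes out with P ⊥ > P ∥ and the passing one with P ∥ > P ⊥ , mirroring fig. 10 and the Kivelson & Southwood (1996) argument.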
The fact that both trapped and passing ions evolve into the average level of ion anisotropy shown in fig. 3a indicates that their trapped or passing condition corresponds to a transient state that passes after a time comparable to ∆τ LR . Also, notice that the anisotropy of the two populations (and for the whole population for that matter) is significant enough to drive IC waves unstable (see section 3.3), and therefore this can provide evidence for the source of the IC waves that we see. If this is the case, their interaction with ions is the source of the quick regulation of the anisotropy that we see in fig. 10a. Interestingly, under this scenario, the regulation of the pressure anisotropy of passing ions, which happens at the same time as that of the trapped ions, should also be due to the interaction with these IC waves, meaning that the IC waves interact with both populations of trapped and passing ions simultaneously, and therefore regulate the global ion anisotropy. We confirm that this is the case by looking at the evolution of the distribution functions of trapped and passing ions.
In the case of electrons, we observe a similar evolution in figure 10b. Initially, both trapped and passing electrons detach from their respective CGL evolution (dotted blue and red lines, respectively), and develop a significant anisotropy ∆P e > 0, which peaks at t · s ≈ 0.4. We also see that trapped electrons detach from their CGL evolution much earlier than passing electrons. This evolution then leads to the early burst of whistler waves, which quickly regulates the anisotropies of both trapped and passing electrons, driving them towards a more isotropic state (see section 3.2). As expected, the anisotropy of trapped electrons is higher than that of the passing electrons. After this process, and during ∆τ LR , the anisotropy of trapped electrons increases again, while that of passing electrons continues to decrease. This way, we see that trapped electrons build up a pressure anisotropy ∆P e > 0 that is also quickly regulated after ∆τ LR , converging to an anisotropy level similar to that of the general electron population. The anisotropy ∆P e < 0 of the passing electrons also gets regulated towards a similar anisotropy level during the same time. This evolution of trapped electrons also suggests that they become anisotropic enough to destabilize whistler waves, and therefore could be the source of the whistler activity observed at t · s > 0.6. We provide evidence of this by showing the evolution of the distribution function of electrons. Figure 11 shows the distribution functions of trapped and passing ions and electrons at three different times, t · s = 0.57, t · s = 0.61, and t · s = 0.75, spanning ∆τ LR and also part of the mirror's saturated stage. In the following we describe the evolution of each population. The distribution of trapped ions (figs. 11a, 11b, and 11c) shows a clear loss-cone like form at t · s = 0.57 (all ions outside the loss-cone), meaning that all trapped ions are effectively trapped in mirror troughs.
At this time, trapped ions have reached their maximum pressure anisotropy according to figure 10a.
Once IC waves are excited, they interact with both trapped and passing ions via pitch-angle scattering in a quasilinear fashion (Kennel & Engelmann (1966)). This diffusion process happens along paths of constant particle energy in the frame moving with the waves (see e.g. Squire et al. (2022)). We plot these contours in solid white lines in each plot of figure 11 as v 2 ⊥,j + (v ∥,j − ω/k ∥ ) 2 ≈ v 2 ⊥,j + v 2 ∥,j = const., since in a high-β scenario the phase velocity of an IC wave introduces only a small correction of order v A /v th,i ∼ 1/ √ β i . Additionally, the IC waves in our simulations are destabilized in both parallel and anti-parallel directions to B. We can see that the relaxation of the distribution function of trapped ions by the quasi-linear interaction with IC waves agrees very well with these paths, by looking at t · s = 0.61 and t · s = 0.75.
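A single pitch-angle scattering step along these contours can be sketched as follows (illustrative only: a rotation of the velocity in the frame moving at an assumed wave phase speed v_ph along B conserves the wave-frame energy (v ∥ − v ph ) 2 + v 2 ⊥ , while the lab-frame energy changes):

```python
import numpy as np

def scatter_in_wave_frame(v_par, v_perp, v_ph, dalpha):
    """Rotate the particle velocity by a pitch angle dalpha in the frame
    moving at the wave phase speed v_ph along B.  This preserves the
    quasilinear diffusion invariant (v_par - v_ph)**2 + v_perp**2."""
    u_par = v_par - v_ph                       # boost to the wave frame
    u = np.hypot(u_par, v_perp)                # wave-frame speed (invariant)
    alpha = np.arctan2(v_perp, u_par) + dalpha # new pitch angle
    return u * np.cos(alpha) + v_ph, u * np.sin(alpha)

# Hypothetical values: a resonant ion and a small phase speed v_ph ~ vA << vth.
v_par, v_perp, v_ph = 0.30, 0.10, 0.02
v2, w2 = scatter_in_wave_frame(v_par, v_perp, v_ph, 0.3)
```

The wave-frame energy before and after the step is identical to machine precision, while the lab-frame kinetic energy changes slightly; for v ph ≪ v th the contours are close to circles of constant lab-frame energy, as assumed in the white curves of figure 11.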
Figure 11. The distribution function f (v ∥,j , v ⊥,j ) of trapped and passing ions and electrons at three different times: t · s = 0.57 (first column), t · s = 0.61 (second column), and t · s = 0.75 (third column). The distribution function ftrapped(v ∥,i , v ⊥,i ) of the trapped ions is shown in the first row, fpassing(v ∥,i , v ⊥,i ) for the passing ions in the second row, ftrapped(v ∥,e , v ⊥,e ) for the trapped electrons in the third row, and fpassing(v ∥,e , v ⊥,e ) for the passing electrons in the fourth row. In all the plots, the solid white curves denote contours of constant particle energy in the frame moving with the waves: v 2 ⊥,j + (v ∥,j − ω/k ∥ ) 2 = const. An animation is available.
The distribution of passing ions (figs. 11d, 11e, and 11f) shows, on the one hand, a concentration of ions at low perpendicular velocities and relatively large parallel velocities, and it looks fairly symmetric in v ∥ . This is consistent with having untrapped ions mainly streaming along the mean magnetic field in both directions. On the other hand, the population of large parallel velocity also shows up at v ∥ /c ≈ 0.3 (see section 3.5). Interestingly, the passing ions also interact quasilinearly with IC waves, and this is particularly evident in their evolution. Indeed, we can clearly see how the large parallel velocity population of passing ions evolves along the contours of constant particle energy with excellent agreement at t · s = 0.61 and t · s = 0.75. We can understand the evolution of this population by looking at the gyroresonance condition ω − k ∥ v ∥ = ±ω c,i (eq. 6). If we look at the peak power at positive frequencies in the power spectrum shown in fig. 6c, we can estimate the frequency and wavenumber at which most of the power of IC waves resides: ω/ω init c,i ≈ 0.2, and ck ∥ /ω init p,i ≈ ±0.15. From eq.
(6) we can then estimate the parallel velocity of the ions interacting gyroresonantly with these IC waves, v ∥ = (ω ∓ ω c,i )/k ∥ , which gives v ∥,i /c ≈ 0.36 and v ∥,i /c ≈ −0.24, which falls in the range of the large parallel velocity population. The quasilinear evolution also happens for the population with smaller parallel velocity.
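These resonant velocities can be reproduced with a short calculation. The snippet below assumes ω c,i /ω p,i = v A /c ≈ 0.045, a value not quoted in this excerpt and chosen here only so that the arithmetic reproduces the quoted numbers; the resonance convention ω − k ∥ v ∥ = n ω c,i with n = ±1 is likewise taken as a plausible reading of eq. (6):

```python
def v_res(omega_over_wci, ck_over_wpi, n, va_over_c=0.045):
    """Resonant parallel velocity v_par/c from omega - k_par*v_par = n*omega_ci:
    v_par/c = (omega - n*omega_ci) / (c*k_par)
            = (vA/c) * (omega/omega_ci - n) / (c*k_par/omega_pi),
    using omega_ci/omega_pi = vA/c.  va_over_c = 0.045 is an assumed,
    illustrative value, not a parameter stated in the text."""
    return va_over_c * (omega_over_wci - n) / ck_over_wpi

# With omega/omega_ci ~ 0.2 and c*k_par/omega_pi ~ 0.15 (from fig. 6c):
print(v_res(0.2, 0.15, -1))  # n = -1 branch
print(v_res(0.2, 0.15, +1))  # n = +1 branch
```

The two branches give v ∥ /c ≈ 0.36 and −0.24, the values quoted above for the large parallel velocity population.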
The population of trapped electrons (figs. 11g, 11h, and 11i) shows a very similar evolution to that of trapped ions; the loss-cone like distribution is also apparent. The evolution of this distribution is also consistent with a quasilinear interaction, now between the electrons and whistler waves, driving the distribution towards isotropy along paths of constant particle energy, as can be seen at later times in figure 11.
Finally, the population of passing electrons (figs. 11j, 11k, and 11l) also shows a very similar evolution to that of the ions. The populated loss-cone shape of the distribution is also apparent, and we can see the quasilinear evolution of the distribution function along constant particle energy contours at later times.
This way, we have provided evidence for the source of both IC and whistler waves observed in our simulations. Once ions and electrons get trapped in regions of low magnetic field strength of mirror modes, they become significantly anisotropic with a loss-cone like distribution, which is able to destabilize parallel-propagating IC and whistler waves, respectively. These waves then interact with both populations of trapped and passing particles in a quasilinear fashion, driving trapped and passing ions and electrons alike towards a more isotropic state. Consequently, this mechanism can contribute to regulating the global anisotropy of ions and electrons, and can thus be a pathway for particle escape and consequent saturation of mirror modes (Kunz et al. (2014)).
MASS-RATIO DEPENDENCE
In this section, we compare simulations with different mass ratios, m i /m e = 8, 32, and 64, but with the same initial conditions for ions, as shown for runs b20m8w800, b20m32w800, and b20m64w800 in Table 1, although with somewhat different temperatures. We see that IC and whistler waves' signatures do appear in all three simulations, and thus they do not seem to present a strong dependence on mass ratio. Figure 12 shows the evolution of δB 2 ∥ (panel a) and δB 2 z (panel b) for the three runs. We can see a very consistent evolution of δB 2 ∥ in all three runs, meaning that m i /m e does not play a significant role on the early evolution and saturation of the mirror instability. Similarly, δB 2 z shows the same features in all three runs, especially during mirrors' secular growth and saturated stages (t · s ≈ 0.5 onwards). The early peak in δB 2 z at t · s ≈ 0.4 corresponding to the early whistler burst is also seen in the three runs, but more prominently in the simulation with m i /m e = 8. This is possibly due to an enhancement of this wave activity by the ions, which are able to weakly feel the presence of whistlers, as the mass separation is not very large. This effect disappears as the mass ratio increases, and the early whistlers only affect the electrons. More importantly, for t · s > 0.5, all three runs show a very similar evolution of δB 2 z . Figure 13 shows the evolution of the pressure anisotropy of ions (panel a) and electrons (panel b) for the same three runs. In the case of the ions, we can see an overall evolution that is very consistent in all three runs, both in early and late stages.
We can see a smaller anisotropy overshoot for the simulation with m i /m e = 8 at t · s ≈ 0.4, coincident with the enhancement seen in δB 2 z during the early whistler burst, suggesting that ions can weakly interact with the whistlers at this mass ratio, and consequently their anisotropy does not reach the same overshoot as in the rest of the runs. Nevertheless, all three runs display a very similar pressure anisotropy evolution afterwards, which is also well described by the best-fit threshold ∆P i ∝ β −0.45 i shown in fig. 3.
In the case of the electron pressure anisotropy ∆P e , we can also see a similar evolution overall in fig. 13b. The overshoot at t · s ≈ 0.4 is larger for decreasing mass ratios, possibly because the whistler amplitude required for efficient scattering decreases as m i /m e increases, as explained above. This means that, after ∆P e /P e,∥ has surpassed the threshold for efficient growth of the whistler modes, the simulations with larger m i /m e take shorter times to reach the whistler amplitude necessary to efficiently scatter the electrons, so the overshoot decreases for higher mass ratios. During late stages, we can see a very similar evolution of ∆P e in all three runs, which is even more evident for m i /m e = 32 and m i /m e = 64 (orange and green curves in fig. 13b), which essentially lie on top of each other.
Here we also see a very similar power distribution at both mass ratios, showing both left-hand and right-hand polarized waves (positive and negative frequencies, respectively). The peak power is also observed at the same frequencies and wavenumbers as in fig. 6 for both polarizations.
Figure 14. The power spectrum of δBz(ω, k ∥ ) + iδB ⊥ (ω, k ∥ ) at 0.5 < t · s < 0.7 for mi/me = 32 (run b20m32w800, left panel) and mi/me = 64 (run b20m64w800, right panel). Positive and negative frequencies show the power in left-hand and right-hand polarized waves, respectively.
This way, we can see that the linear and nonlinear evolution of the mirror instability and the late IC and whistler evolution are well captured in our simulations, and do not strongly depend on mass ratio.
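The polarization decomposition behind these spectra can be illustrated with a toy signal (a minimal sketch with a synthetic monochromatic wave, not the simulation diagnostics): Fourier transforming the complex combination B z + iB ⊥ places the two circular polarizations at opposite frequency signs.

```python
import numpy as np

# Synthetic circularly polarized signal: B_z + i*B_perp = exp(2*pi*i*f0*t).
# With numpy's FFT convention exp(-2*pi*i*f*t), this polarization appears
# at positive frequency; the opposite rotation sense would appear at -f0.
n = 1024
t = np.arange(n)
f0 = 0.05                              # cycles per time step (illustrative)
sig = np.exp(2j * np.pi * f0 * t)      # one circular polarization
spec = np.fft.fft(sig)
freqs = np.fft.fftfreq(n)
peak = freqs[np.argmax(np.abs(spec))]  # frequency of maximum power
```

The power concentrates at the positive-frequency bin nearest f0, so left-hand and right-hand waves can be separated simply by the sign of the frequency, as in figs. 6 and 14.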
DEPENDENCE ON INITIAL PLASMA β
We tested whether the IC and whistler waves' activity is present in simulations with β init i = 2 (i.e., total β init = 4) and β init i = 40 (i.e., total β init = 80), and compared them with our fiducial simulation at β init i = 20. We confirm that the mirror instability can develop in all simulations, and both IC and whistler waves do appear at nonlinear stages.
The power spectrum of δB z (ω, k ∥ ) + iδB ⊥,xy (ω, k ∥ ) is shown in figure 15, and we can see that it is similar among the three β i cases. In all three cases we see the power concentrated at ω ∼ 0 corresponding to mirror modes. In addition, we also see a concentration of power in right and left polarized waves, so both IC and whistler waves are also present, although their peak frequency changes. For the β init i = 2 case we see that the peak frequency is at ω/ω init c,i ≈ 0.5, whereas in the β init i = 40 case it shifts to smaller values, ω/ω init c,i ≈ 0.1. This shift in peak frequency can also be explained by the IC and whistler dispersion relations, analogous to our discussion in section 3.3. Figure 16 compares the evolution of δB 2 ∥ (i.e., mainly the development of the mirror instability) for the three runs with different initial β init (the other physical parameters are the same, see table 1). In all three cases we can see an exponential phase followed by the secular and saturated stages characteristic of the mirror instability, which develops earlier for higher initial β init , consistent with the smaller anisotropy threshold for the growth of the mirror instability at larger beta. The amplitude of δB 2 ∥ at the saturated stage is comparable for the β init = 20 and β init = 40 runs, and is smaller for the β init = 2 run, as also seen in previous works (e.g. Riquelme et al. (2015)).
Indeed, when we look at the evolution of δB 2 z , we can see that for both the β init = 20 and β init = 40 runs the evolution is similar: both display an early whistler burst at t · s ≈ 0.4, and an IC/whistler excitation stage (t · s ≈ 0.5 onwards) at almost the same amplitude. In the case of the β init = 2 run, we can see that the first exponential growth in δB 2 z at t · s ≈ 0.6 is consistent with an IC burst (see e.g. Ley et al. (2019)), after which we see the typical oscillation pattern that the excitation of late IC and whistler waves produces, from t · s ≈ 0.8 onwards, saturating at an amplitude similar to that of the rest of the runs, and displaying a very high-frequency oscillation.
In figure 17, we compare the evolution of the ion and electron pressure anisotropy plotted as a function of their parallel plasma β i for the three simulations with different initial β i . (In all our simulations the mean magnetic field strength is continuously increasing, so the particles' β i decreases over time; the simulations therefore evolve towards the left in fig. 17.)
In the case of the ions (fig. 17a), we can see a similar overshoot and subsequent regulation, but the overshoot occurs at a lower anisotropy value for increasing β i . This is consistent with the inverse β i dependence of the mirror instability threshold: mirror modes are excited earlier at higher β i , and therefore have relatively more time to regulate the anisotropy before it reaches a higher overshoot. Interestingly, the saturated stage of the ion pressure anisotropy is consistent with the theoretical IC threshold from Gary & Lee (1994) for γ IC /ω c,i = 10 −2 (see fig. 3a) in all three runs, suggesting a universality in the threshold that ∆P i /P ∥,i follows, as a consequence of the excitation of IC waves during mirrors' saturated stage. (In the case of the β init i = 40 run, however, it is less clear whether it can follow the above mentioned threshold at late stages, given the short duration of this run.) In the case of electrons (fig. 17b), we can also see that the overshoot is reached at lower values of the pressure anisotropy ∆P e /P ∥,e for increasing initial beta, consistent with an inverse-β i dependence now of the whistler instability anisotropy threshold. It is interesting to note that after the anisotropy overshoot, and during these late stages, the electron pressure anisotropy tends to be significantly smaller than the expectation from the threshold for the whistler instability in the higher initial β i runs (β init i = 20 and β init i = 40), irrespective of the generation of pressure anisotropy that the continuous amplification of the magnetic field produces as a consequence of the shear motion in the simulation. Notice, however, that in low magnetic field regions the electron pressure anisotropy is larger than the whistler threshold for growth rate γ = 0.01ω c,e from Gary & Wang (1996), and therefore enough to excite whistlers (fig. 8).
This shows the key role played by mirror-generated magnetic troughs in creating the conditions to excite whistlers despite the fact that, globally, the pressure anisotropy may not be enough to make these waves unstable. On the other hand, in the β init i = 2 run, ∆P e /P ∥,e continues to weakly grow because of the continuous B amplification, and it does so following a marginal stability state well described by the threshold of the whistler instability ∆P e /P ∥,e ∝ β −0.55 ∥,e (Gary & Wang (1996)), consistent with previous works at lower β ∥,e (Ahmadi et al. (2018)).
The persistence of the late IC and whistler activity at different initial plasma β i suggests that this phenomenon is a natural consequence of the excitation of the mirror instability. In other words, in a weakly collisional plasma with an initial plasma β i sufficiently high to effectively excite the mirror instability, the excitation of IC and whistler waves at its late, saturated stages seems to be ubiquitous. These waves are thus intrinsically tied to the nonlinear evolution of the mirror instability, and provide an interesting physical connection between ion-scale instabilities and electron-scale physics.
In this work, we did not vary the scale-separation ratio ω c,i /s. In an environment like the ICM, turbulent eddies could drive the plasma locally through shear motions at kinetic scales with a wide range of frequencies s, and we typically expect larger kinetic energy at low frequencies (i.e., higher ω c,i /s). For larger values of ω c,i /s, previous works have shown that mirror modes can develop comparatively earlier in the simulations, therefore having relatively more time to saturate, and reaching similar amplitudes (Kunz et al. (2014); Melville et al. (2016); Riquelme et al. (2016); Ley et al. (2023)). In this sense, we would expect a similar late excitation of IC and whistler waves once mirror modes have reached a saturated stage.
The excitation of IC and whistler waves at saturated stages of the mirror instability modulates its nonlinear evolution, and therefore could affect transport processes in the ICM in which mirror modes come into play.
Particularly important is the pressure anisotropy regulation in the context of collisionless heating and dissipation via magnetic pumping in the ICM (Kunz et al. (2011); Ley et al. (2023)). The marginal stability level that the ion pressure anisotropy reaches at the saturated stage, ∆P i ∝ β −0.45 ∥,i (see fig. 3a; also correctly pointed out by Sironi & Narayan (2015)), is larger than the usual mirror threshold 1/β ∥,i by a factor ∼ β 0.55 ∥,i , which directly translates into an excess heating of the same order. Indeed, given that β in the ICM is estimated to be β ∼ 10 − 100, and that the heating rate is directly proportional to the pressure anisotropy, this could imply a heating rate several times larger than predicted from the mirror threshold, enhancing the efficiency of the mechanism by draining more energy from the turbulent motions that drive the pumping.
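The order-of-magnitude argument can be written out explicitly (a sketch of the scaling only): since the saturated anisotropy scales as β −0.45 instead of the mirror-threshold β −1 , the heating rate, which is proportional to the anisotropy, is enhanced by ∼ β 0.55 .

```python
def excess_heating_factor(beta):
    """Ratio of the saturated-anisotropy heating to the mirror-threshold
    expectation: (beta**-0.45) / (beta**-1) = beta**0.55."""
    return beta**0.55

# Over the ICM-relevant range beta ~ 10-100:
for beta in (10, 100):
    print(beta, excess_heating_factor(beta))
```

The enhancement runs from a factor of a few at β ∼ 10 to more than an order of magnitude at β ∼ 100, consistent with the "several times larger" estimate above.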
The structures of high and low magnetic field that mirror modes produce in the saturated stage seem to be persistent in time, and their energy δB 2 ∥ does not decrease as long as the amplification of the mean magnetic field B is maintained (see fig. 2g). Even when this amplification is halted or reversed, the decay timescales of mirror modes are long compared to the typical ion gyroperiod (Melville et al. (2016); Ley et al. (2023)). This implies that the trapping process of ions and electrons also persists, along with the excitation of secondary IC and whistler waves. This source of whistler waves can have interesting implications in the context of ICM thermal conduction models like whistler-regulated MHD (Drake et al. (2021)), as they can dominate the electron scattering in the presence of mirror modes.
This source of whistler waves associated with mirror modes can also influence the suppression of the effective heat conductivity in the plasma even in the absence of heat fluxes (Komarov et al. (2016); Riquelme et al. (2016); Roberg-Clark et al. (2016)), and this can have consequences for larger-scale instabilities such as the magneto-thermal instability (MTI, Balbus (2000); Berlok et al. (2021); Perrone & Latter (2022a,b)).
Future work aimed towards 3D fully kinetic PIC simulations would be required to have a full understanding of the consequences of the mirror instability and secondary IC/whistler excitation in these high-β plasmas.
We thank Aaron Tran for providing the dispersion solver used in this work, and we thank Lorenzo Sironi, Jonathan Squire and Alexander Schekochihin for useful comments and discussion. F.L. acknowledges support from NSF Grant PHY-2010189. M.R. thanks support from ANID Fondecyt Regular grant No. 119167. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant No. ACI-1548562. This work used the XSEDE supercomputer Stampede2 at the Texas Advanced Computer Center (TACC) through allocation TG-AST190019 (Towns et al. (2014)). This research was performed using the compute resources and assistance of the UW-Madison Center For High Throughput Computing (CHTC) in the Department of Computer Sciences. This research was partially supported by the supercomputing infrastructure of the NLHPC (ECM-02).
"year": 2024,
"sha1": "ca5093dbfb97d5cc6ba1c455a08e6c1c7602167d",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/ad2455/pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "ca5093dbfb97d5cc6ba1c455a08e6c1c7602167d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Understanding the impact of colorectal cancer education: a randomized trial of health fairs
Background Regular screening for colorectal cancer (CRC) reduces morbidity and mortality from this disease. A number of factors play a role in the underutilization of CRC screening; populations with the lowest CRC screening rates are least likely to be aware of the need for screening or have knowledge about screening options. The overall purpose of this project was to assess two methods for increasing knowledge about CRC in a health fair context: one, by using a health educator to provide CRC information at a table, or two, to provide a tour through a giant inflatable, walk-through colon model with physical depictions of healthy tissue, polyps, and CRC. Methods We participated in six community health fair events, three were randomized to incorporate the use of the inflatable colon, and three used a standard display table method. We used a pre/post-design to look for changes in knowledge about CRC before and after participating in a health fair. We examined descriptive statistics of participants using frequencies and proportions. McNemar’s test for paired binary data was used to test whether there were significant differences in the distribution of correct answer percentage from pre to post and from pre to follow up. Linear regression (GEE) was used to investigate whether there was a significant difference in the change from pre- to post-intervention in the percentage of correct answers on knowledge of tests available to detect CRC and awareness of risk factors for CRC between participants at sites with the inflatable colon compared to participants at sites without the inflatable colon. Results Participants (n = 273) were recruited at the six health fairs. Participants in health fairs with the inflatable colon had higher knowledge at post-test than participants in health fairs with tabling activities, that is, without the inflatable colon; however, the difference was not significant. 
One month follow-up after each health fair showed virtually no recollection of information learned at the health fairs. Conclusions The use of an inflatable colon may be an innovative way to help people learn about CRC and CRC screening; however, it is not significantly more effective than conventional table display methods. Further research is needed to associate intention to obtain screening after touring the inflatable colon with actual screening. Future research could explore ways to better retain knowledge at long-term follow-up.
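The paired pre/post comparison described in the Methods can be sketched with McNemar's test on hypothetical discordant-pair counts (not the study's data; the exact two-sided binomial p-value is shown alongside the uncorrected chi-square statistic):

```python
from math import comb

def mcnemar(b, c):
    """McNemar's test for paired binary outcomes, where b = participants who
    went incorrect -> correct from pre- to post-questionnaire and c = those
    who went correct -> incorrect.  Returns the chi-square statistic (no
    continuity correction) and the exact two-sided binomial p-value."""
    chi2 = (b - c)**2 / (b + c)
    n, k = b + c, min(b, c)
    p = min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5**n)
    return chi2, p

# Illustrative counts: 15 participants improved on a knowledge item, 5 got worse.
chi2, p = mcnemar(15, 5)
```

Only the discordant pairs enter the test; participants who answered the same way pre and post carry no information about change, which is what makes McNemar's test appropriate for the paired pre/post design described here.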
Background
The United States (US) Preventive Services Task Force, the American Cancer Society and the American College of Physicians have established national guidelines that recommend colorectal cancer (CRC) screening for average-risk adults starting at age 50 [1]. Although there have been general increases in CRC screening in the last two decades [2], there are disparities in CRC screening, as well as in CRC incidence and mortality [3]. Research indicates that CRC screening is especially low in minority and low socio-economic status (SES) groups; some research suggests that these populations have both low awareness of CRC and low knowledge about CRC screening [4][5][6][7].
Health fairs are a common health promotion strategy used in community settings to disseminate information about public health topics to a large segment of a population. Health professionals, or trained organizational representatives and volunteers, sit at a table, offer educational materials and interactive activities, and are available to answer questions. When resources are available, health screening tests such as oral exams, blood pressure checks, cholesterol tests, diabetes tests, HIV tests, and mammography may also be offered at these health fair events [8]. There is inconclusive evidence regarding the effectiveness of health screenings done at health fairs. One study in 1985 found that screenings may generate false positives in healthy people, and false negative results may provide a false sense of security for people at risk for disease [9]. Interestingly, a study published in 1991 found that among 303 adult health fair participants, 47 % reported that obtaining screening tests was the sole reason for attending the event [10]. Another study published in 2006 found that screenings may offer financial value for people with little or no insurance [11]. Nevertheless, most recently, community health fairs that offer screenings have been found to be a culturally appropriate way to reach underserved Hispanics [12].
Studies looking at the impact of health fairs have been few. In 2001, one published study did a formative evaluation of the planning and implementation of a health fair and found the event to be a success. Unfortunately, outcome evaluation was beyond the scope of that project [13]. In 2005, a study found lifestyle changes made among participants in rural farm health fairs [14]. Most recently, in 2015, senior wellness fairs were found to be an effective tool to help students obtain skills and knowledge in providing health promotion information to older adults [15].
The Center for Community Health Promotion (CCHP) of the Fred Hutchinson Cancer Research Center (FHCRC) participates in many health fairs throughout three counties in Eastern Washington. CCHP promotores offer instruction about CRC and CRC screening at these events using an inflatable colon. When the inflatable colon cannot be accommodated at an event, the promotores use more conventional materials, such as flip charts, tabletop displays, brochures, and videos. The inflatable colon has been demonstrated to be efficacious in increasing knowledge and intention to be screened, as well as actual screening behavior using a one group pre-test/post-test design in different geographic areas of the US [16][17][18]. Based on findings from a local prior study where we learned that the inflatable colon was an innovative way to learn about CRC [16], the project team was interested in learning about the success of the inflatable colon relative to the more conventional materials in increasing CRC awareness and knowledge as well as intention to be screened. The overall purpose of this project was to assess two different methods for increasing knowledge about CRC in a health fair context: one, by using a health educator to provide CRC information at a table, or two, to provide a tour through a giant, inflatable, walk-through colon model with physical depictions of healthy tissue, polyps, and CRC.
Setting
One of 23 Community Network Program Centers (CNPCs) in the US, the CCHP of the FHCRC is located in a rural, agricultural area in Eastern Washington State east of the Cascade Mountain range. Many communities in this area are majority-minority (Hispanic) towns with those of Hispanic ethnicity (primarily Mexican) forming 67 % of the population; the population in general is underserved in terms of poverty, educational status, and having insurance [19]. The CCHP seeks to reduce health disparities in this geographic area, especially among low socio-economic status Hispanics. Based on findings from two Town Hall Forums conducted by the CCHP in April 2011, community members reported being most concerned about CRC compared to other cancer sites. Thus, the CCHP decided to focus on CRC education and raising awareness about CRC screening in the first years of the CNPC. The six communities in which the health fairs were held are very similar with a high proportion of individuals (60 to 70 %) of Hispanic origin.
Intervention
After discussion with community members, the CCHP purchased a giant colon; this is a walk-through inflatable colon (10 ft high, 12 ft wide, and 20 ft long) that contains simulated normal tissue, polyps, cancers, and advanced cancers. Six display signs inside the colon explain the progression of cancer from normal tissue to advanced stage cancer and highlight the importance of screening and early detection of CRC. Signs were created in English and Spanish. The inflatable colon is used at community events to increase awareness and knowledge of CRC. Tours through the colon are led by trained promotores (lay health workers) who are staff of the CCHP. The 12-min tours emphasize what can be done to reduce the risk of CRC.
CCHP staff worked with community partners to participate in the six community health fairs between May and August of 2013. The six health fairs were randomized so that the inflatable colon appeared at three of those health fairs, while the other three held the standard tabletop information and materials. Trained CCHP promotores participated in the health fairs by either staffing a table with CRC and CRC screening information in English and Spanish, or facilitating a tour of the inflatable colon. A description of the colon tours is available elsewhere; briefly, promotores led tours in the giant colon explaining the advantages of CRC screening [16]. At the display tables, promotores talked to participants about CRC addressing topics such as risk factors and the importance of screening tests.
Measurement
As community members arrived at each health fair, adults 18 and older were invited to participate in this evaluation. If they were interested in participating, individuals were given a pre-numbered packet containing a pre- and a post-questionnaire. The packet also included a passport to keep track of the tables and displays they visited at the health fair. Participants in this evaluation completed the pre-questionnaire when they picked up their packet. When they handed in their passport, they completed the post-questionnaire. Participants completed the pre- and post-questionnaires with pencil and paper in their language of choice (English or Spanish). CCHP promotores were available to help read questionnaires to participants who needed assistance. When they returned the post-questionnaire, they were given a water bottle as an incentive. When participants completed the post-assessment questionnaire, they were asked if they were interested in completing a one-month follow-up questionnaire via the telephone. If they consented, CCHP promotores followed up with a phone call approximately one month later to ascertain whether the participant had taken steps to be screened.
The protocol, participant consent language, and all questionnaires for this intervention were approved by the FHCRC Institutional Review Board.
Study measures
We measured participants' knowledge of CRC and CRC screening, past screening behavior, and access to health care. Demographic variables collected on the pre-test included gender, age, race/ethnicity, whether they had health insurance, and whether they had a regular health clinic. Awareness of screening was assessed by pre-test and post-test responses to yes/no questions where respondents were asked if they knew what a fecal occult blood test (FOBT), sigmoidoscopy, and colonoscopy were. Knowledge was further assessed with nine yes/no questions at pre-test and post-test; the first asked respondents if they thought most patients can survive CRC if it is found early and removed, and the remaining eight asked whether particular factors are associated with an increased risk of CRC (getting older, a diet that doesn't have many fruits and vegetables, a family history of CRC, a diet high in fat and low in fiber, smoking, having type 2 diabetes, lack of physical activity, and being overweight or obese).
Study data were collected and managed using Research Electronic Data Capture (REDCap) electronic data capture tools hosted at FHCRC [20]. REDCap is a secure, web-based application designed to support data capture for research studies, providing 1) an intuitive interface for validated data entry; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for importing data from external sources.
Analysis
We first examined descriptive statistics of participants using frequencies and proportions. The percentage of correct answers for the twelve awareness/knowledge questions was calculated for the pre- (entrance) and post- (exit) tests by randomized location (sites with the inflatable colon vs. sites without). McNemar's test for paired binary data was used to test whether there were significant differences in the distribution of a correct answer on individual questions from pre- to post-test and from pre-test to follow-up (Table 2).
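As a hedged illustration of the McNemar computation on paired pre/post answers to a single question, the sketch below uses the chi-square form without continuity correction; the discordant counts are hypothetical, not the study's data.

```python
import math

# Hypothetical discordant-pair counts for one question:
b = 25  # incorrect at pre-test -> correct at post-test
c = 5   # correct at pre-test -> incorrect at post-test

def mcnemar(b, c):
    # Only the discordant pairs enter the statistic (1 degree of freedom,
    # no continuity correction).
    chi2 = (b - c) ** 2 / (b + c)
    # Survival function of chi-square(1): P(X > chi2) = erfc(sqrt(chi2/2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = mcnemar(b, c)
print(round(chi2, 2), p < 0.001)
```

Most statistics packages, including SAS, also offer an exact binomial version of the test, which is preferable when the number of discordant pairs is small.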
We used linear regression to investigate whether there was a significant difference in the change in percentage of correct answers from pre- to post-test (Table 3) between participants at sites with the inflatable colon and participants at sites with tables only (i.e., without the inflatable colon). In this analysis, the percentage correct for awareness of the screening tests available and the percentage correct for knowledge of risk factors for CRC were calculated for each participant pre- and post-intervention, and the difference in these percentages from pre- to post-intervention was used as the dependent variable in a linear regression model, with a covariate for randomization status (inflatable colon vs. information tables). Generalized estimating equations (GEE) were used to account for intra-site correlations.
The difference in the change in correct answers for both knowledge and awareness was estimated using the linear model DP = β0 + β1·x1 + ε, where DP is the change in the percentage of questions answered correctly from pre- to post-intervention and x1 is an indicator variable coded "1" for participants who received information via the inflatable colon and "0" for those who received information from tables. Hence, β1 is the coefficient of interest for evaluating the effect of receiving information on CRC via the inflatable colon. All statistical tests were two-tailed with a significance level of 0.05. Analysis was performed with SAS for Microsoft Windows (version 9.3, SAS Institute Inc.).
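With a single binary covariate, the least-squares estimate of β1 in this model reduces to the difference in mean change between the two arms; the GEE machinery then supplies cluster-robust standard errors. A stdlib-only sketch with hypothetical values (chosen to mimic gains of roughly 15 and 33 percentage points):

```python
from statistics import mean

# Hypothetical per-participant changes in percentage correct (DP) and
# arm indicator x1 (1 = inflatable colon, 0 = information tables).
# These numbers are illustrative, not the study's data.
dp = [30, 40, 25, 38, 10, 20, 12, 18]
x1 = [1, 1, 1, 1, 0, 0, 0, 0]

# OLS fit of DP = b0 + b1*x1 + e with a binary x1 reduces to group
# means: b0 is the mean change in the table arm, b1 the difference.
b0 = mean(d for d, x in zip(dp, x1) if x == 0)
b1 = mean(d for d, x in zip(dp, x1) if x == 1) - b0
print(b0, b1)  # -> 15 18.25
```

The point estimate is unaffected by the clustering correction; accounting for intra-site correlation changes the standard error of b1 and hence the p-value.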
Results
During the four-month intervention, 273 participants completed pre- (entrance) questionnaires, 247 participants finished pre- and post- (exit) questionnaires, and 205 finished pre-, post- and one-month follow-up questionnaires. Characteristics of the 273 participants are presented in Table 1. The 273 participants were recruited from the six health fairs: three health fairs with the inflatable colon (n = 134, 49.1 %) and three health fairs without the inflatable colon, that is, only with information tables (n = 139, 50.9 %). The majority of participants were female (208, 76.2 %). Of the four age categories, the largest group of participants was 40 to 49 years old (84, 30.8 %).
In Table 2, performance on individual CRC knowledge questions was examined for the 247 participants who completed pre- and post-tests. Among participants at health fairs without the inflatable colon, there was a significant improvement in knowledge of CRC (from answering incorrectly at pre-test to answering correctly at post-test) for three questions: "Know Fecal Occult Blood Test (FOBT) is available for CRC" (p < 0.01), "Know Sigmoidoscopy is available for CRC" (p < 0.01), and "Know Colonoscopy is available for CRC" (p < 0.01). Among participants at health fairs with the inflatable colon, there was a significant improvement in knowledge of CRC for six questions: "Know Fecal Occult Blood Test (FOBT) is available for CRC" (p < 0.01), "Know Sigmoidoscopy is available for CRC" (p < 0.01), "Know Colonoscopy is available for CRC" (p < 0.01), "Getting older increases the risk of CRC" (p < 0.01), "A diet that doesn't have many fruits and vegetables increases the risk of CRC" (p = 0.02), and "A family history of colorectal cancer increases the risk of CRC" (p = 0.01).
The results of the linear regression analysis for the change from pre- to post-intervention in the percentage of those answering awareness and knowledge questions correctly are described in Table 3. For participants who received information using the inflatable colon, the increase in the percentage correct on awareness of the screening tests available was 33 %, compared to 15 % for those at tabling. Although the gain was more than double in the inflatable colon group, the difference between the two groups in the awareness gain was not statistically significant (p value = 0.17) based on robust standard errors from GEE estimates accounting for intra-site correlation.
In the analysis of change in knowledge of risk factors for CRC, participants who received information using the inflatable colon showed an increase in the percentage correct of 7 % compared to participants who received information through tabling, where the increase in the percentage correct on knowledge of risk factors for CRC questions was 2 %. Again, however, the difference between the two groups in the change in knowledge of 5 % was not statistically significant (p value = 0.09) based on robust standard errors from GEE estimates.
Discussion
In this randomized study, the use of an inflatable colon increased knowledge about CRC and CRC screening and appeared more effective at increasing knowledge than conventional materials, such as flip charts, tabletop displays, brochures, and videos; however, these observed differences were not statistically significant, possibly due to sample size. We also learned that the gain in awareness of CRC screening methods differed by 18 percentage points between arms, but this also was not statistically significant. While there are trends toward a benefit of the inflatable colon, our sample size may not have been sufficient to detect significant differences. Intra-site correlation may have reduced our power to detect significant differences between the two groups. Future studies with more participants per study site (e.g. health fair) or more sites would improve power to detect differences in the use of the inflatable colon to educate individuals about CRC.
This study adds to the body of evidence that educational interventions which vary the type of learning methods (visual, text, and audio) are no more effective at increasing comprehension and recall than interventions that include only written/text materials [21]. In this small study of two clusters, both methods had similar outcomes in terms of awareness of CRC screening methods and in overall CRC knowledge [22][23][24]. We hypothesized that the inflatable colon, a visual and interactive display, would be more likely to increase CRC knowledge and awareness than an information table with conventional materials. However, even when combined with a walk-through tour, the inflatable colon was not significantly more effective than conventional materials. Data from the one-month follow-up questionnaires indicated that it did not matter how participants learned about CRC: participants in neither arm (inflatable colon or tabletop information and materials) seemed to retain the knowledge one month after the health fair event. When examining the change in knowledge from pre-test to one-month follow-up, none of the questions showed significant improvement except one, "A diet that doesn't have many fruits and vegetables increases the risk of CRC" (p < 0.01). This was somewhat discouraging. One factor that may contribute to this is the use of a brief telephone call for the follow-up questionnaire, compared with promotor(a)-led personal questionnaires at the pre- and post-test stages. Another reason may be that the one-time, short interaction with promotores, either through the inflatable colon tour or at an information table, is not long enough to enable participants to retain long-term understanding about CRC. It may be that more exposure to CRC information increases knowledge retention and likelihood to be screened [25]. It may also be that an intervention after participation in the health fair is needed to have an impact on knowledge and behavior [26,27].
Another explanation may be that not only health literacy, but also participants' cognitive abilities, such as working memory and long-term memory, affect their ability to recall information [28]. Given that this population is of low socioeconomic status, it may be that their cognitive abilities are affected by the stressors of their daily lives.
In the US, racial differences in practices, knowledge, and barriers related to CRC screening exist [29]. There is evidence to show that literacy and knowledge regarding cancer may affect participation in prevention [30][31][32]. When compared to non-Hispanic whites, minorities are more likely to have inaccurate knowledge and beliefs regarding CRC, as well as increased perceived barriers regarding CRC screening [33]. More research is needed regarding interventions that address barriers to CRC screening among Hispanics, the fastest growing population in the US. It is imperative to reinforce the importance of CRC screening and encourage age-eligible participants to obtain CRC screening, but equally important to address barriers to screening [34,35]. Barriers to colorectal cancer screening among Hispanics include factors such as fatalism about cancer survival, fear of the test, cost, low literacy and low level of education, lack of awareness about screening, and lack of provider recommendation [36,37]. With the advent of healthcare reform in the US, cost may become less of an issue for documented Hispanics.
Strengths and Limitations
A strength of this study is the randomized control design. The study also has some limitations. There may have been bias in the self-selection of the sample; those choosing to attend a health fair may differ from the general population. Our sample also included participants younger than 50 years of age, who were not yet age-eligible for CRC screening. We asked participants if the health fair helped them decide to visit the doctor for a check-up or health screening, but we do not know if intention predicted actual behavior. Although we asked about CRC screening history among study participants, the data were self-reported. We learned in a previous study that if education is coupled with access to a free fecal occult blood test, participants are very likely to comply with CRC screening [16]; however, for this study, we did not have resources to offer free or low-cost CRC screening for participants 50 and older who had not been screened. Finally, it is worth mentioning that the time from pre- to post-questionnaire at the health fairs ranged from 30 to 120 min. Information may have still been "fresh" in participants' minds, which would make it easier to recall at the post-questionnaire compared with the one-month follow-up questionnaire.
Conclusions
Both the use of an inflatable colon and tabling to instruct participants at health fairs were effective in changing participants' awareness of CRC screening and knowledge about CRC. There were more significant changes in CRC knowledge among participants who took an inflatable colon tour than among those who did not; however, overall, the differences were not significant. Further research is needed to link intention to obtain CRC screening after learning about CRC with actual CRC screening. Future research could also explore ways to better retain CRC knowledge at long-term follow-up.
The Pseudomonas aeruginosa type III secretion translocator PopB assists the insertion of the PopD translocator into host cell membranes
Many Gram-negative bacterial pathogens use a type III secretion system to infect eukaryotic cells. The injection of bacterial toxins or protein effectors via this system is accomplished through a plasma membrane channel formed by two bacterial proteins, termed translocators, whose assembly and membrane-insertion mechanisms are currently unclear. Here, using purified proteins we demonstrate that the translocators PopB and PopD in Pseudomonas aeruginosa assemble heterodimers in membranes, leading to stably inserted hetero-complexes. Using site-directed fluorescence labeling with an environment-sensitive probe, we found that hydrophobic segments in PopD anchor the translocator to the membrane, but without adopting a typical transmembrane orientation. A fluorescence dual-quenching assay revealed that the presence of PopB changes the conformation adopted by PopD segments in membranes. Furthermore, analysis of PopD's interaction with human cell membranes revealed that PopD adopts a distinctive conformation when PopB is present. An N-terminal region of PopD is only exposed to the host cytosol when PopB is present. We conclude that PopB assists with the proper insertion of PopD in cell membranes, required for the formation of a functional translocon and host infection.
Pseudomonas aeruginosa is an opportunistic pathogen that poses severe threats to immunocompromised individuals and hospitalized patients because of its ability to develop resistance to antibiotics. Like many bacterial pathogens, P. aeruginosa has been shown to exploit the type III secretion (T3S) system to establish infection (1,2). T3S is activated upon direct cell contact (3), and functions as a conduit through which effector proteins are secreted (4). P. aeruginosa uses the T3S system to transport up to four different effector proteins (ExoT, ExoY, ExoS, or ExoU) (5) into the target cells to trigger apoptosis, disrupt the actin cytoskeleton, and cause cell death. Insertion of the translocon and pore formation have been recently shown to have downstream effects resulting ultimately in modifications to the host epigenome (6). Because of such a critical role in pathogen infection, the T3S system constitutes an excellent target for the development of novel therapeutic agents (7).
The T3S system consists of a multimeric protein complex that can be divided into four major structural elements: (i) a cytosolic platform that delivers and sorts proteins to be secreted, (ii) a basal body that spans the two bacterial membranes and the periplasmic space, (iii) a hollow needle that extends more than 50 nm from the surface of the outer membrane, and (iv) a translocon complex that is required for protein translocation across the target cell plasma membrane. Structural information is available for the T3S basal body and needle (the injectisome) (8), which was visualized together with the cytosolic platform in situ for different pathogens including Yersinia enterocolitica (9), Shigella flexneri (10), and Salmonella typhimurium (11). However, structural information on the translocon complex, which is essential for protein translocation into the host cytosol, has remained elusive.
In P. aeruginosa, it has been hypothesized that the translocon is formed by two T3S-secreted proteins, PopB and PopD (12). Bacteria lacking PopB or PopD lose the ability to translocate effector proteins into the target cell, despite the fact that proteins are still being secreted through the needle. PopB and PopD have been detected on mammalian cell membranes after incubations with P. aeruginosa (12), and both proteins can bind and oligomerize into homo- or hetero-complexes with discrete stoichiometry on liposomal membranes (13). However, how PopB and PopD interact with membranes and the mechanism behind their insertion remain largely unknown.
In this work, we provide specific insights into the interaction of the PopD translocator with membranes and the mechanism of PopD insertion into cell membranes. PopB and PopD are inserted into membranes as integral membrane proteins. Analysis of the primary sequence of PopD reveals one hydrophobic segment long enough to cross the lipid bilayer. Here we identified a second conserved segment in PopD that could become a transmembrane helix upon protonation of its acidic residues. Combining site-specific fluorescent labeling and a series of biophysical techniques, we found that both segments interact with the membrane but do not adopt a typical transmembrane orientation in the PopB and PopD hetero-complexes reconstituted in liposomes. Interaction with PopB not only redirected the oligomerization of PopD from homo-oligomers to hetero-oligomers, but also changed the conformation adopted by these membrane-interacting segments. To specifically study the assembly of P. aeruginosa translocators in mammalian cells, we established a procedure to isolate translocators inserted into cell membranes. Using this method we demonstrated that PopD associated with cell membranes also underwent a structural rearrangement when PopB was present. Together, our findings provide an explanation, from the perspective of protein insertion, for the requirement of both PopB and PopD to form a functional translocon.
Protonation of acidic residues in PopD reveals a second potential transmembrane segment
We have shown that the association of purified PopB and PopD with liposomal membranes was facilitated by incubations at acidic pH. We have also shown that the presence of PopB promotes PopD binding to membranes at higher pH, but the reasons for this low-pH requirement are unknown (13). Analysis of the primary sequence of PopD showed only one hydrophobic segment long enough to cross the lipid bilayer (Leu119–Val137, or H1, Fig. 1A). Interestingly, a second putative transmembrane helix segment (Met63–Phe81, or H2) appeared when the hydropathy analysis was performed with protonated Asp and Glu residues (simulating the acidic conditions required for membrane binding, Fig. 1B). The hydropathy of H2 increased 13.77 kcal/mol after protonation, as calculated using the Wimley-White octanol scale, or 7.39 kcal/mol when using the interfacial scale (14). The observed increase of hydropathy in H2 would indicate a more favorable partition into the lipid bilayer, and this could explain the increased membrane association observed at acidic pH. Inspection of other T3S PopD homologues showed a similar pH-sensitive hydrophobicity for this segment, suggesting that this characteristic has been conserved among T3S systems (Fig. S1). We reasoned that elimination of the negative charges present in this segment would promote the binding and insertion of PopD into membranes at higher pH. The simultaneous modification of E69A, D71A, and E75C in PopD shifted the pore formation activity to higher pH, indicating that protonation of these acidic residues could facilitate protein-membrane interaction (Fig. 1C). The increase in the pH range at which PopD E69A/D71A/E75C perforated model membranes was modest, and we could not discard that other factors may be required to efficiently assemble a translocon complex into the membrane at neutral pH, for example, the association of PopD with PopB.

Figure 1. Identification of a second potential transmembrane segment after protonation of acidic residues in PopD. A, schematic of the primary structure of the P. aeruginosa T3S translocator PopD. Predicted hydrophobic segments 1 (H1) and 2 (H2) are shown. The segment H2, whose hydrophobicity increased upon protonation of acidic residues, is underlined. B, hydropathy plot of PopD before (black) and after neutralization of acidic residues (red). The graph was generated using Membrane Protein Explorer (MPEx) with a 19-amino acid sliding window. ΔG was defined as the free energy to transfer amino acids from the lipid bilayer to water. The position of segment H2 is indicated with a line. C, pore formation activity of WT PopD and PopD E69A/D71A/E75C at the indicated pH. Pore formation was determined as the fraction of encapsulated Tb(DPA)3^3- quenched by EDTA as described previously (37). PopD was incubated with liposomes at 20–23 °C for 20 min. The protein:lipid ratio was 1:1000. The means from two independent experiments are reported with error bars corresponding to the range. D, glutaraldehyde cross-linking of PopD homo-oligomers and PopB/PopD hetero-oligomers formed on liposomes. PopD alone or premixed with an excess of PopB was incubated with liposomes in buffer B for 20 min at a protein:lipid ratio of ~1:5000. Proteoliposomes were pelleted and subjected to immunoblotting using an anti-PopD antibody. Expected molecular masses for PopD n-mers (left) and PopD:PopB n-mers (right) were estimated using the molecular masses of the PopD (31.3 kDa) and PopB (40.1 kDa) monomers.
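The windowed hydropathy scan underlying this analysis (a 19-residue sliding window of summed transfer free energies, with acidic residues optionally protonated) can be sketched as follows. The per-residue values below are illustrative placeholders, not the published Wimley-White scales:

```python
# Illustrative per-residue transfer free energies (kcal/mol); these
# are placeholders, NOT the published Wimley-White values.
scale = {"L": -0.6, "V": -0.5, "I": -0.6, "F": -0.7, "M": -0.4,
         "A": -0.2, "G": 0.0, "S": 0.1, "E": 2.0, "D": 2.0}
# Protonating Asp/Glu (low pH) removes most of their transfer penalty.
protonated = dict(scale, E=-0.1, D=-0.1)

def window_dg(seq, values, w=19):
    """Summed transfer free energy for each w-residue window."""
    return [sum(values.get(a, 0.0) for a in seq[i:i + w])
            for i in range(len(seq) - w + 1)]

# A toy acidic stretch flips from unfavorable (positive DeltaG) to
# favorable (negative) once its acidic residues are protonated,
# mimicking the behavior of segment H2.
seq = "LLEVLDVLLEVLDVLLEVL"
print(window_dg(seq, scale)[0] > 0, window_dg(seq, protonated)[0] < 0)  # -> True True
```

MPEx applies the same windowing idea with the calibrated octanol and interfacial scales, which is how the 13.77 and 7.39 kcal/mol shifts for segment H2 were obtained.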
Formation of PopB and PopD heterodimers leads to stable membrane-inserted hetero-complexes
PopB alters the pH-dependent binding of PopD to membranes, and as mentioned above, the interaction of PopD with PopB modifies the stoichiometry of the formed oligomers (13). This suggested that the formation of a PopB-PopD heterodimer could be responsible for the change in the stoichiometry of the complexes; however, the existence of such a heterodimer has remained elusive. To detect the presence of a PopB-PopD heterodimer, PopD was incubated with liposomes with or without PopB, and the resulting oligomers were reacted with the nonspecific cross-linker glutaraldehyde. The presence of cross-linked proteins was detected by immunoblotting using anti-PopD antibodies. When PopD was allowed to form homo-oligomers in liposomes, cross-linking captured several complexes with apparent molecular masses that correspond to PopD dimers (62.6 kDa), trimers (93.9 kDa), and pentamers (156.5 kDa). Other complexes with higher molecular mass were also observed, but it is difficult to estimate apparent molecular masses in this region. In the presence of an excess of PopB, a band that corresponds to a heterodimer (71.4 kDa) was observed. Additionally, oligomers with molecular masses similar to a dimer of heterodimers (142.7 kDa) and a trimer of heterodimers (214.1 kDa) were also observed (Fig. 1D). Although the composition of the complexes observed in the presence of PopB cannot be precisely assessed from this SDS-PAGE analysis, it is clear that addition of PopB redirected the formation of PopD homo-complexes toward the formation of PopD-PopB heterodimers.
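Reading the cross-linking ladder rests on simple mass arithmetic from the monomer masses given above; a minimal sketch (the small differences from the quoted 142.7 and 214.1 kDa values reflect rounding of the monomer masses):

```python
# Monomer masses from the text (kDa).
POPD_KDA, POPB_KDA = 31.3, 40.1

def complex_mass(n_popd, n_popb=0):
    """Predicted mass (kDa) of a complex with the given stoichiometry."""
    return n_popd * POPD_KDA + n_popb * POPB_KDA

print(round(complex_mass(2), 1))     # PopD dimer            -> 62.6
print(round(complex_mass(1, 1), 1))  # PopB-PopD heterodimer -> 71.4
print(round(complex_mass(2, 2), 1))  # dimer of heterodimers -> 142.8
```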
Because the cross-linking reaction of membrane-associated complexes required neutral pH, we tested whether neutralizing the pH released the translocators from the liposomal membranes (Fig. 2A). We also tested conditions commonly used to distinguish peripherally associated proteins from membrane-inserted proteins (15,16). Proteoliposomes containing inserted PopB and PopD were pelleted and resuspended in the indicated buffers (Fig. 2, B and C). Finally, liposome-bound proteins were separated from free proteins by sucrose gradient centrifugation. Proteoliposomes float to the top of the gradient because of their low density, whereas unbound free proteins sediment to the bottom. As expected, binding of PopB and PopD to membranes at pH 7.5 was not favorable, but both PopB and PopD remained bound if the pH was raised to neutral (Fig. 2A). Of the treatments with a chaotropic agent, alkaline pH, or high-salt buffers, 6 M urea dissociated about 54 ± 20 % of PopD from the membrane in the absence of PopB (Fig. 2B). In contrast, PopB and PopD remained mostly membrane-bound when forming hetero-complexes: in the presence of PopB, only 16 ± 7 % of PopD was dissociated by urea (Fig. 2C). These observations indicated that, after stable insertion at acidic pH, PopB and PopD remained stably inserted in membranes at neutral pH.

Figure 2. Data are reported as the mean from two independent experiments and their range; representative blots are shown. B, some dissociation of PopD from homo-oligomers was observed when proteoliposomes were subjected to the indicated treatments. Proteoliposomes containing PopD alone were treated with buffer C, 6 M urea, pH 8.0, 0.1 M sodium carbonate, pH 11.5, or 1 M NaCl in 10 mM Hepes, pH 7.5, for 30 min on ice. Membrane-bound proteins (top fraction: T) were separated from dissociated proteins (middle: M, and bottom: B fractions) using a flotation assay. All fractions (T, M, and B) were collected and precipitated with TCA, and PopD was detected by immunoblotting. The amount of dissociated protein was quantified and shown as the percentage of total protein ((M + B)/(M + B + T)) in each condition. Data are reported as the mean from two independent experiments and the range; representative blots are shown. C, PopB and PopD hetero-complexes were stably inserted in the membranes. Extraction of PopB and PopD from liposomes was performed as described in B. PopB and PopD were detected by immunoblotting using the indicated primary antibodies and peroxidase-conjugated secondary antibodies. The amount of dissociated protein was analyzed as described in B.
The hydrophobic segments of PopD did not adopt a typical transmembrane orientation in PopB/PopD hetero-complexes
Stable association of PopB and PopD with membranes suggests that the translocators may insert one or more hydrophobic segments across the membrane, as observed for integral membrane proteins. Therefore, we targeted the H1 and H2 segments (Fig. 1A) for site-directed fluorescence labeling and used multiple biophysical techniques to determine the interaction and location of the segments at the membrane (17). We labeled Cys residues (introduced one at a time at different locations in these segments) with the environment-sensitive fluorescent probe 7-nitrobenz-2-oxa-1,3-diazol-4-yl (NBD). NBD has been successfully used to probe the location of amino acids in membrane proteins because of its small size, its ability to locate in polar and nonpolar environments, and the distinct fluorescent properties displayed in those environments (18,19). The NBD emission intensity, maximum emission wavelength (λmax), and fluorescence lifetime (τ) are excellent reporters for the microenvironment around the targeted amino acid (20). In an aqueous environment, NBD exhibits a red-shifted λmax and lower τ compared with a hydrophobic environment. For example, the emission spectrum of NBD in water has a λmax at 551 nm, whereas NBD-labeled cholesterol in membranes showed a λmax of 519 nm (Fig. 3). Single NBD-labeled PopD derivatives were mixed with an excess of PopB to ensure the incorporation of labeled PopD into hetero-complexes. Excess PopB will form some PopB homo-oligomers (13), but PopB is not labeled with NBD and therefore will not interfere with the analysis of PopD membrane interaction.
After insertion into membranes, the λmax of the NBD-labeled PopD derivatives in the hetero-complexes ranged from 527 to 532 nm for both segments (Fig. 3, A and B). The λmax of the PopD-NBD derivatives implies that both hydrophobic segments lie in a relatively nonpolar environment. The intensity-weighted average τ of NBD within segment H2 spanned from 4.5 to 6.5 ns, whereas in segment H1 it spanned from 6 to 8 ns, similar to the measured τ of cholesterol-NBD (Fig. 3, C and D). Consistent with the λmax results, the τ of PopD-NBD also suggests a nonpolar location for both segments. Overall, NBD in segment H1 exhibited a slightly more hydrophobic environment than in H2. Furthermore, the flanking residues A53C, H104C, and T154C displayed a more red-shifted λmax and low τ, suggesting a polar surrounding.
The data on λmax and τ describe the polarity of the environment around each labeled residue, but they are not a direct indication of the exposure and location of the residue in the membrane bilayer. Alternatively, NBD probes could be located in a hydrophobic pocket provided by a nonpolar protein cavity or at a protein-protein interface. Fluorescence quenching studies are useful to confirm the location of NBD moieties (17,20). For example, exposure to the aqueous solvent could be assessed using water-soluble iodide ions as quenchers, and membrane exposure using nitroxide moieties covalently attached to the acyl chain of a phospholipid (e.g. 1-palmitoyl-2-stearoyl-(12-doxyl)-sn-glycero-3-phosphocholine (12-doxyl-PC)) (18,21). The quenching ratio calculated from iodide and 12-doxyl-PC quenching was used to report on the relative exposure of each NBD-labeled residue to the membrane. A high quenching ratio (defined as exposure to water/exposure to membrane core) indicates a location close to the surface of the lipid bilayer, and a low quenching ratio indicates proximity to the nonpolar membrane core. PopD A53C-NBD showed the highest quenching ratio, with a value of 4.6 ± 0.2, suggesting that Ala53 is very solvent-exposed, in agreement with the polar environment indicated by the fluorescence values reported in Fig. 3. The quenching ratios of residues in segment H2 ranged from 1.42 ± 0.005 to 2.21 ± 0.02, whereas residues in segment H1 ranged narrowly from 0.66 ± 0.01 to 1.18 ± 0.01 (Fig. 4, A and B). Low values in both segments imply that segments H1 and H2 indeed interact with the membrane, and that H1 is inserted deeper into the membrane than H2. Moreover, neither segment showed the quenching pattern of a typical hydrophobic transmembrane segment, in which residues in the middle positions display lower quenching ratios than residues at the ends of the segment.

Figure 3. The hydrophobic segments of PopD did not adopt a typical transmembrane orientation in hetero-complexes. Multiple residues in segments H1 and H2 were replaced by Cys and labeled with the environment-sensitive probe NBD. Each single NBD-labeled PopD was premixed with a 10-fold molar excess of PopB WT before incubation with liposomes. Reported data correspond to the mean of two independent measurements and error bars indicate the range. A, λmax of NBD attached to the indicated residue in segment H2. B, λmax of NBD attached to the indicated residue in segment H1. C, average τ for NBD-labeled residues in segment H2. D, average τ for NBD-labeled residues in segment H1. The λmax and τ for NBD in water (red lines) and NBD-cholesterol in liposomes (blue lines) serve as references for expected values of λmax and τ for NBD located in a polar or hydrophobic environment, respectively.
The hydrophobic segments of PopD adopt different conformations in the presence of PopB
Given that PopB redirects the formation of PopD homo-oligomers to hetero-oligomers, we compared the quenching ratios for PopD-NBD derivatives in the absence and presence of excess PopB. Initial quenching ratios for PopD-NBD in homo-oligomers were calculated using iodide and 10-doxylnonadecane (10-DN) as quenchers. 10-DN is a quencher located close to the center of the membrane bilayer (22). The quenching ratios for PopD in hetero-oligomers were calculated using 12-doxyl-PC, a quencher that also locates close to the bilayer center, but with a wider quenching radius than 10-DN. Although both quenchers provide essentially the same information on NBD accessibility, it is not feasible to directly compare the absolute values between sets of quenching ratios obtained with different membrane-restricted quenchers. However, in these experiments we focused on the conformational changes observed in the presence or absence of PopB. When PopD was forming homo-oligomers, segment H1 exhibited higher solvent exposure in the middle of the segment than at the two termini, with the quenching ratios increasing from PopD L119C-NBD (4.13 ± 0.02) to PopD V129C-NBD (7.05 ± 0.03) and then dropping to a low level for PopD L137C-NBD (1.88 ± 0.07) (Fig. 4D). In PopB and PopD hetero-complexes, however, these NBD-labeled residues showed a similar extent of accessibility to the quenchers (Fig. 4B), suggesting that segment H1 becomes more parallel to the membrane in the presence of PopB. Segment H2 displayed a more tilted angle in homo-oligomers, with the N terminus showing a higher solvent exposure than the C terminus (Fig. 4, A and C). Notably, when PopB was present, H2 also became more parallel to the membrane.
Figure 4. A and B, quenching ratios of NBD-labeled PopD residues in segments H2 (A) and H1 (B) when PopD was reconstituted into hetero-complexes. The quenching ratio was calculated as k_q/(1 − F_doxyl/F_0), where k_q is the bimolecular quenching constant obtained from iodide quenching (a measure of NBD exposure to the aqueous solvent) and 1 − F_doxyl/F_0 is the amount of quenching observed when 12-doxyl-PC was incorporated into the liposomes (a measure of NBD exposure to the membrane core). C and D, quenching ratios of NBD-labeled PopD residues in segments H2 (C) and H1 (D) in PopD homo-oligomers. In this case, exposure to the membrane core was calculated using 10-DN, and the quenching ratio was determined as k_q/(1 − F_10-DN/F_0). Data shown are the mean of two measurements and error bars correspond to the range.
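As a concrete illustration of the quenching-ratio arithmetic defined above, the sketch below computes k_q/(1 − F_doxyl/F_0) for two hypothetical residues. All numeric values are invented for illustration and are not measurements from the paper.

```python
def quenching_ratio(k_q, f_doxyl, f_0):
    """Quenching ratio = k_q / (1 - F_doxyl/F_0): aqueous exposure (k_q
    from iodide quenching) divided by exposure to the membrane core
    (fractional quenching by the lipid-attached doxyl quencher)."""
    membrane_exposure = 1.0 - f_doxyl / f_0
    return k_q / membrane_exposure

# Hypothetical readings: a surface-exposed residue is quenched strongly by
# iodide (large k_q) but weakly by 12-doxyl-PC (F_doxyl close to F_0);
# a buried residue shows the opposite pattern.
surface = quenching_ratio(k_q=2.0, f_doxyl=95.0, f_0=100.0)  # ~40
buried = quenching_ratio(k_q=0.5, f_doxyl=50.0, f_0=100.0)   # 1.0
assert surface > buried
```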
An intact T3S injectisome is required for PopB and PopD insertion into cell membranes at neutral pH
Next, we studied the insertion of the two translocators into P. aeruginosa-infected cell membranes. It is well known that PopB and PopD can be secreted into the culture media (e.g. by chelating Ca2+ ions with EGTA) (12), and that PopB and PopD are inserted into the plasma membrane when target cells are incubated with P. aeruginosa at physiological pH (12). Based on our experiments with purified recombinant proteins, we hypothesized that translocators secreted into the media via the injectisome would not be able to efficiently bind to cell membranes; only translocators that are released in close contact with the target membrane will insert and assemble into translocon complexes. To test this hypothesis, we examined PopB and PopD insertion into HeLa cell membranes when the translocators were delivered during the incubation of P. aeruginosa strain PAK (PAK) with HeLa cells, or when using proteins isolated from the bacterial culture media. HeLa cells have proven to be a good model system for P. aeruginosa infection (6, 23). To maximize secretion of translocators, we used a PAK strain with ΔexsE (lacking the T3S system regulator ExsE) (24) and ΔexoSTY (lacking all three T3S system effectors) (25) (Fig. S2).
HeLa cells were infected with PAKΔexsEΔexoSTYΔpopD::popD using a multiplicity of infection of 30 in three different conditions: (i) in PBS, (ii) in Dulbecco's modified Eagle's medium (DMEM), or (iii) in DMEM plus 10% fetal bovine serum (FBS). Both PopB and PopD were largely secreted into the infection media only when FBS was present (Fig. 5A), as reported previously for the Yersinia system (26). Similar to the poor binding observed with recombinant proteins (Fig. 2A), secreted PopB and PopD isolated from culture media did not efficiently associate with HeLa cell membranes at neutral pH (Fig. 5B). In contrast, PopB and PopD were efficiently detected in HeLa cell membranes at neutral pH in all infection conditions (Fig. 5A), suggesting that secretion and membrane insertion are coupled during translocon assembly at neutral pH.
PopB-assisted PopD insertion into cell membranes
As shown earlier in vitro, the hydrophobic segments in PopD displayed different extents of membrane exposure in the presence or absence of PopB (Fig. 4), suggesting that PopB may assist PopD insertion during translocon assembly. To analyze the role of PopB in PopD membrane insertion in vivo, we employed a GSK tag phosphorylation assay that reports on the exposure of protein segments to the cytosol of the target cell (27). The GSK tag (MSGRPRTTSFAES) is a 13-amino-acid peptide from the N terminus of human glycogen synthase kinase (GSK), in which Ser9 is constitutively phosphorylated by multiple kinases in the mammalian cell cytosol. Because this tag is not phosphorylated in P. aeruginosa or extracellularly, it constitutes an excellent reporter of the accessibility of protein segments to the host cell cytosol.
PopB and PopD are secreted through the T3S needle, and it is possible that a small fraction of these translocators is injected into the cytosol before the switch from translocator secretion to effector secretion takes place (28, 29). Therefore, we introduced a cell permeabilization procedure to selectively collect membrane-associated proteins (Fig. 6A). HeLa cells incubated with P. aeruginosa were washed to remove unbound bacteria and permeabilized using a Cys-less derivative of perfringolysin O (rPFO). rPFO is a pore-forming toxin that perforates only mammalian cell membranes, owing to its specificity for cholesterol (30, 31). We used antibodies against glyceraldehyde-phosphate dehydrogenase (GAPDH) and Na+/K+-ATPase to detect the cytosolic and plasma membrane fractions, respectively (Fig. 6, B and C). Most of the cytosol was released through the large pores formed by rPFO (~25–30 nm in diameter) and separated from membranes and insoluble components using centrifugation (PFO sup, Fig. 6B). The presence of some GAPDH in supernatants that were not treated with rPFO indicated that the integrity of some cells was compromised during culture manipulation (Fig. 6B). The majority of the soluble cytosolic components were released after rPFO treatment. Therefore, any translocated PopD will be washed out from the permeabilized membrane fraction and will not interfere with our analysis.
rPFO-permeabilized cell membranes were incubated with 0.1% Triton X-100 to specifically solubilize HeLa cell membrane proteins. Bacterial membranes are not affected by this concentration of detergent (32). Triton X-100-insoluble HeLa cell components plus intact attached bacteria were removed by centrifugation, and the resulting supernatant containing Triton-solubilized proteins was collected and analyzed for the presence of GAPDH and Na+/K+-ATPase (Triton sup, Fig. 6A). When HeLa cells were permeabilized with 2 μM rPFO, the "Triton sup" contained less than 5% of the cytosolic content of its "PFO sup." Therefore, these control experiments corroborated that any PopB or PopD detected in the Triton sup fraction (Fig. 6B) was associated with HeLa cell membranes, and not injected into the HeLa cell cytosol.
According to Armentrout and Rietsch (33), the N terminus of PopD is proposed to be located in the target cell cytosol. We chose to insert a single GSK tag after residue Gln40 to avoid potential problems with the PopD secretion signal sequence (34). Proper insertion of PopD is therefore expected to expose the N terminus to the HeLa cell cytosol, where the GSK tag will be phosphorylated. Insertion of the GSK tag did not affect the function of PopD, as determined by complementation of a PAKΔpopD strain with a plasmid encoding PopD-Gln40-GSK (Fig. S3). The effect of PopB on PopD insertion was studied by introducing the plasmid encoding PopD-Gln40-GSK into PAKΔexsEΔexoSTYΔpopD or PAKΔexsEΔexoSTYΔpopBD.
Using the membrane protein isolation procedure described above, we found that PAKΔexsEΔexoSTYΔpopBD strains complemented with plasmids encoding PopB WT, PopD WT, or GSK-tagged PopD were able to insert individual translocators into the target membrane (Fig. 6D), even though strains producing a single translocator were not able to cause perturbation of the actin cytoskeleton, as a result of disrupted effector injection (Fig. S3B) (12). Phosphorylation of the GSK tags was detected using a mAb against phospho-GSK3 (Ser9). The GSK-tagged PopD was not phosphorylated in the absence of PopB. In contrast, phosphorylation of PopD-Gln40-GSK was readily detected when PopB was present. These results clearly showed that PopB assists the insertion of PopD into human cell membranes when forming functional translocons.
Discussion
Combining a series of biophysical and cell-based assays, we obtained five important insights into the assembly of the P. aeruginosa T3S translocon, including the interaction of the hydrophobic segments of PopD with the membrane and the requirement of PopB for proper PopD assembly into membranes. First, analysis of the enhanced PopD binding observed under acidic conditions revealed a novel membrane-interacting segment that may contribute to anchoring the protein in membranes. Second, membrane-assembled PopB and PopD translocators have the properties of integral membrane proteins. Third, the two hydrophobic segments in PopD are buried in the membrane but lie parallel to the membrane surface. Fourth, interaction with PopB modifies the conformation adopted by the PopD hydrophobic segments in the membrane. Fifth, PopB promoted the insertion of PopD into P. aeruginosa-infected HeLa cell membranes.
Acidic pH and anionic lipids have been shown to drive the insertion of T3S translocators and other pore-forming toxins, like colicin and diphtheria toxin, into lipid bilayers (35–39). Acidic pH has been suggested to induce a molten globule intermediate state of proteins, facilitating their interaction with membranes in vitro. Furthermore, an increase in the net positive charge of proteins at low pH would decrease protein-protein interactions (i.e. aggregation) during folding in the aqueous solvent, while favoring the interaction with membranes containing negatively charged lipids. Therefore, it is not surprising that purified PopB and PopD bind better to model membranes containing anionic lipids at acidic pH. Moreover, it has been shown that increasing concentrations of NaCl inhibit the interaction of PopB and PopD with membranes (36).
Figure 6. A, the pellet containing permeabilized HeLa cells and attached PAK was treated with 0.1% Triton X-100 to selectively solubilize HeLa cell membranes. Insoluble cell debris and bacteria were removed using centrifugation. The supernatant (Triton sup) containing solubilized membrane proteins was precipitated and subjected to immunoblotting as described under "Experimental procedures." B and C, validation of the membrane protein isolation method. PAKΔexsEΔexoSTYΔpopD::popD-infected HeLa cells were incubated with PBS, 1 μM rPFO, or 2 μM rPFO. PFO sup and Triton sup samples containing 23 μg of total protein were analyzed using immunoblotting for the presence of HeLa cell cytosol or plasma membrane markers using anti-GAPDH (B) and anti-Na+/K+-ATPase (C) antibodies. D, detection of PopD insertion in HeLa cell membranes using a GSK tag phosphorylation assay. PAKΔexsEΔexoSTYΔpopBD or PAKΔexsEΔexoSTYΔpopD strains complemented with the pUCP18 plasmid carrying the popB, popD, or GSK-tagged popD genes were incubated with HeLa cells in FBS-free DMEM for 1 h. Proteins associated with HeLa cell membranes were isolated using the membrane protein isolation procedure described in A, and the presence of PopB, PopD, and phospho-GSK PopD was detected by immunoblotting. Representative blots from two independent experiments are shown. The ability of the PAK strains used to translocate effectors into HeLa cells is indicated with a "ߜ" symbol.
The pH-dependent insertion of PopD into the membrane suggests that protonation of acidic residues might play a role in this process, as shown previously for other pore-forming toxins (40, 41). Protonation of acidic residues in PopD revealed a significant increase of hydropathy in one segment (Met63–Phe81; Fig. 1B). The pH-dependent hydrophobicity of this segment is conserved among PopD homologues (Fig. S1), and this unique property is not found in other pore-forming proteins such as colicins, diphtheria toxin, or Bcl-like proteins (Fig. S4). Modification of acidic residues in the segment to uncharged amino acids promoted PopD binding to membranes at higher pH values (Fig. 1C), suggesting that protonation of these residues is involved in the pH-dependent insertion of PopD into model membranes.
Protonation of His residues in diphtheria toxin has been suggested to induce a major conformational change in the protein that exposes hydrophobic segments to the membrane (42, 43), whereas protonating the acidic residues in its transmembrane hairpin modifies the membrane insertion (41). Therefore, it is tempting to think that once the pH-sensitive segment in PopD is protonated, it becomes hydrophobic enough to form a transmembrane hairpin with segment H1 (Leu119–Val137), penetrating the membrane and anchoring PopD in the membrane. This hypothesis prompted us to examine further the orientation of these segments in the membrane-inserted PopB-PopD complexes.
Characterization of the structural arrangement of the T3S translocators in model membranes has been difficult given their tendency to form mixtures of homo- and hetero-oligomers (13, 44). However, we found experimental conditions that maximize PopD incorporation into hetero-complexes by mixing PopD with an excess of PopB (13). This method allowed us to study PopD uniformly associated with PopB in hetero-complexes and to compare its conformation with the one adopted while forming homo-complexes. PopD was stably associated with membranes, suggesting the presence of one or more transmembrane segments (Fig. 2). Therefore, we examined the conformation of the two hydrophobic segments H1 and H2 when PopD was associated with membranes. Fluorescence measurements on single NBD-labeled PopD derivatives showed that both hydrophobic segments were located in a nonpolar environment (Fig. 3). However, in contrast with the transmembrane orientation assumed in current models for T3S translocons, dual quenching studies showed the two segments adopting in-plane conformations when PopD was assembled into hetero-complexes (Fig. 4, A and B).
Given that other components of the T3S apparatus (e.g. the needle tip) can affect translocon assembly, we decided to advance our study in the context of a cellular environment. After being secreted through the needle, the translocators could be (i) released into the media, (ii) inserted directly into the membrane while being secreted, or (iii) translocated into the cytosol if one active translocon is already in place and the switch to effector secretion has not yet occurred. To specifically study T3S-dependent insertion of PopB and PopD into cell membranes, we evaluated these three possibilities. FBS-induced secretion of T3S translocators into the extracellular medium has been documented in Y. enterocolitica (26), and we found that FBS could induce nonspecific secretion of translocators in PAK cultures. Medium containing PopB and PopD secreted in the presence of FBS was separated from bacteria and incubated with HeLa cells to evaluate post-secretion binding of the translocators. We found that secreted translocators bind to the HeLa cell plasma membrane at very low levels. This observation correlates well with the weak binding observed at neutral pH when using purified recombinant PopB and PopD and model membranes (Fig. 2A). In addition, these results corroborate that no receptor present in the target cell membrane (and absent in model membranes) appears to be sufficient to trigger binding of the translocators. They also emphasize that effective insertion of translocators into cell membranes at physiological pH requires a full T3S system. Eliminating FBS during infection of HeLa cells with PAK strains minimized secretion of the translocators in the absence of target cells (Fig. 5A) and therefore eliminated any interference from nonspecific binding of translocators secreted into the media.
A source of uncertainty is added when analyzing translocator topology in human cell membranes if translocators are partially delivered into the target cell cytosol. Transient injection of PopD is possible during the time it takes to switch from translocator secretion to effector secretion. Injection of translocators into the target cell cytosol has been noticed in certain T3S systems previously (28,29). In addition, intracellular functions have been proposed for translocated IpaB, a translocator from the Shigella T3S system (29,45,46). Any translocated PopD will interfere with the characterization of membrane-inserted PopD, especially when the GSK tag was used to report on exposure to host cell cytosol. To avoid this problem, we developed a membrane-enrichment procedure that included the permeabilization of the plasma membrane and elimination of water-soluble cytosolic components (e.g. injected translocators). Permeabilization with rPFO allowed the release of soluble cytosol content. Subsequently, proteins in HeLa cell membranes were selectively solubilized with a low concentration of Triton X-100, conditions that do not lyse P. aeruginosa (Fig. 6A). In the end, only translocators associated with the target cell membrane are enriched and analyzed.
Using this approach, we demonstrated that a strain lacking both translocators (PAKΔexsEΔexoSTYΔpopBD) complemented with a plasmid bearing the popD-GSK gene was able to insert PopD into HeLa cell membranes, but no phosphorylation took place on the GSK segment. However, when a strain lacking only PopD (PAKΔexsEΔexoSTYΔpopD) was complemented with the same plasmid, both PopB and PopD inserted into the membrane and the GSK segment was phosphorylated (Fig. 6D). These results clearly indicate that PopB is required to properly insert PopD into the target cell membrane to form functional translocons.
The mechanism of translocator assembly into cell membranes during bacterial infection has been explored previously (12,33,47). However, how the membrane-associated translocators transition to a functional translocon has remained elusive. This work provides important insights into the mechanism of translocon assembly. First, PopB and PopD formed a heterodimer on lipid membranes, suggesting that an early PopB and PopD interaction is essential for guiding the assembly of hetero-complexes. Second, the interaction of PopB with PopD is required to properly insert PopD into the target cell membrane and assemble functional translocons.
Plasmid and strain construction
Genomic deletions of the genes encoding the T3S regulator ExsE, translocator PopD, or translocator PopB were introduced according to the two-step allelic exchange procedure described by Hmelo et al. (48), using the PAKΔexoSTY strain (courtesy of Dr. Stephen Lory (25)) to generate the PAKΔexsEΔexoSTYΔpopD and PAKΔexsEΔexoSTYΔpopBD strains. pUCP18 plasmids containing the DNA fragment coding for PopD WT or PopB WT were generated using Gibson assembly as described previously (13). The expression of the translocators is regulated by the endogenous promoter region that was included in the DNA fragment (13). The plasmid containing the gene encoding PopD-Gln40-GSK was generated using the primers: forward, CGCCCTCGCACTACTAGTTTCGCTGAAAGTGTGCCGGCCGCGCGGGCCGATC; reverse, GAAACTAGTAGTGCGAGGGCGACCACTCATCTGCGGCAGGTCCGCAGCCG. Modifications introduced into plasmids used in this work were verified by DNA sequencing. Plasmids carrying the gene encoding the desired protein were introduced into the PAKΔexsEΔexoSTYΔpopD or PAKΔexsEΔexoSTYΔpopBD strains using electroporation, and positive clones were identified by their resistance to carbenicillin. The generated strains were evaluated for their ability to restore translocation of effectors into HeLa cells, evidenced by typical cell rounding as described (13).
Protein expression, purification, and fluorescent labeling
PopB and PopD were purified as a complex with the His-tagged chaperone PcrH (hisPcrH) as previously described (37). Single-Cys PopD derivatives were generated by site-directed mutagenesis as described previously (37). PopD derivatives were labeled with N,N′-dimethyl-N-(iodoacetyl)-N′-(NBD)ethylenediamine (IANBD amide, Invitrogen) while bound to hisPcrH in 50 mM Hepes, pH 8.0, supplemented with 100 mM NaCl at 20–23°C. After 2 h, the labeling reaction was stopped by removing the unreacted dye using a Sephadex G-25 size-exclusion column. The labeled hisPcrH-PopD complex was bound to an immobilized metal ion affinity chromatography column, and PopD was dissociated from hisPcrH using buffer A (20 mM Tris-HCl, pH 8.0, supplemented with 6 M urea and 20 mM glycine) as described previously (37). Purified PopD was kept in buffer A until use. The labeling efficiency was calculated using the molar absorptivities at 280 nm for PopD (13,980 M−1 cm−1) and NBD (25,000 M−1 cm−1) in buffer A. The labeling efficiency was more than 70% for all PopD derivatives. All NBD-labeled PopD derivatives showed a pore-forming activity similar to that of PopD WT (Fig. S5).
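The labeling-efficiency calculation mentioned above can be sketched as follows. Only the two molar absorptivities are taken from the text; the correction scheme (measuring NBD at its own absorbance band and subtracting its contribution from the protein reading) and the example absorbance values are common-practice assumptions for illustration, not the paper's exact protocol.

```python
# Hedged sketch of a dye:protein labeling-ratio calculation.
EPS_POPD_280 = 13980.0     # M^-1 cm^-1, PopD (value given in the text)
EPS_NBD = 25000.0          # M^-1 cm^-1, NBD (value given in the text)
NBD_A280_CORRECTION = 0.0  # assumed fractional NBD absorbance at 280 nm

def labeling_efficiency(a280, a_nbd, path_cm=1.0):
    """Return mol NBD per mol PopD from two absorbance readings
    (a280 for protein, a_nbd for the dye band); Beer-Lambert law."""
    conc_nbd = a_nbd / (EPS_NBD * path_cm)
    conc_protein = (a280 - NBD_A280_CORRECTION * a_nbd) / (EPS_POPD_280 * path_cm)
    return conc_nbd / conc_protein

# Illustrative readings chosen so dye and protein are equimolar (ratio 1.0)
eff = labeling_efficiency(a280=0.13980, a_nbd=0.25)
assert abs(eff - 1.0) < 1e-6
```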
Immunoblotting
Proteins were separated on a 12.5% SDS-PAGE gel, transferred to a polyvinylidene difluoride membrane (GE Healthcare), and blocked with 5% milk for 1 h at 20–23°C. Primary antibodies were diluted as indicated using 3% (w/v) BSA in a solution of 25 mM Tris-HCl, pH 7.5, supplemented with 150 mM NaCl and 0.1% Tween 20, and incubated with the membrane at 4°C overnight. After washing the membrane three times with the same solution (10 min each), an anti-rabbit IgG horseradish peroxidase-conjugated secondary antibody (Sigma) and a chemiluminescent detection reagent (Amersham Biosciences ECL Prime, GE Healthcare) were used to detect the targeted primary antibodies as instructed by the manufacturer. Images were analyzed and quantified using ImageJ software. PopD and PopB polyclonal antibodies were raised in rabbits immunized with recombinant full-length proteins extracted from preparative SDS-PAGE gels.
Protein extraction from liposomal membranes
PopD homo-oligomers or PopB:PopD hetero-oligomers were prepared by adding PopD (to 0.1 μM final concentration) or a mixture of PopB:PopD (0.1:0.7 μM final concentrations, respectively) solubilized in buffer A to liposomes (4 mM total lipids final concentration) suspended in 500 μl of buffer B and incubating the sample for 20 min. Proteoliposomes were then spun down as described above. After centrifugation, the pellets were resuspended in 75 μl of buffer C (control), 6 M urea (buffer A), 0.1 M sodium carbonate, pH 11.5, or 1 M NaCl in 10 mM Hepes, pH 7.5, and the samples were incubated on ice for 30 min with occasional mixing. Any unbound protein was separated from liposomes using a floatation assay as previously reported (37). Briefly, samples were mixed with 67% sucrose and overlaid with 40% and 4% sucrose. After ultracentrifugation at an average of 288,000 × g for 50 min at 4°C, 300 μl of the top, middle, and bottom fractions were collected and precipitated with 10% TCA. The pellets were resuspended in 80 μl of SDS-PAGE sample buffer and examined by immunoblotting using anti-PopD (diluted 1:4,000) and anti-PopB (diluted 1:200,000) polyclonal antibodies.
Fluorescence measurements
Steady-state fluorescence measurements were made with a Fluorolog-3 photon-counting spectrofluorimeter as reported earlier (37). For NBD emission maximum (λmax) measurements, the excitation wavelength was set to 475 nm, and emission was scanned from 510 to 560 nm every 1 nm using a 1-s integration time. The bandpass was 2 nm for excitation and 4 nm for emission. Polarizers were in place in the excitation (vertical) and emission (horizontal) paths to reduce scattered light and to account for polarization effects in the emission monochromator (49). The spectra were corrected by subtraction of the background emission of a sample containing an identical amount of liposomes.
For collisional quenching by iodide, a set of samples was prepared for each NBD-labeled PopD derivative. In each set, the samples contained an increasing concentration of KI (0–90 mM), obtained by addition of a solution of 1 M KI and 1 mM Na2S2O3. The ionic strength in each sample was maintained constant by addition of a solution of 1 M KCl and 1 mM Na2S2O3. The initial fluorescence intensity F_0 was defined as the intensity of NBD-labeled PopD in 90 mM KCl. Different KI:KCl mixtures were incubated with proteoliposomes for 10 min at 20–23°C, and NBD intensities were measured as F_iodide. A linear relationship between fluorescence intensities and the concentration of iodide for each NBD-labeled PopD was obtained when the data were analyzed using the Stern-Volmer equation (F_0/F_iodide) − 1 = K_SV[I−], where K_SV is the Stern-Volmer quenching constant. k_q was further calculated from K_SV and the lifetime τ of NBD-labeled PopD in the absence of quenchers using the equation k_q = K_SV/τ. Collisional quenching by membrane-restricted quenchers (12-doxyl-PC or 10-DN) was carried out using two sets of liposomes, one with quencher and one without. F_0 represents the fluorescence signal of NBD-labeled PopD incorporated in liposomes without quenchers; F_doxyl or F_10-DN indicates the fluorescence intensity of NBD-labeled PopD incorporated in liposomes with quenchers. For the acquisition of NBD emission, the excitation and emission wavelengths were 475 and 530 nm, respectively. The bandpass was 5 nm for both excitation and emission. Polarizers were used as described above. The temperature of the samples was equilibrated to 12–15°C before measurements. The quenching ratio was determined using the equations k_q/(1 − F_doxyl/F_0) or k_q/(1 − F_10-DN/F_0).
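The Stern-Volmer analysis described above can be sketched numerically as follows: fit (F_0/F_iodide) − 1 against [I−] to obtain K_SV, then divide by the lifetime to obtain k_q. The data points, K_SV value, and lifetime below are synthetic illustrations, not measured values from the paper.

```python
# Hedged sketch of a Stern-Volmer fit (slope forced through the origin).
def stern_volmer_fit(iodide_mM, f_iodide, f0):
    """Least-squares slope through the origin of (F0/F - 1) vs [I-]."""
    xs = iodide_mM
    ys = [f0 / f - 1.0 for f in f_iodide]
    # slope for a zero-intercept line: sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic, noise-free data generated with K_SV = 0.02 mM^-1
conc = [10.0, 30.0, 50.0, 70.0, 90.0]
f0 = 100.0
f = [f0 / (1.0 + 0.02 * c) for c in conc]

k_sv = stern_volmer_fit(conc, f, f0)  # recovers 0.02 mM^-1
tau_ns = 5.0                          # assumed NBD lifetime, ns
k_q = k_sv / tau_ns                   # bimolecular quenching constant
assert abs(k_sv - 0.02) < 1e-9
```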
Time-resolved fluorescence measurements were taken using a Chronos spectrofluorometer (ISS, Champaign, IL) with the same setup as reported previously (37). The lifetime τ of NBD was measured in the frequency domain (20 frequencies). A solution of fluorescein (Invitrogen) in 0.1 M NaOH was used as a reference, with a τ value of 4.05 ns (50). The emission intensity of the reference sample was matched to that of the measured sample (±10%). An equivalent sample without labeled proteins was used for blank subtraction (51). All data were analyzed with Vinci software and fitted to two discrete exponential decays, and the calculated average τ was intensity weighted.
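A minimal sketch of the intensity-weighted average lifetime for a two-exponential fit, as used above: with pre-exponential amplitudes a_i and lifetimes tau_i, the intensity-weighted average is sum(a_i*tau_i^2)/sum(a_i*tau_i). The amplitude/lifetime pairs below are invented for illustration.

```python
def intensity_weighted_tau(amplitudes, taus_ns):
    """Intensity-weighted average lifetime for a multi-exponential fit:
    <tau> = sum(a_i * tau_i^2) / sum(a_i * tau_i)."""
    num = sum(a * t * t for a, t in zip(amplitudes, taus_ns))
    den = sum(a * t for a, t in zip(amplitudes, taus_ns))
    return num / den

# Illustrative two-component fit (made-up amplitudes and lifetimes):
# the long-lived component dominates the intensity-weighted average.
tau_avg = intensity_weighted_tau([0.5, 0.5], [2.0, 8.0])  # 34/5 = 6.8 ns
assert abs(tau_avg - 6.8) < 1e-9
```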
Analysis of translocators associated with HeLa cells
HeLa cells were maintained in DMEM (Hyclone) supplemented with 10% FBS in a 5% CO2 atmosphere at 37°C. Prior to infection, 2.5–2.8 × 10^6 cells were washed twice with prewarmed Dulbecco's PBS (DPBS) (Hyclone) and replenished with 4 ml of DPBS, DMEM, or DMEM supplemented with 10% FBS. PAKΔexsEΔexoSTYΔpopD::popD grown overnight in Miller lysogeny broth at 37°C was diluted to an A600 of 0.15 in fresh broth the next day and allowed to grow until an A600 of 1. Aliquots containing 0.3–0.5 ml of bacterial culture were added to a monolayer of HeLa cells at a multiplicity of infection of 30 and incubated for 1 h at 37°C in a 5% CO2 atmosphere (Fig. S7). At the end of the incubation, the entire medium containing free bacteria and secreted proteins was removed by aspiration. An aliquot of 1 ml of the medium was centrifuged (twice at 18,000 × g for 10 min at 4°C), and the supernatant containing secreted proteins was precipitated with TCA, resuspended in SDS-PAGE sample buffer, and analyzed by immunoblotting for the presence of PopB and/or PopD as indicated below. The flasks containing infected cells were washed gently three times with DPBS; cells were scraped into 1 ml of ice-cold DPBS supplemented with protease inhibitor mixture (PIC) (Roche) and 5 mM NaF and pelleted at 4°C (2,000 × g, 10 min). Cells were resuspended in 210 μl of lysis buffer (DPBS supplemented with 0.1% Triton X-100, PIC, and 5 mM NaF) and incubated for 30 min at 4°C with constant mixing. Solubilized plasma membrane and cytosolic proteins were separated from insoluble cell debris and intact bacteria by centrifugation at 4°C (18,000 × g, 15 min). The supernatant containing Triton-solubilized proteins was precipitated with methanol:chloroform (sample:methanol:chloroform:H2O ratio of 1:4:1:3), and the precipitate was resuspended in 60 μl of buffer C containing 2% SDS. Protein concentrations were measured using a bicinchoninic acid assay (Thermo Scientific).
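The multiplicity-of-infection arithmetic above (MOI of 30, 2.5–2.8 × 10^6 cells, 0.3–0.5 ml of culture at A600 of 1) can be sketched as follows. The A600-to-CFU conversion factor is an assumption chosen so the result falls in the reported volume range; it is not a value given in the text.

```python
# Hedged sketch of MOI-based inoculum calculation.
CFU_PER_ML_PER_A600 = 2.0e8  # assumed conversion factor, CFU/ml per A600 unit

def culture_volume_ml(n_cells, moi, a600):
    """Volume of bacterial culture needed to reach the requested MOI."""
    bacteria_needed = n_cells * moi        # total CFU required
    cfu_per_ml = a600 * CFU_PER_ML_PER_A600
    return bacteria_needed / cfu_per_ml

# 2.5e6 HeLa cells at MOI 30, culture at A600 = 1 -> 0.375 ml,
# within the 0.3-0.5 ml aliquot range reported in the text.
vol = culture_volume_ml(n_cells=2.5e6, moi=30, a600=1.0)
assert 0.3 <= vol <= 0.5
```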
An aliquot containing 20 μg of total protein was analyzed by immunoblotting for the presence of PopB and/or PopD using anti-PopB (diluted 1:20,000) and anti-PopD (diluted 1:1,000) antisera.
To obtain secreted PopB and PopD, PAKΔexsEΔexoSTYΔpopD::popD was incubated with HeLa cells in DMEM plus FBS for 1 h at 37°C. After infection, the entire medium was removed, and the supernatant containing secreted proteins was clarified by centrifugation (twice at 18,000 × g for 10 min). The presence of PopB and PopD in the supernatant was confirmed by immunoblot. To determine whether secreted PopB and PopD bind to HeLa cell membranes, the supernatant containing secreted proteins was transferred to a flask of HeLa cells that had not been exposed to bacteria and incubated for 1 h. The binding of secreted PopB and PopD to HeLa cells was analyzed as described above.
HeLa cell membrane protein isolation
PAKΔexsEΔexoSTYΔpopD::popD was incubated with HeLa cells in DMEM free of FBS (to minimize nonspecific secretion of translocators) for 1 h. Infected cells were collected in 1 ml of ice-cold DPBS and spun down at 2,000 × g for 10 min at 4°C. Cells were washed once with 1 ml of DPBS to remove free bacteria. Cells were resuspended in 230 μl of DPBS (control) or the same volume of a solution containing 1–2 μM rPFO in buffer C (supplemented with PIC and 5 mM NaF) and incubated for 30 min at 20–23°C. The released cytosolic components were separated from permeabilized cells and attached bacteria by centrifugation at 2,000 × g for 10 min at 4°C. The released cytosol (PFO sup) was further clarified by centrifugation at 18,000 × g for 20 min and precipitated with methanol:chloroform as described above. Precipitated proteins were resuspended in 60 μl of buffer C containing 2% SDS. The protein concentration in the PFO sup fraction was measured using a bicinchoninic acid assay. The permeabilized cells were washed with 500 μl of 10 mM Tris, pH 7.4, supplemented with PIC and 5 mM NaF on ice for 5 min and lysed in 210 μl of lysis buffer for 30 min at 4°C. Solubilized membrane proteins (Triton sup) were separated from residual bacteria and insoluble debris by centrifugation at 18,000 × g for 15 min at 4°C. Aliquots containing 23 μg of total protein from the PFO sup or Triton sup fractions were analyzed by immunoblotting using an anti-GAPDH antibody (diluted 1:2,000) (Cell Signaling Technology) or an anti-Na+/K+-ATPase antibody (diluted 1:1,000) (Cell Signaling Technology).
To detect GSK tag phosphorylation, HeLa cells infected with the indicated P. aeruginosa strains were treated with 2 µM rPFO and analyzed as described above. The resulting Triton sup fraction was analyzed by immunoblotting using anti-PopB, anti-PopD, or anti-phospho-GSK antibodies (1:1,000 dilution as indicated by the manufacturer, Cell Signaling Technology).
We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with an arbitrary power $\beta$ of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves).
Introduction
Exact solutions of various matrix models have advanced considerably in recent years. The progress in finding asymptotic expansions is mostly due to a geometrization of the picture. Attained first for the mere large-N limit of the Hermitian one-matrix model (1MM) [27], [14], [11], [9], it was almost simultaneously extended to solutions of the two-matrix Hermitian matrix model (2MM) [17], [18], [25], [4], [5]. Combined with the technique of the loop equation [3], which is the generating function for the Virasoro conditions in matrix models [12], it made it possible to generalize the moment technique of [2] to finding the subleading-order correction to the 1MM free energy, first in the two-cut case [1] and then in the multicut case [26], [13], [6]. Almost simultaneously, the same correction was found in the 2MM case [19] and in the case of normal matrices (see [29], [28]).
The next step in constructing asymptotic expansions pertains to introducing a diagrammatic technique describing terms of the expansion in a way very similar to (quantum) field theories: the expansion order is related to the number of loops in the diagrams, and the n-point correlation functions, as well as the free energy itself, are represented by a finite sum of diagrams in each given order of the expansion. This technique, first elaborated in [16] for correlation functions in the 1MM case, was then developed for pure and mixed correlation functions in the 2MM case [21], [22] and then turned into a complete solution for the free energy, first in the 1MM case [7] and, eventually, in the 2MM case [8].
We expect that generalizations of the diagrammatic technique are a very powerful tool for investigating various matrix-model-like problems. We address the eigenvalue model, which describes a gas of N particles dwelling in a potential field and having anyonic statistics expressed by a power of the Vandermonde determinant in the coordinates of the particles. This is the Dyson (or Laughlin) gas system, and in this paper we investigate the one-dimensional case of this system. (The first investigation of the two-dimensional case was done recently in [30], where the two first subleading-order corrections were derived, but without involving a diagrammatic technique.) We elaborate the diagrammatic technique for a consistent calculation of corrections of all orders and find separately the only correction term that cannot be produced from this technique.
The paper is organized as follows. In Sec. 2, we formulate the problem and present the loop equation together with the necessary definitions. In Sec. 3, we introduce the elements of our diagrammatic technique for calculating resolvents (loop means), and we summarize this technique in Sec. 4. In Sec. 5, we invert the action of the loop insertion operator and obtain the free energy of the model, presenting the first few corrections. This enables us to obtain all correction terms except two: the first one is just a subleading term in the 1MM case (see [6]), while the second term has not been known previously. We calculate it separately in Sec. 6.
Eigenvalue models in the 1/N expansion
Our aim is to show how the technique of Feynman graph expansion elaborated in the case of the Hermitian one-matrix model [16], [7] can be applied to solving the (formal) eigenvalue model with action (1), where $V(x)=\sum_{n\ge0}t_nx^n$ and $\hbar$ is a formal expansion parameter. The integration in (1) goes over N variables $x_i$ having the sense of eigenvalues of Hermitian matrices for β = 1, orthogonal matrices for β = 1/2, and symplectic matrices for β = 2. In what follows, we let β be an arbitrary positive number. The integration may go over curves in the complex plane of each of the N variables $x_i$. For β ≠ 1, no topological expansion in even powers of $\hbar$ exists, and we rather have an expansion in all integer powers of $\hbar$. Customarily, $t_0$, the scaled number of eigenvalues, is proportional to $\hbar N$. We also assume the potential V(p) to be a polynomial of fixed degree m + 1; in a more general setting, it suffices to demand that the derivative V′(p) be a rational function [20].
These resolvents are obtained from the free energy F through the action of loop insertion operator (7). Therefore, if one knows exactly the one-point resolvent for an arbitrary potential, all multi-point resolvents can be calculated by induction. In the above normalization, the $\hbar$-expansion has the form
$$W(p_1,\dots,p_s)=\sum_{r=0}^{\infty}\hbar^{r}\,W_{r/2}(p_1,\dots,p_s),\qquad s\ge1,$$
where it is customarily assumed that this corresponds, in a vague sense, to the genus expansion of the usual Hermitian models, with possible half-integer contributions. It is often written as a sum over all integer and half-integer $g\equiv r/2$.
The first in the chain of loop equations of eigenvalue model (1) is Eq. (9). Here and hereafter, $C_D$ is a contour encircling clockwise all singular points (cuts) of W(ω), but not the point ω = p; this contour integration acts as the projection operator extracting the negative part of V′(p)W(p). Using Eq. (6), one can express the second term in the r.h.s. of loop equation (9) through W(p), and Eq. (9) becomes an equation on one-point resolvent (4).
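The projection onto the part "negative at infinity" can be pictured directly on Laurent coefficients; a minimal Python sketch (the helper `split_laurent` and the sample function are illustrative, not the paper's notation):

```python
# Minimal sketch of the [.]_+ / [.]_- projections implicit in the loop
# equation: for a Laurent polynomial Q(p) = sum_k c_k p^k, the projection
# [Q]_+ keeps powers p^k with k >= 0 (polynomial part), while [Q]_- keeps
# k < 0 (the part decaying at infinity). The sample Q is illustrative.

def split_laurent(coeffs):
    """coeffs: dict {power: coefficient} of a Laurent polynomial.
    Returns (plus, minus): polynomial part and decaying part."""
    plus = {k: c for k, c in coeffs.items() if k >= 0}
    minus = {k: c for k, c in coeffs.items() if k < 0}
    return plus, minus

# Q(p) = p^2 + 3 + 1/p - 2/p^3
Q = {2: 1.0, 0: 3.0, -1: 1.0, -3: -2.0}
Q_plus, Q_minus = split_laurent(Q)
assert Q_plus == {2: 1.0, 0: 3.0}
assert Q_minus == {-1: 1.0, -3: -2.0}
```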
The β-dependence enters (9) only through the combination $\gamma\equiv\sqrt{\beta}-1/\sqrt{\beta}$ and, assuming β ∼ O(1), we have the free-energy expansion of form (11). Substituting expansion (8) in Eq. (9), we find that the $W_g(p)$ for g ≥ 1/2 satisfy Eq. (12), where $\hat K$ is the linear integral operator
$$\hat K f(p)\equiv\oint_{C_D}\frac{d\omega}{2\pi i}\,\frac{V'(\omega)}{p-\omega}\,f(\omega).$$
In Eq. (12), $W_g(p)$ is expressed through only those $W_{g_i}(p)$ for which $g_i<g$. This fact permits developing an iterative procedure.
In analogy with (11), it is convenient to expand the multiresolvents $W_g(\cdot)$ in γ; then, obviously, (12) becomes an equation for the coefficients $W_{k,l}$. The form of loop equation (9) is based exclusively on the reparameterization invariance of the matrix integral, which holds independently of the details of the eigenvalue density distribution. We assume that, as N → ∞, the eigenvalues fill some segments in the complex plane, depending on the shape of the potential V(X). For polynomial potentials, the number of segments is finite, and the contour $C_D$ of integration in (12) encircles a finite number n of disjoint intervals. Recall that all $W_{k,l}(p)$ are total derivatives (of the corresponding free-energy terms under the loop insertion operator). The solution in the large-N limit coincides with the solution of the Hermitian one-matrix model and satisfies Eq. (19) (in fact, the above normalization was chosen precisely to ensure the coincidence of this equation with the one in the 1MM case). Strictly speaking, $W_0(p)$ must be $W_{0,0}(p)$ in our notation; we however preserve the old notation, assuming this identification in what follows (for brevity of the presentation).
Recall the solution of Eq. (19). Deforming the contour in Eq. (19) to infinity, we obtain Eq. (20), where the last term in the r.h.s. is a polynomial $P_{m-1}$ of degree m − 1, and the solution to (20) is
$$W_0(p)=\frac{V'(p)}{2}-y(p),$$
where the minus sign is chosen in order to fulfill asymptotic condition (17), and the function y(p) is defined as follows. For a polynomial potential V of degree m + 1, the resolvent $W_0(p)$ is a function on the complex plane with n ≤ m cuts, or on a hyperelliptic curve $y^2=\bigl(V'(p)^2+4P_{m-1}(p)\bigr)/4$ of genus g = n − 1. For a generic potential V(X) with m → ∞, this curve may have infinite degree, but we can still consider solutions of finite genus, where a fixed number n of cuts are filled by eigenvalues. For this, we separate the smooth part of the curve, introducing
$$y(p)=M(p)\,\tilde y(p),\qquad \tilde y^2=\prod_{\alpha=1}^{2n}(p-\mu_\alpha),$$
with all branching points $\mu_\alpha$ distinct. The variable $\tilde y$ therefore defines the new, reduced Riemann surface, which plays a fundamental role in our construction. In what follows, we still assume M(p) to be a polynomial of degree m − n, keeping in mind that n is always finite and fixed, while m ≥ n can be chosen arbitrarily large.
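The one-cut structure can be made concrete in the simplest case. The following Python sketch (not from the paper; it assumes the Gaussian potential V(p) = p²/2 and the illustrative value t₀ = 1) checks that $W_0(p)=\frac12\bigl(p-\sqrt{p^2-4t_0}\bigr)$ solves the planar quadratic loop equation, decays as t₀/p, and that the corresponding semicircle density carries total eigenvalue fraction t₀:

```python
import math

# One-cut Gaussian sketch (illustrative normalization): V(p) = p^2/2,
# V'(p) = p, and the planar resolvent W_0(p) = (p - sqrt(p^2 - 4 t0))/2
# solves the quadratic loop equation W_0^2 - V'(p) W_0 + t0 = 0
# and behaves as t0/p at large p (asymptotic condition).

t0 = 1.0

def W0(p):
    return 0.5 * (p - math.sqrt(p * p - 4.0 * t0))

# Quadratic loop equation off the cut: W_0(p)^2 - p W_0(p) + t0 = 0
for p in (2.5, 3.0, 7.0, 50.0):
    w = W0(p)
    assert abs(w * w - p * w + t0) < 1e-9

# Asymptotics: W_0(p) ~ t0/p as p -> infinity
p = 1.0e6
assert abs(W0(p) - t0 / p) < 1e-6

# The eigenvalue density rho(x) = sqrt(4 t0 - x^2)/(2 pi) on the cut
# [-2 sqrt(t0), 2 sqrt(t0)] integrates to t0 (total filling).
n = 200000
a = 2.0 * math.sqrt(t0)
h = 2.0 * a / n
total = sum(math.sqrt(max(4.0 * t0 - (-a + (i + 0.5) * h) ** 2, 0.0))
            for i in range(n)) * h / (2.0 * math.pi)
assert abs(total - t0) < 1e-4
```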
From now on, we distinguish between the images of infinity on the two sheets, physical and unphysical, of hyperelliptic Riemann surface (22), denoting them $\infty_+$ and $\infty_-$, respectively. We often distinguish between variables on the physical and unphysical sheets by placing a bar over the latter. By convention, we set $\tilde y|_{p\to\infty_+}\sim p^n$, and M(p) is then determined accordingly. Inserting this solution in Eq. (21) and deforming the contour back, we obtain the planar one-point resolvent with an n-cut structure. Let us now discuss the parameter counting. We introduce the filling fractions (25),
$$S_i=\oint_{A_i}W_0(p)\,\frac{dp}{2\pi i},\qquad i=1,\dots,n-1,$$
where $A_i$, i = 1, …, n − 1, is the basis of A-cycles on hyperelliptic Riemann surface (22) (we may conveniently choose them to encircle the first n − 1 cuts). Adding the (normalized) total number of eigenvalues to the set of $S_i$, we obtain n parameters, to which we add, following (17), the asymptotic conditions on the resolvents. In this article we consider filling fractions (25) as independent parameters of the theory. We need this assumption to interpret random matrix integrals as generating functions of discrete surfaces. Other assumptions are possible, but we do not consider them in this paper. In other words, we consider only the perturbative part of the matrix integral, and the filling fractions are fixed because the jumps between different cuts are non-perturbative corrections in $\hbar$. In particular, this imposes the restrictions $\frac{\partial S_i}{\partial V(p)}=0$, which imply that, for k + l > 0,
$$\oint_{A_i}W_{k,l}(p)\,\frac{dp}{2\pi i}=0.$$
In addition, we impose another assumption. The zeroes $b_j$ of M(p) are called double points. In some sense, they can be considered as cuts of vanishing size. We require that these degenerate cuts contain no eigenvalues to any order in the $\hbar$ expansion. We therefore demand
$$\oint_{C}W_{k,l}(p)\,\frac{dp}{2\pi i}=0$$
for any contour C which encircles a zero of M.
This assumption, together with loop equation (15), suffices for proving that, for every k, l, s except (k, l, s) = (0, 0, 1) and (k, l, s) = (0, 0, 2), the function $W_{k,l}(p_1,\dots,p_s)$ has singularities on the physical sheet only at the branch points $\mu_\alpha$. In particular, it has no singularities at the double points on the physical sheet.
Calculating resolvents. Diagrammatic technique
In this section, we derive the diagrammatic technique for model (1), which is a generalization of the technique of [16], [7]. Our main goal is to invert loop equation (12) to obtain the expression for $W_k(p)$ for any k ≥ 1/2.
A piece of Riemann geometry
The main notion is again the Bergmann kernel, which is the unique bi-differential on a Riemann surface $\Sigma_g$ that is symmetric in its arguments $P,Q\in\Sigma_g$ and has its only singularity (a double pole) at coinciding arguments, where, in any local coordinate τ, it has the behavior (see [24], [23])
$$B(P,Q)\underset{P\to Q}{=}\left(\frac{1}{(\tau(P)-\tau(Q))^2}+\frac{S_B(P)}{6}+o(1)\right)d\tau(P)\,d\tau(Q),$$
with $S_B(P)$ the Bergmann projective connection associated to the local coordinate τ. We fix the normalization by demanding that all the integrals of B(P,Q) over A-cycles vanish:
$$\oint_{A_i}B(P,Q)=0.$$
We then have the standard Rauch variational formulas relating B(P,Q) to other objects on a (general, not necessarily hyperelliptic) Riemann surface, where $\mu_\alpha$ is any simple branching point of the complex structure; by definition, the local coordinate in the vicinity of $\mu_\alpha$ is $\tau=\sqrt{p-\mu_\alpha}$, and the $dw_i(P)$ are the canonically normalized holomorphic differentials,
$$\oint_{A_j}dw_i=\delta_{ij}.$$
Besides these formulas, we need another, rather obvious, relation: for any meromorphic function f on the curve,
$$df(P)=\frac{1}{2\pi i}\oint_{C_P}f(\xi)\,B(P,\xi),$$
where the contour $C_P$ encircles the point P only, and not the poles of f.
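As a standard genus-zero illustration (an aside, not taken from the paper), the Bergmann kernel of a one-cut curve can be written explicitly in a rational parametrization:

```latex
% Genus-zero illustration (standard fact, stated here as an aside):
% the one-cut curve \tilde y^2 = (p-a)(p-b) is rationalized by
%   p(z) = \frac{a+b}{2} + \frac{a-b}{4}\left(z + \frac{1}{z}\right),
% the two sheets being exchanged by z \mapsto 1/z (so \bar p = p(1/z)).
% In this coordinate the Bergmann kernel is simply
%   B(z_1, z_2) = \frac{dz_1\, dz_2}{(z_1 - z_2)^2},
% which is symmetric, has the required double pole at z_1 = z_2 with unit
% quadratic residue, and needs no A-cycle normalization at genus zero.
```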
We also introduce the 1-form $dE_{Q,q_0}(P)$, which is the primitive of B(P,Q):
$$dE_{Q,q_0}(P)=\int_{q_0}^{Q}B(P,\xi).$$
Then, obviously, $B(P,Q)=\bigl(\partial_Q\,dE_{Q,q_0}(P)\bigr)\,dQ$. We can now express the 2-point resolvent $W_0(p,q)$ in terms of B(P,Q). We let p and $\bar p$ denote the complex coordinates of points on the respective physical and unphysical sheets. Then,
$$dp\,dq\,\frac{\partial V'(p)}{\partial V(q)}=-B(p,q)-B(p,\bar q)=-\frac{dp\,dq}{(p-q)^2},$$
since the middle expression has double poles with unit quadratic residues at p = q and $p=\bar q$. The 2-point resolvent (21) is nonsingular at coinciding points; therefore,
$$dp\,dq\,\frac{\partial y(p)}{\partial V(q)}=\frac{B(p,\bar q)-B(p,q)}{2}\qquad\text{and}\qquad W_0(p,q)=-\frac{B(p,\bar q)}{dp\,dq}.$$
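The identity $W_0(p,q)=-B(p,\bar q)/(dp\,dq)$ can be verified numerically in the one-cut Gaussian case. The sketch below (illustrative normalization t₀ = 1, not from the paper) compares it with the standard universal 1MM two-point function, using the Zhukovsky-type map p = z + 1/z, under which the sheet exchange is z → 1/z:

```python
import math

# Numerical check (one-cut Gaussian, t0 = 1, illustrative): the planar
# two-point resolvent equals -B(p, qbar)/(dp dq), with the Bergmann kernel
# written in the Zhukovsky variable z(p), p = z + 1/z, and the unphysical
# sheet reached by z -> 1/z. Test points lie to the right of the cut.

t0 = 1.0

def z(p):                      # Zhukovsky map, physical sheet (|z| > 1)
    return 0.5 * (p + math.sqrt(p * p - 4.0 * t0))

def dz(p):                     # dz/dp from p = z + 1/z
    zp = z(p)
    return zp * zp / (zp * zp - 1.0)

def W0_2pt(p, q):              # standard universal 1MM two-point function
    sp = math.sqrt(p * p - 4.0 * t0)
    sq = math.sqrt(q * q - 4.0 * t0)
    return ((p * q - 4.0 * t0) / (sp * sq) - 1.0) / (2.0 * (p - q) ** 2)

def minus_B_pqbar(p, q):       # -B(p, qbar)/(dp dq): qbar has z-coord 1/z(q)
    zq = z(q)
    d_inv_zq = -dz(q) / (zq * zq)          # d(1/z(q))/dq
    return -dz(p) * d_inv_zq / (z(p) - 1.0 / zq) ** 2

for (p, q) in [(3.0, 5.0), (2.5, 4.0), (10.0, 3.0)]:
    assert abs(W0_2pt(p, q) - minus_B_pqbar(p, q)) < 1e-9
```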
3.2 Inverting the operator $\hat K-2W_0(p)$
We can determine the corrections in $\hbar$ iteratively by inverting loop equation (12). All multi-point resolvents of the same order can be obtained from $W_g(p)$ merely by applying the loop insertion operator ∂/∂V(p). In the 1MM case, a natural restriction imposed on the free energy is that all the higher free-energy terms $F_g$ must depend only on the $\mu_\alpha$ and a finite number of the moments $M^{(k)}_\alpha$, which are the (k − 1)th-order derivatives of the polynomial M(p) at the branching points; this allowed no freedom of adding terms depending only on $t_0$ and $S_i$ to $F_g$. In model (1), such a restriction cannot be literally imposed, as we demonstrate below, and we instead take as a defining relation in the β-model a condition that was a consequence of locality in the 1MM.
The first step is to find the inverse of the operator $\hat K-2W_0(p)$. It was found in [16] that if f(p) is a function whose only singularities on the physical sheet are cuts along D, which vanishes at ∞ like $O(1/p^2)$ on the physical sheet, and which has vanishing A-cycle integrals, then, with $dE_{q,\bar q}(p)=dE_{q,q_0}(p)-dE_{\bar q,q_0}(p)$, the inversion formula holds, where $y(q)=\frac{V'(q)}{2}-W_0(q)$ is the function introduced in (21), and the integration contour lies on the physical sheet. Indeed, in the standard notation for the projections $[Q(q)]_\pm$ that extract the respective polynomial and decaying parts of Q(q) as q → ∞, we have
$$\bigl(\hat K-2W_0(q)\bigr).f(q)=2[y(q)f(q)]_-=2y(q)f(q)-P(q),$$
where $P(q)=2[y(q)f(q)]_+$ is a polynomial of degree deg V′ − 2. Let us compute the part of the integral involving the polynomial P(q). The first equality is obtained by renaming the variable $q\to\bar q$ and taking into account that $P(\bar q)=P(q)$ and $y(\bar q)=-y(q)$. The second equality comes from deforming the contour $\bar C_D$ to $-C_D$, picking up residues at the branch points. The third equality holds because P(q) is a polynomial and thus has no singularities at the branch points, whereas the zeros of y(q) are canceled by those of $dE_{q,\bar q}(p)$, so the residues vanish.
Therefore we obtain a chain of equalities, which we now justify. The second equality holds because $dE_{\bar q,q_0}(p)$ has no singularity at q → p, and we can push the integration contour for q to infinity (on the physical sheet), which gives zero. In the third equality, the contour $C'_D$ is a contour which encloses the A-cycles and D. When we cross the cycle $A_i$, $dE_{q,q_0}(p)$ jumps by the corresponding holomorphic differential $dw_i(p)$. The fourth equality holds because $dw_i(p)$ is independent of q, whereas the integral of f(q) along any A-cycle vanishes by our assumption (29) (this is nothing but the Riemann bilinear identity). Then the contour $C'_D$ is deformed, on the physical sheet, into a contour which encloses only the point p (recall that we have assumed that f(q) has no singularities on the physical sheet and vanishes like $O(1/q^2)$ at ∞), and the final result comes from the fact that dE has a simple pole (see Eq. (38)).
This proves that, for all functions of the type $W_{k,l}$, the above integration acts as the inverse of $\hat K-2W_0(q)$.
Notice that if f(q) is a function which has poles only at the branch points $\mu_\alpha$ or at the double points $b_j$, then by moving the integration contours we obtain an analogous residue formula. This relation provided the basis for the diagrammatic representation of the resolvents in the 1MM [16], and we show that it works in the case of the β-model as well. Let us represent the form $dE_{q,\bar q}(p)$ as a vector directed from p to q, the three-point vertex as a dot at which we assume the integration $\oint_{C_D^{(q)}}\frac{dq}{2\pi i}\,\frac{1}{2y(q)}$, and the Bergmann 2-form B(p,q) as a nonarrowed edge connecting the points p and q.
Let us also introduce a new propagator, $dp\,dy(q)$, denoted by the dashed line.
The graphic representation for a solution of (12) then looks as follows. We represent the multiresolvent $W_{g'}(p_1,\dots,p_s)$ as a block with s external legs carrying the index g′. We also represent the derivative $\frac{\partial}{\partial p_1}W_{g'}(p_1,\dots,p_s)$ as a block with s + 1 external legs, one of which is the dashed leg that starts at the same vertex as $p_1$. That is, we obtain (cf. [16]) a graphical relation which provides the diagrammatic representation for $W_k(p_1,\dots,p_s)$.
Recall the diagrammatic formulation of the 1MM (γ = 0). There the multiresolvent $W_{k,0}(p_1,\dots,p_s)$ can be represented as a finite sum of all possible connected graphs with k loops and s external legs and with only three-valent internal vertices (the total number of edges is then 2s + 3k − 3, and we assume s ≥ 1 for k ≥ 1 and s ≥ 3 for k = 0), such that in each graph we single out a maximal rooted tree subgraph with all arrows directed from the root. This subtree comprises exactly 2k + s − 2 arrowed edges. We then choose one of the external legs, say $p_1$ (the choice is arbitrary due to the symmetry of $W_{k,0}(p_1,\dots,p_s)$), to be the root vertex the tree starts with; for each three-valent vertex there must exist exactly one incoming edge of the tree subgraph. All external edges (except the root edge) are lines corresponding to B(p,q) and are therefore nonarrowed. This subtree therefore establishes a partial ordering of the vertices: we say that vertex A precedes vertex B if there exists a directed path in the subtree from A to B. Internal nonarrowed edges are again B(r,q), but we allow only those nonarrowed edges for which the endpoints r and q are comparable. If r = q, then, for the tadpole subgraph, we set $B(r,\bar r)$, where $\bar r$ is the point on the other, nonphysical sheet. At each internal vertex, denoted by •, we have the integration $\oint_{C_D^{(q)}}\frac{dq}{2\pi i}\,\frac{1}{2y(q)}$, while the arrangement of the integration contours at different internal vertices is prescribed by the arrowed subtree: the closer a vertex is to the root, the more outer is the integration contour.
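The edge and vertex counts quoted above follow from the trivalent valence relation 3V = 2I + s together with the loop-number formula; a small Python bookkeeping sketch (illustrative arithmetic only):

```python
# Bookkeeping sketch for connected trivalent graphs with k loops and
# s external legs, as in the 1MM diagram counting above.
# From 3V = 2I + s (valence count, I = internal edges) and the loop
# number k = I - V + 1, one gets V = 2k + s - 2 internal vertices
# (one arrowed tree edge per internal vertex, i.e. 2k + s - 2 of them)
# and I + s = 2s + 3k - 3 total edges.

def counts(k, s):
    V = 2 * k + s - 2          # internal (three-valent) vertices
    I = 3 * k + s - 3          # internal edges
    assert 3 * V == 2 * I + s  # trivalent valence count
    assert k == I - V + 1      # loop number (first Betti number)
    return V, I + s            # (vertices, total edges)

assert counts(0, 3) == (1, 3)   # planar 3-point function: one vertex
assert counts(1, 1) == (1, 2)   # genus-1 one-point function (tadpole)
assert counts(2, 1) == (3, 5)   # 2s + 3k - 3 = 5 edges
```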
Acting by spatial derivative
As a warm-up example, let us consider the action of the spatial derivative ∂/∂p₁ on $W_{k,0}(p_1,\dots,p_s)$. We place the starting point of the (fictitious) dashed directed edge at the same point $p_1$ and associate just the differential $dp_1$ with this starting point. Recall that the first object (on which the derivative actually acts) is $dE_{\eta,\bar\eta}(p_1)$; then comes the vertex with the integration $\frac{1}{2\pi i}\oint_{C_D^{(\eta)}}\frac{d\eta}{y(\eta)}$, and then the rest of the diagram, which we denote as F(η). We can present the action of the derivative via the contour integral around $p_1$ with the kernel $B(p_1,\xi)$, where $p_1$ lies outside the integration contour for η. The integral over ξ is nonsingular at infinity, so we can deform the integration contour from $C_{p_1}$ to $C_D^{(\xi)}$. We now push the contour for ξ through the contour for η, picking up residues at the poles in ξ, at ξ = η, $\xi=\bar\eta$, and at the branch points. In the resulting expression, the first integral has only simple poles at the branching points, and the residues can be computed by the l'Hôpital rule. Pushing the contour of integration for ξ around the branch points back through the contour for η, we pick up residues at ξ = η and $\xi=\bar\eta$, which both give the same contribution. We evaluate the last two terms by parts, after which a single term remains. We have thus found the action of the derivative, and we can present it graphically as relation (55). (Here, as in [7], comparison of contours is equivalent to their inside/outside ordering.) We therefore see that, using relation (55), we can push the differentiation along the arrowed edges of a graph. It remains to determine the action of the derivative on the internal nonarrowed edges. For such an edge (since it has two ends), together with the term containing the derivative from one side we necessarily obtain the term with the derivative from the other side; combining these terms, we obtain a contour integral in ξ, and we can deform this contour into a sum of contours around the branching points only (a sum of residues).
Then we can again introduce $y'(\xi)\,d\xi/y(\xi)$ and integrate out one of the Bergmann kernels (the one adjacent to the point q if p > q, or to p if q > p; recall that, by assumption, the points p and q must be comparable).
That is, we arrive at relation (57). It becomes clear from the above that we must refine the diagrammatic technique of the β-model in comparison with the 1MM by including the dashed lines (we do not call them propagators, as they have a rather fictitious meaning); treated as propagators, they nevertheless ensure the proper combinatorics of the diagrams. Indeed, from (55) and (57) it follows that the derivative action on the "beginning" of the dashed line is null, $\partial_p\,dp=0$, and when this derivative acts on the "end" of this line, we merely obtain a second derivative of y, which we denote symbolically as two dashed propagators ending at the same vertex. If we continue to act by derivatives ∂/∂q, then, obviously, when k dashed propagators terminate at the same vertex, we have the kth-order derivative $y^{(k)}(q)$ corresponding to them. We then have three variants of incorporating dashed lines into the play. Recall that the dashed lines, like the solid nonarrowed lines, can connect only comparable vertices (possibly the same vertex), and the starting vertex must necessarily precede the terminating vertex (they may coincide).
The first case is where we have two adjacent solid lines and exactly one outgoing dashed line. Then no incoming dashed lines or extra outgoing dashed lines are possible, and both solid lines must be arrowed: one pointed inward and the other outward. We also distinguish this case by labeling the corresponding vertex with a white dot. The second case is where we have two adjacent solid lines (one of them is necessarily an incoming directed line, but the other can either be directed outward or be a nonarrowed internal or external line), no outgoing dashed line, and k ≥ 1 incoming dashed lines (this means that we must have at least k white vertices preceding this vertex in the total graph). In this case, we denote the corresponding vertex by a solid dot. The last case is when we have just one (incoming arrowed) solid line adjacent to the vertex. We then have exactly one outgoing dashed line and k + 1 incoming dashed lines (k ≥ 0). In this case, we also denote the vertex by a solid dot. External lines of $W_{k,l}(p_1,\dots,p_s)$ are either the root vertex $p_1$ or nonarrowed propagators $B(q,p_i)$; no external dashed lines are possible.
Our general rule for assigning white and black colors to vertices is as follows: if there are no factors $y^{(k)}(q)$ standing at the vertex, it is white; if there are such factors, the vertex is painted black.
Acting by the loop insertion operator
We now extend our diagrammatic technique by incorporating the action of loop insertion operator (7) on its elements. The action of the loop insertion operator on the Bergmann differential and its primitive was presented in [7], so we do not describe it here. We can graphically present the action of ∂/∂V(r) by relation (61), where in the first case we must also take into account the variation of the y(p) factor in the denominator of the integration measure in p at the right vertex (irrespective of whether it is white or black), and the proper contour ordering is assumed. All the appearing vertices are white, as they do not contain additional factors of the type $y^{(k)}(\xi)$. In the second case, it is our choice on which of the edges to set the arrow. Recall, however, that the points P and Q were already ordered, as prescribed by the diagram technique. That is, if "P > Q", we must choose the first variant, and if "Q > P", the second variant of the arrow arrangement, in order to preserve this prescription.
We now calculate the action of ∂/∂V(r) on the dashed propagator. Obviously, it produces derivatives of the Bergmann kernel, $\frac{\partial^k}{\partial q^k}B(r,q)$, and any attempt to simplify this expression or to reduce it to a combination of previously introduced diagrammatic elements fails. This means that we must consider it a new element of the diagrammatic technique. With a slight abuse of notation, we visualize it by preserving the k dashed arrows still landing at the vertex "q", with an added nonarrowed solid line (third or second, depending on whether this vertex was of the second (59) or third (60) kind) corresponding to the propagator B(r,q). The vertex then changes its coloring from black to white, because it no longer contains $y^{(k)}(q)$ factors. We represent these vertices graphically as (63) and (64) for the respective cases (59) and (60). Recall that here necessarily r > q.
The reason why we prefer to keep arrowed propagators entering "white" vertices will become clear after the next step, when we consider the subsequent application of the spatial derivative ∂/∂q to the new object $\frac{\partial^k}{\partial q^k}B(r,q)$. Recall that the very application of this operator assumes that somewhere among the preceding vertices we have a vertex of type (58), i.e., a vertex of the new white type. Then, successively applying the derivative while moving up from the root along the branches of the tree subgraph, we have two possibilities. The first case is where the vertex "r" is an external vertex of the graph. Then, when we reach the vertex "q", we just put the extra derivative on the corresponding Bergmann kernel, which corresponds to adding an extra incoming dashed line either to case (63) or to case (64), depending on the type of the "white" vertex. The second and most involved case occurs when the vertex "r" is an internal vertex of the graph. Then the derivative must act successively on both its ends, and we want to express the action on the end "r" in terms of the diagram technique constructed above.
For this, we use relation (65), and it remains to act by the k derivatives ∂/∂q using the rules formulated in (55). We then produce a number of diagrams, but all of them are of one of the types indicated above. It becomes clear why we prefer to preserve the incoming dashed lines in notations (63) and (64): when applying the derivatives ∂/∂q in the r.h.s. of (65), the part of these dashed lines that does not act on the last nonarrowed propagator B(r,ξ) appears again as the derivatives $y^{(i)}(\chi_j)$ at intermediate vertices "$\chi_j$" (ξ < $\chi_j$ < q < r).
The very last step in constructing the diagram technique is to consider the action of the loop insertion operator on $\frac{\partial^k}{\partial q^k}B(q,r)$. Applying now relation (61), we observe the appearance of the last remaining structure of the diagram technique: the ("white") vertex "ξ" at which a number s ≤ k of dashed lines terminate and which has two incident nonarrowed solid lines corresponding to the propagators B(ξ,p) and B(ξ,r). This vertex generalizes vertex (63) in the sense that the action of the derivatives must now be distributed among these two nonarrowed lines (see (68)). In this case, however, as soon as k > 0, the derivatives act on both ends of the propagator $B(\xi,\bar\xi)$, and we can use (65) to distribute them. Therefore, such a diagram (with a closed loop of the B-propagator) enters the diagram technique only in the case k = 0.
We are now ready to present the complete diagram technique for multiresolvents of the β-model.
Feynman diagram rules
We therefore have the following components of the diagrammatic technique. We absorb all the factors related to the Bergmann kernels B(p,q) and $dE_{q,\bar q}(p)$, and all the factors with y(ξ) and its derivatives, into vertices in accordance with the following rule: we associate with a vertex the solid arrowed propagator that terminates at this vertex (recall that there is exactly one such propagator) and associate the nonarrowed propagator B(p,q), or its derivatives, with the vertex that corresponds to the minimal variable among p and q. The ordering of vertices is implied from left to right (as will be assumed in most appearances below).
The vertices with three adjacent solid lines are listed below. Here, by construction, ξ < r and ξ < p, or r and/or p can be external vertices. If q here is an external vertex, then k = 0, and r and p must also be external vertices. The vertex q can be external.
Here r can be an external vertex. If q is an external vertex, then k = 0 and r is also an external vertex.
Here q can be an external vertex (and r is not, by assumption).
Here q can be an external vertex.
Here, by construction, q < r, and r can be an external vertex, while p cannot be an external vertex.
Here p cannot be an external vertex.
Here always q < r, or r is an external vertex. If p is an external vertex, then k = 0 and r is also an external vertex.
Here p can be an external vertex.
If p here is external, then k = 0.
When calculating $W_g(p_1,\dots,p_s)$, we take the sum over all possible graphs with one external leg $dE_{q,\bar q}(p_1)$ and all other external legs $B(\xi_j,p_i)$ and, possibly, their derivatives w.r.t. the internal variables $\xi_j$, in accordance with the above rules, such that the arrowed propagators constitute a maximal directed tree subgraph with the root at $p_1$, the bold nonarrowed lines connect only comparable vertices (and may carry derivatives only at their inner endpoints), and the arrowed dashed lines also connect only comparable vertices (the arrow is then directed along the arrows of the tree subgraph). The diagrams enter the sum with the standard symmetry coefficients.
In full analogy with the 1MM case, the order of the integration contours is prescribed by the order of the vertices in the subtree: the closer a vertex is to the root, the more outer is the integration contour. In contrast to the 1MM case, the integration cannot be reduced to taking residues at the branching points only; all internal integrations can nevertheless be reduced to sums of residues, but these sums may now include residues at the zeros of the additional polynomial M(p) on the nonphysical sheet and, possibly, at the point $\infty_-$.
In complete analogy with the 1MM case, in the next section, we use the H-operator introduced in [7] in order to invert the action of the loop insertion operator and obtain the expression for the free energy itself.
The H-operator
We now use the operator H that is, in a sense, inverse to loop insertion operator (7); it was introduced in [7] in the 1MM case. For the arrangement of the integration contours, see Fig. 1. We now calculate the action of H on the Bergmann bidifferential B(x,q), using again the Riemann bilinear identities. We first note that $B(x,q)=\bigl(\partial_x\,dE_{x,q_0}(q)\bigr)\,dx$, and we can evaluate the residues at the infinities by parts. Then, since $dE_{x,q_0}(q)$ is regular at the infinities, for V′(x) we substitute $2y(x)+2t_0/x$ as $x\to\infty_+$ and $-2y(x)+2t_0/x$ as $x\to\infty_-$, whence the cancelation of the terms containing $t_0$ is obvious, and it remains to take the combination of the residues at the infinities involving y(x). For this, we cut the surface along the A- and B-cycles, taking into account the residue at x = q. The boundary integrals on the two sides of the cut at $B_i$ then differ by $dE_{x,q_0}(q)-dE_{x+A_i,q_0}(q)=0$, while the integrals on the two sides of the cut at $A_i$ differ by a period term, and we obtain for the boundary term an expression which exactly cancels the last term in (79). Only the contribution from the pole at x = q remains, which is just −y(q). We have therefore proved relation (80). Let us now consider the action of H on $W_{k,l}(\cdot)$, subsequently evaluating the action of loop insertion operator (7) on the result. Note first that the only result of the action of ∂/∂V(p) on the operator H itself are the derivatives $\partial V(x)/\partial V(p)=-1/(p-x)$ (recall that, by definition, |p| > |x|); i.e., instead of evaluating residues at the infinities one should take residues at x = p, and we obtain
$$\frac{\partial}{\partial V(p)}\bigl(H\cdot W_{k,l}(\cdot)\bigr)=W_{k,l}(p)+H\cdot W_{k,l}(\cdot,p).$$

Figure 1: The arrangement of integration contours on the Riemann surface.
For the second term, due to the symmetry of W k,l (p, q), we may choose the point p to be the root of the tree subgraphs. Then, the operator H always acts on B(·, ξ) (or, possibly, on its derivatives w.r.t. ξ) where ξ are integration variables of internal vertices.
Let us recall the action of ∂/∂V (q) on the elements of the Feynman diagram technique in Sec. 4. Here we have three different cases.
• When acting on the arrowed propagator followed by a (white or black) vertex, we use the first relation in (61).
• When acting on a nonarrowed internal propagator ∂^k/∂q^k B(p, q), p ≥ q, k ≥ 0, we apply relation (66) without subsequently representing the action of the derivative ∂^k/∂q^k as a sum of diagrams. We have no external B-lines, as we act on the one-loop resolvent.
• Eventually, when acting on dashed lines coming to a black vertex, using relation (62) we obtain the expression in (63); the action on dashed lines coming to a white vertex is null.
We now consider the inverse action of the H-operator in all three cases.
In the first case, where there exists an outgoing arrowed propagator dE_{p,q_0}(ξ) (we can have only one such arrowed propagator, as one line is external), we can push the integration contour for ξ through the one for p; the only contribution comes from the pole at ξ = p (with the opposite sign, due to the choice of contour directions in Fig. 1). We then obtain the following graphical representation for the action of the operator H in the first case:

In the second case, the vertex ξ in (66) is an innermost vertex (i.e., there are no arrowed edges coming out of it). The 1-form y(ξ)dξ arising under the action of H (80) cancels the corresponding form in the integrand, and the residue vanishes because the resulting expression is nonsingular at the branching point. Graphically, we have

Eventually, in the third case, the inversion is rather easy to perform. Indeed, the action of the H-operator just erases the new B-propagator arising in (63), simultaneously changing the color of the vertex back from white to black:

For H_q · W_k(q, p) = H_q · ∂/∂V(q) W_k(p), we obtain that for each arrowed edge, on which the action of (7) produces a new (white) vertex, the inverse action of H_q gives the factor −1; on each nonarrowed edge, on which the action of (7) produces a new vertex according to (66), the inverse action of H_q just gives zero; and at each black vertex, at which the action of (7) changes the color to white and adds a new B-propagator, the inverse action of H_q gives the factor +1.
As the total number of arrowed edges coincides with the total number of vertices, and the contributions of black vertices are opposite to the contributions of arrowed edges, the total factor by which the diagram is multiplied is exactly minus the number of white vertices, which is 2k + l − 1 for any graph contributing to W_{k,l}(p). We then have and, combining with (81), we just obtain and, since all the dependence on the filling fractions and t_0 is fixed by condition (42), we conclude that

This is our final answer for the free energy. It permits us to calculate all F_{k,l} except the contribution at k = 1, l = 0 (the torus approximation in the 1MM) and the second-order correction in γ (the term F_{0,2}). The term F_{1,0} was calculated by a direct integration in [6]. We devote a special section below to the calculation of the term F_{0,2}. All other orders can be consistently calculated. For this, we only introduce the new vertex • at which we place the nonlocal integral term ∮_C dξ/(2πi) (∫_{ξ̄}^{ξ} y(s) ds)/y(ξ). In the 1MM case, although this term was also nonlocal, it was possible to shift the starting point to the branching point μ_α in the vicinity of every branching point; here this is no longer the case and we must consider global integrations. Note, however, that it is only for the very last integration that we must introduce nonlocal terms; all internal integrations can be performed by taking residues at the branching points and at the zeros of the polynomial M(p). Examples of diagrams are collected in the next section.
Low-order corrections
We begin with presenting several low-order corrections to the free energy.
The next two diagrams cannot be presented in the free-energy form, so we present the expressions for the one-loop resolvents: We also present the first two "regular" terms of the free-energy expansion: The free-energy diagrammatic terms corresponding to resolvents (88) and (89) vanish (for (88), see [7]). The free-energy term F_{1,0} is the subleading correction in the 1MM calculated in [6]. It therefore remains only to calculate (89), subsequently integrating it to obtain the corresponding free-energy contribution F_{0,2}.
Calculating F_{0,2}

We begin by demonstrating that the diagrammatic expression for F_{0,2} of the form vanishes. For this, we observe that the first two terms are and we can integrate by parts to set the derivative w.r.t. q on the exceptional vertex. To do this, however, we must first interchange the order of the contour integrations (as we have the integral of y inside the outer integration in q). This interchanging yields the additional term and, upon integrating by parts, we have ∂/∂q [(∫_{q̄}^{q} y)/y(q)] = 2 − y′(q) (∫_{q̄}^{q} y)/y²(q).
The constant part does not contribute, while for the second part we introduce the notation y′(q) (∫_{q̄}^{q} y)/y²(q), where we now assume that the order of appearance of vertices (from left to right) implies the contour ordering for the corresponding integrations.
Concerning the third term: if we collapse the outer integration (the one containing the integral-of-y term) onto the support D, it gives zero upon integration because it contains no singularities, and we are left with two terms appearing when passing through the two other integration contours. Both these terms are of the same form as (93), except that they both come with factors −1 and they differ by the integration order. Therefore, combining with (93), which can be evaluated by taking the residue at q = ξ (doubled because of the two residues on the two sheets), we obtain the integral over D with the integrand 2y · (y′)² · y⁻³. Together with the first residue term (92), it gives 2 ∮ dq/(2πi) (∫_{q̄}^{q} y)/y(q) over a contour encircling D and, integrating by parts, we just obtain 2 ∮ dq/(2πi) y′(q)/y(q) = 2n, i.e., a constant independent of the potential.
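The closing identity, a contour integral of y′(q)/y(q) counting the zeros of y regardless of the potential, is just the argument principle. A quick numerical illustration follows; the cubic y below is a hypothetical stand-in, not the model's actual y.

```python
import cmath

def count_zeros(f, df, radius=2.0, n_steps=2000):
    """Evaluate (1/2πi) ∮ f'(q)/f(q) dq over the circle |q| = radius with the
    midpoint rule in the angle; by the argument principle this counts the
    zeros of f inside the contour, independently of the details of f."""
    dt = 2 * cmath.pi / n_steps
    total = 0j
    for j in range(n_steps):
        t = (j + 0.5) * dt
        q = radius * cmath.exp(1j * t)
        total += df(q) / f(q) * 1j * q * dt   # dq = i q dt on the circle
    return total / (2j * cmath.pi)

# hypothetical y with three simple zeros inside |q| = 2: at 1, -1 and 0.5
y  = lambda q: (q - 1) * (q + 1) * (q - 0.5)
dy = lambda q: (q + 1) * (q - 0.5) + (q - 1) * (q - 0.5) + (q - 1) * (q + 1)

n_zeros = count_zeros(y, dy)   # close to 3, whatever the coefficients of y
```

Changing the polynomial (while keeping its zeros inside the contour) leaves the result unchanged, which is the sense in which the term above is "independent of the potential".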
We can now make a guess for the actual F_{0,2}. It seems very plausible to expect it to be of Polyakov's anomaly form ∫ R (1/∆) R, where R is the curvature and 1/∆ is the Green's function of the Laplace operator, which in our case is the logarithm of the prime form. The curvature is expressed through the function y as R ∼ y′/y. That is, we have two natural candidates for F_{0,2}: where E is the prime form, or F_{0,2} ∼ ∮ dq ∮ dp log y(q) B(p, q) log y(p), but neither of these expressions is well defined. The first one develops a logarithmic cut at p = q and cannot be written in a contour-independent way; moreover, both expressions are divergent when integrating along the support. We therefore must find another representation imitating this term. A good choice is to integrate by parts only once; we then have an expression of the form (y′(q)/y(q)) ∮_D dE_{q,q̄}(p) log y(p) dp, where the second integral is taken along just the eigenvalue support D.
Variation of the logarithmic term in (94) can already be presented in the form of a contour integral. In what follows, we also systematically use that

The action of ∂/∂V(r) on the "outer" combination y′(q)/y(q) gives ∂_q B(r, q)/y(q), which, upon integration by parts, gives and in the second term we recognize the first two diagrams of (89) (with factors −1).
Action of ∂ ∂V (r) on "inner" log y(p) gives B(r, p)/y(p), and in order to correspond to our diagrammatic technique we must interchange the order of integration w.r.t. q and p. And for the last action of ∂ ∂V (r) on dE p,p (q), we use (61). That is, implying the contour ordering as above (from left to right) and indicating explicitly all the y-factors in the vertices, we graphically obtain for the last two variations: The first line, together with the second term in (95), just give (with the minus sign) the desired term W 0,2 (r) (89), the first term in the second line is canceled by the first term in (95), so the only mismatch is due to the second term in the second line. This term acquires the form where we use the standard notation of [7] (cf. (43)). But this term can be already calculated just by taking residues at the branching points, which gives (for the comprehensive calculation, see [10]) where M (1) α ≡ M (p)| p=µα are the first moments of the potential and ∆(µ) is the Vandermonde determinant in µ α .
Therefore, combining all the terms, we conclude that and we see that, in analogy with the answer obtained by Wiegmann and Zabrodin [30] for the corresponding correction in the normal matrix model case, we have a quantum correction term similar to the one in F_{1,0} (the second term in (97)).
This completes the calculation of the exceptional term F_{0,2}, and we therefore have all terms of the asymptotic expansion of the eigenvalue model (1).
Prediction of Roughness Effects on Wind Turbine Aerodynamics
Wind turbines may suffer power loss after a long operation period due to degradation of the surface structure around the leading edge, caused by accumulation of contaminants and/or by erosion. For understanding the underlying physics and validating the available prediction methods, the aerodynamics of smooth and artificially roughened airfoil profiles are investigated by different modelling procedures. A spectrum of turbulence models is applied, encompassing RANS, URANS and DES. Two approaches are used to model the roughness. In one approach, the roughness is not geometrically resolved, but its effect is modelled via the wall-functions of the turbulence models. In the alternative approach, the roughness structures are geometrically resolved, which necessitates a three-dimensional formulation, whereas the wall-functions based approach may also be applied in two dimensions. Computational results are compared with the experimental results of other authors, and the predictive capability of the different modelling procedures is assessed.
Introduction
Wind energy is increasingly used in power generation from renewable energy resources. Accumulation of contaminants and erosion can occur in a zone near the leading edge of the turbine blade after a rather long operation period, which then causes a degradation of the performance. This can be caused by different environmental parameters, such as dust, rain and hail. Beyond the existence of contaminant and/or erosive material in the atmosphere, the degree of contamination and/or erosion is dependent on temperature, humidity and, of course, on the wind speed.
Contamination and erosion effects can cause substantial performance loss. It was reported by Corten and Veldkamp [1] that accumulated insect debris around the leading edge of wind turbine blades was the main cause of the approximately 25% power loss observed on wind farms in California. Khalfallah and Koliub [2] analyzed the effect of dust accumulation at the leading edge of a 300 kW stall-regulated Horizontal Axis Wind Turbine in a dusty region in Egypt and found that a power loss of about 50% can occur in nine months. Performance degradation of wind turbines can also be caused by ice accretion [3], due to similar reasons, which is also a serious problem for aircraft flight.
Surface roughness has a rather complex influence on airfoil aerodynamics, since it can also affect the transition behavior. For a smooth surface, boundary layer transition usually occurs with the involvement of Tollmien-Schlichting waves as the primary instabilities [4]. It is known that surface roughness alters the transition behavior due to changes in the underlying mechanism and can also trigger an earlier transition, the modes of which are strongly dependent on the type of roughness. Due to its significance, the effect of surface roughness has been investigated experimentally and computationally over several decades.
Experimental investigations on the performance degradation of airfoils with leading edge roughness were provided by Zhang et al. [5]. Further experimental studies were presented by Ehrmann et al. [6]. Recent experimental investigations were performed by Sareen et al. [7]. It was estimated [7] that an 80% increase in drag, caused by a rather small degree of leading edge erosion, can result in an approx. 5% loss in annual energy production, and that for an increase in drag of 400-500%, coupled with the loss in lift, as observed for many of the moderate-to-heavy erosion cases, the loss in annual energy production could be as high as 25%.
A computational analysis of the effect of surface roughness on dynamic stall of a wind turbine blade was presented by Noura et al. [8], where the Shear Stress Transport (SST) model of Menter [9,10] was used as the turbulence model, and the roughness was modelled via corresponding wall-functions, assuming an equivalent sand-grain roughness applied over the whole blade surface. The effect of roughness near the trailing edge of airfoils was computationally and experimentally investigated by Dhiliban et al. [11]. The standard k-ε turbulence model [9,12] was used, and the quite large, regular, 2D, sawtooth-like roughness elements were resolved, without applying a roughness model. CFD calculations of the effect of surface roughness on the aerodynamic performance of a turbine blade cascade were presented by Bai et al. [13], where the 4-equation transition model, i.e. the SST-γ-Reθ model (γ: intermittency, Reθ: transition momentum thickness) of Menter [9,14,15], was used. Roughness was modelled via wall-functions assuming equivalent sand-grain roughness, applied uniformly over the whole blade surface. The SST turbulence model was applied by Zhang et al. [16] to study the effect of roughness on a blunt trailing-edge airfoil in two dimensions, assuming sawtooth-like roughness shapes resolved by the grid. Schramm et al. [17] computationally studied erosion effects in 2D using the 3-equation SST-γ model of Menter [9]; the roughness was modelled via wall functions for low erosion, while for high erosion the blade shape was modified. Han et al. [18] applied the SST-γ-Reθ turbulence model to calculate the roughness effects in 2D, modelling the roughness by wall functions.
In the present work, we computationally investigate leading edge roughness effects on airfoil aerodynamics, applying different modelling approaches, and compare the results with the measurements of Zhang et al. [5]. The main purpose is a comprehensive and coherent validation of turbulence and roughness modelling approaches. An important feature distinguishing the present work from previous work is the application of a three-dimensional surface-roughness-resolving approach, which was not investigated within this context before, along with roughness modelling via wall functions, assessing the performance of the latter by comparisons. Furthermore, a wide range of turbulence models is applied, with and without transition modelling, and their performance is assessed.
The test case
The experiments of Zhang et al. [5] are considered, where wind tunnel measurements were performed for the NACA GA(W)-1 airfoil. Different Reynolds numbers (Re) and angles of attack (AoA) were considered. In the present study, we investigate the case with Re=169,000 and AoA=12°. Leading edge roughness was simulated by thin plastic strips with hemispherical roughness elements, which were placed in a region within 5% chord length (c) of the leading edge, in in-line or staggered arrangements; the staggered configuration is considered in the present work. For the roughness height (k), two values were considered: k/c=0.0025 and k/c=0.005.
Modelling
Incompressible flow of air described by the Navier-Stokes equations [19] is computationally modelled within the framework of the general-purpose CFD software ANSYS Fluent 18.0 [9]. No heat transfer effects [20] and constant material properties (ρ: density) are considered. Turbulence is modelled by different approaches. Although different versions of the k-ε model have been successfully applied in various turbomachinery applications [21], the SST model [10] is used in the present work, due to its favorable properties in modelling the near-wall flow, which have also been observed in different applications [22]. In applying SST, a production limiter is applied along with the Kato-Launder model [9] to prevent excessive turbulence energy generation near the stagnation point, which is typical for general two-equation models. Curvature correction [9] is always applied, although it was observed, in several comparisons, not to have a significant effect. The effect of the low Reynolds number (LowRe) corrections within the framework of the SST model was also investigated. Additionally, the 4-equation SST-γ-Reθ transitional turbulence model [9], as well as the 3-equation SST-γ transitional model [9], are applied. Besides the surface-resolving approach, which captures the real roughness geometry, the wall-functions based roughness modelling approach [9] is also used, with different methods [23] for estimating the equivalent sand-grain roughness.
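As a sketch of how such roughness wall-functions act: the equivalent sand-grain roughness enters the logarithmic law of the wall as a downward shift ΔB of the velocity profile. The Cebeci-Bradshaw-type form below is illustrative only; the constants and the exact blending used by a particular solver may differ.

```python
import math

KAPPA, E_WALL = 0.41, 9.793     # typical log-law constants in wall functions

def roughness_shift(ks_plus, cs=0.5):
    """Downward shift DeltaB of the log law for an equivalent sand-grain
    roughness ks+ (Cebeci-Bradshaw-type form; cs is an empirical constant)."""
    return math.log(1.0 + cs * ks_plus) / KAPPA

def u_plus(y_plus, ks_plus=0.0, cs=0.5):
    """Log-law velocity profile, shifted downward for a rough wall."""
    return math.log(E_WALL * y_plus) / KAPPA - roughness_shift(ks_plus, cs)

smooth = u_plus(100.0)                  # smooth-wall log law at y+ = 100
rough = u_plus(100.0, ks_plus=50.0)     # same y+, rough wall -> lower u+
```

The larger ks+, the larger the velocity deficit at a given y+, which is how the unresolved roughness increases the predicted skin friction.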
For the velocity-pressure coupling, the SIMPLEC [9] scheme is used. In unsteady calculations, a second-order accurate bounded backward differencing scheme is used for time discretization [9], choosing time step-sizes to ensure cell Courant numbers smaller than unity. For the discretization of convective terms, the second-order accurate upwind scheme [9] is used for all variables, with the exception of the momentum equations within the framework of DES, where a bounded central differencing scheme is used.
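The Courant-number requirement translates directly into a time-step bound; a minimal sketch, with hypothetical cell size and velocity scale (the actual values are grid and case dependent):

```python
def max_time_step(cell_size, velocity, cfl_target=1.0):
    """Largest time step keeping the advective Courant number
    u * dt / dx at or below cfl_target."""
    return cfl_target * cell_size / velocity

# hypothetical near-wall cell of 1e-4 m crossed at a 10 m/s velocity scale
dt = max_time_step(1e-4, 10.0, cfl_target=0.9)
```

In practice the minimum of this bound over all cells (smallest cell, largest local velocity) dictates the global time step.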
Solution domain and boundary conditions
A 2D view of the solution domain is presented in Figure 1, with indication of the boundary types. At the inlet, uniform profiles are applied for the velocity components and turbulence quantities of the uni-directional flow (U: inlet velocity). Assuming a low turbulence level at the inlet, a turbulence intensity of 0.5% and a turbulent-to-laminar viscosity ratio of 1 are assumed to derive the inlet boundary conditions of the turbulence quantities. For the intermittency, γ=1.0 is applied [9]. At walls (the airfoil), the no-slip conditions apply, whereas at the outlet the static pressure is prescribed along with zero-gradient conditions for all diffusive-convectively transported variables. At the upper and lower boundaries (Fig. 1), symmetry planes are assumed. In 3D, the 2D domain is extruded for a distance of 0.04c. The boundaries occurring in the third dimension are prescribed to be periodic.
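The inlet turbulence quantities follow from the stated intensity and viscosity ratio through the standard k-ω/SST inlet estimates; a sketch (the inlet velocity value below is hypothetical, as U is case dependent):

```python
def inlet_turbulence(u_inf, intensity, visc_ratio, rho=1.225, mu=1.81e-5):
    """Inlet turbulence kinetic energy k and specific dissipation rate omega
    from turbulence intensity I and turbulent-to-laminar viscosity ratio:
    k = 3/2 (U*I)^2,  mu_t = ratio * mu,  omega = rho * k / mu_t."""
    k = 1.5 * (u_inf * intensity) ** 2
    mu_t = visc_ratio * mu
    omega = rho * k / mu_t
    return k, omega

# the paper's inlet settings: I = 0.5%, mu_t/mu = 1; U = 10 m/s is hypothetical
k_in, om_in = inlet_turbulence(10.0, 0.005, 1.0)
```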
Grid generation
In problems with roughness, grid generation is very challenging due to the huge scale disparity, if the geometries of the roughness structures are to be resolved by the computational grid, as is presently the case. In the present work, a grid generation strategy consisting of three steps is applied. As far as possible, the grids are generated as structured/block-structured, arranged as C-type around the airfoil (Fig. 1). In all grids, it is ensured that the nondimensional wall distance y+ is always smaller than unity (roughened regions are an exception while using correlation-based roughness modelling, where a "y+-shift" is applied by the model [9]). The first step is the generation of an adequate grid for two-dimensional flow.
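The y+ < 1 requirement fixes the first-cell height, which can be estimated a priori from a flat-plate skin-friction correlation. The sketch below is illustrative only; the correlation Cf = 0.026 Re^(-1/7), the air properties and the chord value are generic assumptions, and the actual grids are of course checked a posteriori.

```python
import math

def first_cell_height(y_plus, u_inf, length, nu, rho=1.225):
    """Estimate the wall-normal height of the first cell giving a target y+,
    using the flat-plate correlation Cf = 0.026 Re^(-1/7) (a-priori sketch)."""
    re = u_inf * length / nu
    cf = 0.026 * re ** (-1.0 / 7.0)
    tau_w = 0.5 * cf * rho * u_inf ** 2     # wall shear stress estimate
    u_tau = math.sqrt(tau_w / rho)          # friction velocity
    return y_plus * nu / u_tau

# hypothetical setup matching Re = 169,000 with a chord of c = 0.1 m in air
nu_air = 1.5e-5
u = 169000 * nu_air / 0.1          # free-stream speed giving that Re
dy = first_cell_height(1.0, u, 0.1, nu_air)   # first-cell height for y+ = 1
```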
Results of the grid independence study (RANS-SST), showing the variation of the predicted lift coefficient (CL) by different grids relative to the finest grid, are presented in Table 1. One can see that the variations are small, and even the coarsest grid (114,000 nodes) provides sufficient grid independence.
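A grid-independence study like that of Table 1 can be quantified by Richardson extrapolation over three systematically refined grids; the CL values below are made up for illustration and are not the paper's data.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence p from solutions on three grids with
    constant refinement ratio r (Richardson extrapolation)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def extrapolated_value(f_medium, f_fine, r, p):
    """Richardson estimate of the grid-converged value."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

# illustrative CL values on coarse/medium/fine grids refined by r = 2
cl_coarse, cl_medium, cl_fine = 1.520, 1.505, 1.501
p = observed_order(cl_coarse, cl_medium, cl_fine, r=2.0)
cl_converged = extrapolated_value(cl_medium, cl_fine, r=2.0, p=p)
```

The extrapolated value gives an estimate of the discretization error remaining on the finest grid.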
For cases where the surface roughness is to be resolved by the grid, a three-dimensional formulation is necessary. A further reason for a three-dimensional formulation is the unsteady formulation, since three-dimensional flow structures can temporarily emerge, although the time-averaged flow may be two-dimensional.
Keeping the structured grid topology while retaining the resolution on the blade surface, including the third dimension, would result in excessively large grid sizes. For this purpose, as the second step of the grid generation strategy, a locally refined block-structured grid with non-conformal block interfaces is generated, which provides comparable accuracy to the structured grid from the first step. The finest structured grid from the first step (G5, Table 1) and the resulting non-conformal block-structured grid (G-blocks) are displayed in Fig. 2.
The sizes of the grids and the deviation in the predicted lift coefficient are presented in Table 2. The results of the G-blocks grid can be considered sufficiently close to those of G5, indicating the adequacy of the former.
In generating the three-dimensional grid, where the 2D geometry is extruded by a distance of 0.04c in the third direction, the G-blocks grid is taken as the basis and extruded. An exception to the previously described structure is the treatment of a tiny region at the leading edge, where the roughness elements are placed, for the three-dimensional roughness-resolving calculations. This tiny region is discretized by an embedded unstructured block, which corresponds to approx. 300 subdivisions in the third direction on the blade surface. Based on this "base grid", local grid refinements are additionally applied during the calculations, to ensure that the local near-wall y+ values remain smaller than unity. A section of the 3D grid is presented in Fig. 3, where this region can be recognized (in comparison to Fig. 2). The number of nodes of the resulting three-dimensional grids is between 8×10^6 and 9×10^6, depending on the roughness type considered. A detailed view of the surface grid generated (before local grid refinements) for the three-dimensional surface-roughness-resolving approach, for the case k/c=0.0025 with staggered arrangement, is provided in Fig. 4. For the resolution of the third direction, no grid independence study is performed. It is assumed that the surface resolution near the leading edge is sufficient, and the need for an eventual finer resolution downstream is assessed by inspecting the resulting distributions.
Overview of applied modelling approaches
A large number of modelling approaches are investigated; an overview is provided in Table 3.
Field distributions
A general view of the flow pattern is provided in Figure 5, showing the velocity magnitude (u) predicted by SST-S. Near the leading edge, the stagnation region on the pressure side, as well as the acceleration over the nose towards the suction side, can be observed (Fig. 5).
For the smooth blade, the streamlines predicted by different models are compared with the measured ones in Figure 6. One can see that the measurements indicate a separation bubble on the suction side near the leading edge (Fig. 6a). While the SST model (SST-S, Fig. 6b) cannot predict this, the transitional SST models (SST-S-3TR, Fig. 6d; SST-S-4TR, Fig. 6e), as well as the SST model with LowRe correction (SST-S-LR, Fig. 6c), can qualitatively predict the phenomenon.
For the rough blade, the distribution of the wall shear stress (τW) near the leading edge predicted by DES is shown in Figure 7, for an instant of time, for the two roughness heights. It can be seen that the shear stress is distributed quite unevenly over the roughness elements.
Lift, Drag, Pressure Coefficients
For the smooth blade, for Re=169,000 and AoA=12°, the variations of the pressure coefficient (Cp) along the blade surface predicted by the applied two-dimensional modelling approaches are compared with the measured values (EXP) in Figure 8. One can see that all presented predictions show generally quite fair agreement with the measurements. The variations between the models are observed rather on the suction side of the blade, where SST-3TR and SST-4TR show a better agreement with the experiments.
Predicted coefficients of lift (CL) and drag (CD) for the smooth blade are compared with the experimental values [5] (EXP) for Re=169,000 and AoA=12° in Table 4. Percentage deviations of the predictions from the measured values are also indicated in parentheses. One can see that CL is over-predicted by about 18% by the SST. A very good prediction of CL is provided by SST-3TR, which is even better than that of SST-4TR. The 3D, unsteady IDDES modelling provides a quite good CL prediction, which is, however, not as good as that of SST-3TR.
Conclusions
Aerodynamics of smooth and leading-edge-roughened wind turbine blades are calculated by different turbulence modelling approaches. For the smooth blade, SST-based transitional turbulence models, applied in 2D or in 3D within an IDDES formulation, are observed to provide quite good predictions of the lift coefficient. For the rough blade, the 3D, IDDES-based surface-resolving approach is observed to perform remarkably better than the empirical roughness modelling approaches in 2D.
Transgenic Mosquitoes for Malaria Control: From the Bench to the Public Opinion Survey
The recent field releases of genetically modified mosquitoes in, inter alia, The Cayman Islands, Malaysia and Brazil have been the source of intense debate in the specialized press [1,2] as well as in the non-specialized mass media. For the first time in history (to our knowledge), transgenic Aedes aegypti were released in the Cayman Islands in 2010 by a private company, Oxitec, in collaboration with the local Mosquito Research and Control Unit (MRCU) [3]. These were followed by other releases in Malaysia in 2010/11 and then in Brazil in 2011 [4]. While the releases in Malaysia and Brazil were publicised beforehand, the releases in The Cayman Islands were only announced publicly one year after the fact [1,5]. This lack of transparency, not to say secrecy, in the way the first trial was conducted is without much doubt the major reason for the controversy that emerged. Brushing aside years of discussion in the scientific world and a shared recognition of the importance of considering ethical, legal and social issues, this first trial could be read as a fait accompli: the cage of transgenic mosquitoes has now been opened [6]. Oxitec faced harsh criticism for these releases, both within the scientific community and from non-governmental organisations, such as GeneWatch, which accused the company of acting like "a last bastion of colonialism". A vector-borne disease control method has rarely been the subject of such discussion, not even concerning its potential efficacy at reducing the burden associated with the disease.
Introduction
Focusing on malaria control, this chapter reviews the major technological milestones associated with this technique, from its roots to its most recent developments. Key points in the understanding of mosquito ecology will be presented, as well as their use in models whose major aim is to determine the validity of the transgenic approach and to help design successful strategies for disease control. Furthermore, the ethical and social points related to both field trials and wide-scale releases aiming at modifying mosquito populations (and thus controlling vector-borne diseases) will be discussed, as well as the question of public engagement and the role scientists might play in fostering debate and public deliberation. While a large part of the laboratory research is done in the Global North, most of the vector-borne diseases are endemic in the Global South. We suggest that the geopolitics related to genetically modified (GM) mosquitoes, as well as the specificity of Southern contexts, needs to be considered when discussing the application of this technology.
Why act on the vector population: How efficient are transgenic methods for malaria control?
When discussing the epidemiology of malaria, the gold standard is the description of R0 [7][8][9]. Focusing on the vector compartment suggests that the spread of malaria can be curbed either by reducing the mosquito population or by decreasing its vectorial capacity. In other words, one aims either to decrease the number of mosquitoes or to make them less efficient in transmitting the parasites. These two strategies can both be addressed by vector control, including through a transgenic approach: population reduction or population replacement. However, when looking closely at R0, one can notice that the parameters affected by those strategies are not the ones most likely to curb transmission efficiently. The mortality of mosquitoes (µ) and the biting rate (a) affect R0 in an exponential and in a quadratic manner, respectively. In this respect, they are the parameters whose modification affects R0, and consequently the human prevalence, the most (see Box 1). This means that modifying a linear parameter is less likely to lead to a drastic change in malaria epidemiology. For example, halving the vector population density (m) is going to reduce R0 by two, but because of the non-linear relationship between R0 and the human prevalence (y), the decrease of the latter is not going to be affected in such a manner, especially in a context of high transmission.
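These sensitivities can be made concrete with the classical Macdonald expression R0 = m a² b c e^(−µτ) / (r µ), in standard notation; the numerical parameter values below are purely hypothetical.

```python
import math

def r0(m, a, b, c, mu, tau, r):
    """Classical Ross-Macdonald basic reproduction number:
    m  mosquito density per human, a  biting rate, b and c  transmission
    efficiencies, mu  mosquito mortality, tau  extrinsic incubation
    period, r  human recovery rate."""
    return m * a ** 2 * b * c * math.exp(-mu * tau) / (r * mu)

base = dict(m=10.0, a=0.3, b=0.5, c=0.5, mu=0.12, tau=10.0, r=0.01)
r0_base = r0(**base)

# halving the density m scales R0 linearly ...
ratio_m = r0(**{**base, "m": base["m"] / 2}) / r0_base    # -> 0.5
# ... halving the biting rate a acts quadratically ...
ratio_a = r0(**{**base, "a": base["a"] / 2}) / r0_base    # -> 0.25
# ... while doubling mortality mu acts through exp(-mu*tau)/mu,
# i.e. more strongly than linearly
ratio_mu = r0(**{**base, "mu": 2 * base["mu"]}) / r0_base
```

With these (hypothetical) values, doubling µ cuts R0 to about 15% of its baseline, far more than the 50% obtained by halving m, which is exactly the point made above.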
Technology: What has led to GM mosquitoes for malaria control?
The roots of the technology can be traced back to the early 1980s/1990s, when the knowledge gained in Drosophila genetics research sparked the development of new tools in the fight against vector-borne diseases. The plan was straightforward, with three milestones to be achieved in a decade: i) the stable transformation of Anopheles mosquitoes by 2000, ii) the engineering of a mosquito unable to carry malaria parasites by 2005, and iii) the development of controlled experiments to understand how to drive this genotype of interest into wild populations by 2010 [10].
Regarding malaria, most recent research has concentrated on the development of an Anopheles strain able to interrupt transmission through the synthesis and production of molecules that block the development of the parasite. A few years ago, the SM1 peptide was shown to reduce malaria oocyst numbers by about 80% [11]. More recently, it was synthesised from a transgenic entomopathogenic fungus [12]; the latter, in its natural version, is by itself already considered a potentially interesting method to develop [13][14][15]. Other potential solutions currently under development rely on single-chain antibodies [16][17][18]. Using the φC31 integration system for the first time in An. stephensi, it is now possible to insert the transgene of interest in a permanent manner at a chromosomal 'docking' site using site-specific recombination, and to obtain tissue- and sex-specific expression. The authors have then shown that the prevalence and number of oocysts decreased in the transgenic mosquitoes [17]. If technology has been able to determine how the insertion of a transgene can change a vector into a quasi non-vector, the next question to answer concerns the spread of this construction in natural populations of mosquitoes.

Box 1. The Ross-MacDonald model describes R0, the number of secondary cases arising from a single case in an otherwise uninfected population (Macdonald 1957; Koella, 1991). It permits determination of the relative importance of the different parameters implicated in the transmission of malaria (equation 1). From the R0 value, a simple expression gives the prevalence in the human population (equation 2). As seen on the graph accompanying this box, only a large decrease in the intensity of transmission (estimated by R0) can significantly affect the human prevalence (y).
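The two equations cited in Box 1 were not preserved in this version of the text. Under the classical Ross-Macdonald formulation they would read as follows; this is a sketch assuming the standard notation (m, mosquito density per human; a, biting rate; b and c, transmission efficiencies to humans and mosquitoes; r, human recovery rate; µ, mosquito mortality rate; τ, extrinsic incubation period):

```latex
R_0 = \frac{m\,a^{2}\,b\,c}{r\,\mu}\,e^{-\mu\tau} \qquad (1)
\qquad\qquad
y = \frac{R_0 - 1}{R_0 + ac/\mu} \qquad (2)
```

Note that (1) is consistent with the quadratic dependence on a and the exponential dependence on µ discussed in the text, while m enters only linearly.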
Mosquito ecology: First hurdle at the door of the Lab
When the ecological and evolutionary issues related to the potential use and impact of Plasmodium-resistant transgenic mosquitoes started to be discussed about a decade ago [19,20], most studies aimed at providing information on the fitness of genetically-modified mosquitoes were based on the use of natural mosquito immune responses as a model system. This was mainly driven by the fact that using the natural immune system of mosquitoes in a transgenic approach was considered of some potential interest [21], and also because the only fully effective system against the malaria parasite was the melanization response (also known as melanotic encapsulation) in selected lines of mosquitoes [22]. The mechanism by which melanization leads to the death of the parasite remains unclear. It seems that death can occur through starvation (by isolation from the hemolymph) as well as through the cytotoxic function of melanin [21,23]. The melanization response was then considered a model of what could happen with an artificial peptide mimicking an immune response and thus aiming at reducing the number of parasites in the mosquito.
Before considering the cost associated with resistance, which could impair its spread in mosquito populations, it is important to note that the sole insertion of an exogenous gene (not even conferring any anti-parasitic advantage) leads to a drastic decrease in Anopheles stephensi fitness [24]. However, recent work with site-specific insertion seems to bring a less negative outcome in terms of fitness [18]. This even seems to be the case when all groups, including the control group (called wild), derive from a lab colony and the fitness reduction due to the colonisation process is probably significant. Concerning the cost of resistance, mosquitoes are no exception, and reduced fitness in the absence of the parasite can be observed. Thus, several studies have measured the associated cost in Anopheles stephensi carrying a transgene conferring resistance against the rodent malaria parasite P. gallinaceum. Regardless of whether resistance was provided by the expression of SM1 (for salivary gland- and midgut-binding peptide 1) [25] or the phospholipase A2 gene (PLA2) [26], a fitness cost was associated with it. Even in conditions where harbouring the allele conferred an advantage, i.e. when mosquitoes were fed on Plasmodium-infected blood, the SM1 transgene could not reach fixation, revealing that the benefit of resistance was counterbalanced by its cost in transgenic homozygotes [27]. In any case, the construction needs to meet a couple of requirements regarding the promoter and the gene of interest for the method to have some chance of success [28]. The gene of interest needs to be expressed in a temporally controlled manner, i.e. after a blood meal is taken, and only in the tissues where it could efficiently impact the parasite life cycle, such as the midgut epithelium and the salivary glands.
Recent work on GM mosquitoes has also been done with Aedes strains that are not resistant to a pathogen but carry a gene that makes nearly all their offspring non-viable in a natural environment [29][30][31]. To date, such a strategy has not been developed for the Anopheles genus.
For the strategy of replacing malaria vectors with their modified non-vector version, the question of a cost associated with resistance necessarily leads to the need for a driving system to favour the spread of resistance in natural populations of mosquitoes.
Driving an allele of interest in natural populations of mosquitoes
The idea of using a gene drive to affect the epidemiology of vector-borne diseases is not recent: the use of chromosomal translocations to reduce mosquito populations was already proposed in 1940 by Serebrovskii [32]. It was revived later with the idea of using those translocations to drive alleles conferring refractoriness into mosquito populations [33].
Thus, the spread of refractoriness in mosquito populations could be facilitated if the allele, conferring resistance but also associated with a cost, were linked to an element whose inheritance is non-Mendelian. One technique for which various models provide information is the use of transposable elements. A tandem made of a transposon and an allele of interest can spread easily and fixation can be reached [34,35], even if the cost of resistance is particularly high [36].
Using intracellular bacteria associated with cytoplasmic incompatibility, such as Wolbachia, is also an idea that has been explored. Modifying them to harbour the allele of interest would, at least in theory, favour the spread of that allele [37,38].
There is no natural infection of Anopheles by Wolbachia, but work is in progress trialling infections of Anopheles gambiae cells by Wolbachia pipientis (strains wRi and wAlbB) in the lab [39]. However, no such sustainable transformation has been achieved so far [40].
Other constructions that would favour the spread of resistance have also been considered [41,42]. Among them, the use of HEGs (Homing Endonuclease Genes) has been the centre of much attention in recent years [43][44][45]. Apart from those systems, another approach relies on pairs of unlinked lethal genes. In this case, each gene is associated with the repressor of the other's lethality; this system is called engineered underdominance [46]. With respect to those methods, a number of recent papers have focused on theoretical work aiming at both spreading a resistance allele and containing it. If the aim of a GM approach is to favour the spread of a resistance allele, it is also important to consider that self-limitation could be a real advantage in avoiding the establishment of the transgene in non-target populations. Such an approach has been studied theoretically with the Inverse Medea gene drive system [47] and with the Semele system [48].
If the speed at which the construction of interest can spread in mosquito populations is a major issue, authors have also shown that, in the case of transposable elements, one of the problems is the stability of the system, given the probability of disruption [49].
However, if the spread of a resistance allele is a target that can be reached, the real aim should be a strong decrease in the prevalence of the disease or even its elimination. Two models merging population genetics and epidemiology have pointed out the major importance of the efficacy of resistance [36,50]. They have shown that a significant reduction in malaria prevalence can only be obtained if the efficacy is close to 1, especially when resistant mosquitoes are released in high-transmission areas.
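The importance of near-perfect efficacy can be illustrated with a toy calculation. The sketch assumes the standard Ross-Macdonald equilibrium prevalence, purely illustrative parameter values, and a deliberately simplified assumption that blocking a fraction of transmission scales R0 linearly; it is not the models of [36,50], only an intuition pump:

```python
def prevalence(R0, a=0.3, c=0.5, mu=0.12):
    """Equilibrium prevalence y = (R0 - 1)/(R0 + a*c/mu); parameters illustrative."""
    return max(0.0, (R0 - 1) / (R0 + a * c / mu))

def prevalence_with_resistance(R0, efficacy, coverage=1.0):
    # Toy assumption: a fraction coverage*efficacy of transmission is blocked,
    # scaling R0 linearly (a simplification for illustration only).
    return prevalence(R0 * (1.0 - coverage * efficacy))

y90 = prevalence_with_resistance(100.0, 0.90)  # 90% efficacy: prevalence stays high
y99 = prevalence_with_resistance(100.0, 0.99)  # 99% efficacy: near elimination
```

Even under this optimistic toy model, 90% efficacy at full coverage leaves prevalence high in a high-transmission area, while only efficacy close to 1 drives it towards zero.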
If recent work claims that engineered mosquitoes do not suffer much from carrying a resistance allele [17], this remains valid only under lab conditions, where the environment is fairly stable and usually favourable. It is interesting to note that the survival of the mosquitoes in the study by Isaacs et al. reaches about 35 to 40 days, which is probably far more than what happens under natural conditions.
As shown with natural immune responses, environmental conditions experienced at the larval or adult stage can greatly affect host-parasite interactions and thus the outcome of an infection [51]. In lines selected for refractoriness [22], a 75% reduction in food availability at the larval stage led to a decrease of more than 50% in the proportion of mosquitoes able to melanize half of the surface of a foreign body (a Sephadex bead) [52]. Even more worryingly, a recent paper [53] revealed the complex effects of temperature on both the cellular and humoral immune responses of the malaria vector Anopheles stephensi. What is highly interesting in this study is not only that temperature can affect immune responses, but also that different immune responses are affected in different ways. The authors studied the melanization response, phagocytosis (a cellular immune response that leads to the destruction of small organisms or apoptotic cells) and defensin (an antimicrobial peptide) expression. All three are higher at 18°C, while the expression of nitric oxide synthase (active against a large number of pathogens [54]) peaks at 30°C and that of cecropin (an antimicrobial peptide) seems to be temperature-independent. Concerning melanization, it is important to note that although the melanization rate is higher at 18°C, the percentage of beads -introduced inside the mosquito to measure its immunocompetence- that were (at least partly) melanised was higher when the temperature increased (Fig. 1).
This result highlights the difficulty of defining an optimal temperature for the melanization response, especially as it is also involved in developmental processes. The complexity of the immune function also appears with cecropin expression which, despite being temperature-independent, was affected by an injury or by the injection of heat-killed E. coli. Other work has also revealed that the immune function is affected in a complex manner by a variety of environmental parameters, such as the density of conspecifics or the quality of food resources [55]. Apart from showing the need to better understand the impact of the complex interactions between temperature and other variables on vector competence, this work also highlights the crucial importance of taking them into account when determining the potential outcome of the interactions between the natural immune function, the resistance allele in a GM mosquito and, finally, the resulting vectorial competence under a large variety of ecological conditions.
What appears clear is that the expression of genes involved in the anti-parasitic response is not influenced by host-parasite interactions alone: the environment is a crucial factor, be it abiotic conditions, such as temperature and its daily variations, or biotic factors, such as parasites encountered at the larval or adult stage [56,57].
On the side of the parasite, it would be naïve not to consider an evolutionary response in the face of the selective pressure represented by any (natural or artificial) resistance. The rapid selection of resistance against artemisinin in South-East Asia in recent years [58], and the evidence of its genetic basis [59], suggest that it is reasonable to envision the selection of parasite strains able to overcome any engineered resistance mechanism. Using transgenic Plasmodium-resistant mosquitoes can be considered equivalent to artificially increasing the mosquito's investment in an immune response. According to some theoretical work [60], this is expected to be followed by an increase in the parasite's investment in avoiding resistance. In the long term, this would lead to a decrease in the effectiveness of a programme aiming at decreasing malaria prevalence, or to the need to 'play evolution' by monitoring the parasite population and releasing transgenic mosquitoes whose resistance could be modified, as in an arms race with parasite evasion.
It is then important to determine the longer-term consequences of such a strategy for parasite virulence. Some answers have already been provided by theoretical work concerning the impact on parasite virulence to humans and mosquitoes in the case of dengue [61]. The authors examined four distinct situations: blocking transmission, decreasing the mosquito biting rate, increasing mosquito background mortality, and increasing the mortality due to infection. While all of them are associated with a benefit in terms of disease incidence, only the ones affecting mosquito mortality seem to pose the smallest risk in terms of virulence to humans. It is important to note the scarcity of studies aiming at providing empirical data on this topic, even though experimental evolution with mosquitoes and parasites can provide interesting results in a reasonable number of generations [62]. This lack of data concerns not only dengue but also malaria, as has already been discussed in a paper on possible outcomes of the use of transgenic Plasmodium-resistant mosquitoes [63].
Vector control: To be or not to be transgenic-based
As mentioned earlier, one of the major points to consider with transgenic mosquitoes used for malaria control is the set of ethical and societal issues, including public acceptance of this high-tech method. Even though the importance of societal acceptance of GM mosquitoes has been recognised for a decade [64], studies on acceptability remain scarce. A first study conducted in Mali mapped out several crucial aspects of potential acceptance or rejection of GM mosquitoes [65]. While Marshall reports that his interviewees were generally "pragmatic" about the technology, acceptance depended on several conditions. While people were supportive of a release of transgenic mosquitoes for malaria control, they first wanted to see evidence of safety for human health and the environment prior to releases. In addition, proof of the efficacy of the technology in reducing malaria prevalence was requested.
Lastly, people declared that they would prefer the trial to be done outside of their village, and when comparing GM crops and GM mosquitoes, people were more sceptical of the latter. Even if this is not a rejection of the idea of using a GM technology for health purposes, it is important to note that a population, even one at risk of contracting malaria, remains cautious about such a technology. This should remind us how, in the 1970s, a decade-long programme conducted by the WHO in India utilising the sterile insect technique (SIT) ended in a chaotic way after the publication of inaccurate information in the Indian press [66].
Secondly, the question of regulation has recently been highlighted as crucial [5,67]. Because the social and environmental implications of GM mosquitoes are significant and potentially irreversible, and as the regulatory attention that GMOs have received in Europe suggests, broad-based trials and releases require robust legislation and international agreements. These regulations are still under development, and it is important to note that at the time of the first releases in the Cayman Islands, international guidance on open field releases of GM mosquitoes was still in preparation [67,68]. While the existing Cartagena Protocol on Biosafety is considered applicable to GM crops, it needs specific amendments in order to work for GM mosquitoes [69].
Furthermore, in terms of regulation one has to distinguish between two different types of GM mosquitoes. While regulation and tracking might be possible for genetically sterilised mosquitoes, as they are self-limiting in their spread, tracking and containment of GM mosquitoes with self-spreading genetics, i.e. fertile mosquitoes that block disease transmission, is considered almost impossible, or at the very least extremely difficult [70,71]. This distinguishes GM mosquitoes from earlier GM technologies, such as crop modification. GM and non-GM crops can be separated from each other, and GM products can be marked with labels; GM crops can thus be seen as a technology of choice. However, this argument has only limited accuracy: as Lezaun has shown, bees have proven to be effective agents of cross-pollination between GM and non-GM crops, thus subverting regulations that aim to keep them separate [72]. GM insects, however, are markedly different. The elusiveness of mosquitoes will likely be a major impediment to tracking, containment and comprehensive regulation; as the spread of Aedes albopictus, and with it the increased risk of arboviral transmission in new locations across the world, has shown, mosquitoes are hard to contain. This renders GM mosquitoes a no-choice technology: once released, GM mosquitoes will stay in our environments.
A second major issue in terms of the social and ethical implications of GM mosquitoes is the question of by whom and how they are produced and implemented. GM modification of insects is an expensive high-tech intervention, and research so far has mainly been located in resource-rich laboratories in the Global North rather than in disease-endemic developing countries [73]. This enrols the technology thoroughly into discussions about technology transfer and development initiatives from North to South, and sits uncomfortably with the West's history of colonial exploitation and tropical medicine. Aside from this imbalance in bio-capital and agenda setting, GM mosquitoes are as much a product of the biotech industry as they are tools for public or global health. Are GM mosquitoes currently seen as a public good or a commercial product? While most of the research and development of GM mosquitoes has so far been funded by public institutions, both national research foundations (such as the US National Science Foundation) and philanthropic organisations (such as the Bill and Melinda Gates Foundation and the Wellcome Trust), the mosquitoes that have been released were part of a commercial project. The emerging GM mosquito industry has caught the interest of private biotech firms. The first company to produce and market GM mosquitoes is Oxford Insect Technologies (Oxitec), founded by a group of entomologists as a spin-off company of Oxford University. The company is a for-profit enterprise, has so far mainly been funded by public entities and venture capitalists, and is one of the main drivers of high-end developments in the field. As discussed in the introduction, Oxitec was the first to release sterile GM mosquitoes into the wild in the field trials in the Cayman Islands. A fundamental issue raised by the dominance of Oxitec in the field is the tension between GM mosquitoes as a public health tool and as a commercial product [74][75][76].
While GM mosquitoes in malaria control would be used as a tool of disease control and to foster public health, companies like Oxitec follow different aims: they have to become profitable and eventually make profits with their GM entities. This tension brings another social issue of GM mosquitoes to the forefront, namely the question of how one conducts field trials with GM mosquitoes in an ethical way.
As we alluded to in the introduction, the first releases in the Cayman Islands were conducted in a rather secretive fashion. Oxitec only published the news about the release with a one-year delay [1], leading to accusations that the releases were deliberately done in secret [75,76]. Oxitec stated that the trials were prepared and conducted in close cooperation with the local Mosquito Control and Research Unit, had conformed to the British Overseas Territory's biosafety rules, and that information had been sent to local newspapers preceding the trials. However, many locals claimed they were not informed, and no risk assessment documents were made available to the public on the internet. The only risk assessment document that can be found was published by the UK parliament in 2011, over one year after the releases started [5]. The Cayman Islands releases have triggered fears among entomologists working on GM mosquitoes that such secretive trials might lead to a public backlash and undermine their own extensive efforts at public engagement; some scientists, for instance, claimed they had spent years preparing a study site through "extensive dialogues with citizen groups, regulators, academics and farmers" [1].
GeneWatch argued that Oxitec purposefully bypassed existing international GM regulations (developed mainly for GM crops), because the Cayman Islands do not have biosafety laws and are not a signatory to the Cartagena Protocol on Biosafety or the Aarhus Convention (even though, since the UK is a signatory to the protocol, Oxitec had a duty to report the export of GM eggs to the UK government). As a result, GeneWatch reads Oxitec's actions as colonialist tactics: "the British scientific establishment is acting like the last bastion of colonialism, using an Overseas Territory as a private lab" [76].
All in all, this raises the question of what ethically and socially responsible research on GM mosquitoes means. Here, the ability of researchers and stakeholders to communicate with each other is key for meaningful public engagement. In this respect, a recent survey has focused on the willingness of scientists to interact with a non-scientific audience [77]. One of the main findings of the survey indicates that more than 90% of scientists working on GM mosquitoes are agreeable to interacting with the public about their research. However, communication might not be enough, and real discussion might not be easy between researchers and a non-scientific audience. This has been underlined by the reluctance of a fraction of the research community to have their research projects evaluated by a non-scientific public [77]. Thus, while a significant proportion of researchers are ready to interact with a non-scientific audience, they seem less likely to accept evaluation of, and prior agreement on, a research proposal by the general public; interestingly, researchers from the Global North are especially hesitant. On the other hand, many scientists in malarious countries do welcome exchanges with the public and are more willing to negotiate their research projects with members of disease-endemic communities.
In summary, the GM mosquito technology in malaria control raises a set of challenging questions. Challenges from a biological and ecological perspective are interlinked with questions about democratic decision-making, local acceptance and international regulation of these emerging entities. Such a potentially controversial technology cannot afford to skip these debates, and the time is ripe to focus on the ethical and sociological aspects governing the potential use of GM mosquitoes. Furthermore, it is crucial that the development of transgenic methods does not lead to a decrease in funding of classical, accepted and efficient vector control methods; indeed, they should be favoured and enhanced to continue curbing the malaria burden today.
Towards the integration of mouse databases - definition and implementation of solutions to two use-cases in mouse functional genomics
Background: The integration of information present in many disparate biological databases represents a major challenge in biomedical research. To define the problems and needs, and to explore strategies for database integration in mouse functional genomics, we consulted the biologist user community and implemented solutions to two user-defined use-cases.

Results: We organised workshops, meetings and used a questionnaire to identify the needs of biologist database users in mouse functional genomics. As a result, two use-cases were developed that can be used to drive future designs or extensions of mouse databases. Here, we present the use-cases and describe some initial computational solutions for them. The application for the gene-centric use-case, "MUSIG-Gen", starts from a list of gene names and collects a wide range of data types from several distributed databases in a "shopping cart"-like manner. The iterative user-driven approach is a response to strongly articulated requests from users, especially those without computational biology backgrounds. The application for the phenotype-centric use-case, "MUSIG-Phen", is based on a similar concept and, starting from phenotype descriptions, retrieves information for associated genes.

Conclusion: The use-cases created, and their prototype software implementations, should help to better define biologists' needs for database integration and may serve as a starting point for future bioinformatics solutions aimed at end-user biologists.
Background
At present, we are just beginning to appreciate the complexity of genotype-phenotype associations in humans, and more detailed and comprehensive analyses in basic research are urgently needed. Although studies in humans are important, they are limited by cohort sizes, strong but often unknown environmental influences, poor and inconsistently coded diagnoses, and lack of repeatability. Therefore, animal models are absolutely essential to complement human studies; they allow the investigation of underlying biological mechanisms in well-controlled experimental systems.
In particular, the mouse is an ideal model system for studying genetic factors that contribute to diseases because genetic reference populations (GRPs) with a large number of allelic variants in many genes, combinations thereof, and many knock-out mouse lines with deletions in single genes are available [1]. Research on mouse model systems has generated valuable discoveries for our understanding of the biological mechanisms of the normal function of the immune system as well as immune abnormalities, cardiovascular diseases, cancer, and infectious diseases [2].
Consequently, funding agencies around the world have supported an increasing number of functional genomics projects focused on the use of the laboratory mouse as a model for human disease. The results obtained have been collected in various databases. However, in most cases, these databases represent single project outputs and are maintained at different sites. Exceptions are, for example, the Mouse Genome Database (MGD) of MGI [3], the Mouse Phenome Database (MPD) [4], Europhenome [5] and the GeneNetwork database [6], which have collected information from many different sources. MGD is a database that has been optimized for researchers in the field of mouse functional genetics and genomics. It is constantly updated and manually curated and thus contains information of extremely high quality. Similarly, the GeneNetwork database contains phenotype and genotype information on mouse GRPs from the literature and directly entered source data, as well as tools to map quantitative trait loci. Both databases are extensively linked to other informatics resources.
However, a large volume of data in distributed databases is not contained in MGI (Mouse Genome Informatics) or GeneNetwork yet is important for functional genomics studies (see the Mouse Resource Browser, MRB [7]). Ad-hoc integration of these databases is very difficult. Many databases require a separate login procedure and need to be accessed using different methods (e.g. via a website, downloadable files or web services). Several resources do not adopt common standards, e.g. using the same identifier for a given gene or protein [8]. In this case, a user may need to convert their gene identifiers to whatever the particular resource understands, e.g. MGI or Ensembl/mouse IDs, before starting a search.
As a first step towards new concepts for database integration, we have established a network of scientists from Europe, North America, Japan and Australia. The network is funded as a Coordination Action by the European Commission and called CASIMIR (Coordination and Sustainability of International Mouse Informatics Resources) [9]. The Coordination Action is aimed at recommending standards to allow data sharing and integration between different projects.
Much can already be achieved using query tools that ease selection and joining of distributed data, such as BioMart [10], and/or workflow tools that support stepwise data retrieval, conversion and integration, such as Taverna [11] and Galaxy [12]. A prerequisite is that sources provide programmatic interfaces for queries or workflow tools that can be used to access or import the original data. However, such interfaces are often not available. This challenge was addressed by Smedley et al. who federated BioMart and MOLGENIS [13,14] in a Taverna workflow [15]. But these solutions are still too involved for many bench biologists to use directly for their research. Task-oriented user interfaces are needed on top of all these tools to more closely support biologists in their integrative analyses.
In order to gather the perspective of the end-users, the biologists who will perform the actual data mining, we designed use-cases together with them. Subsequently, two software implementations were developed on the basis of these use-cases to provide tools that could carry out the tasks requested by the users in the most practical format. Here, we describe two use-cases that arose from our discussions with biologist users during workshops, meetings and via a questionnaire. Furthermore, we demonstrate the first steps towards their implementation.
Definition of the use-cases
During the first sessions with different user groups, some principal needs for data mining became apparent. These needs were further confirmed in subsequent meetings and demonstrations of development steps to biologist users. A user-friendly interface should not only query multiple databases but also allow multiple search terms, support iterative interactions, and include a tool for storing the results. Furthermore, most of the data mining currently performed in functional mouse genomics concerns genes, their functions and variants on one side, and phenotype descriptions on the other. Based on these discussions, we designed two generic use-cases that should suit a larger scientific community: a gene-centric and a phenotype-centric use-case.
Gene-centric use-case
The advent of high-throughput technologies in biology, such as gene expression microarrays, makes it now possible to identify, with the help of statistical and bioinformatics tools, large groups of candidate genes changing their expression levels in different experimental conditions. However, of the genes identified in this way, usually a few hundred, only a limited number of genes (in the order of 20-50) can feasibly be studied experimentally in the laboratory. Therefore, researchers prioritize the gene lists based on their own knowledge, literature, and additional information from many different web accessible databases, such as gene and protein descriptions, genetic diversity information, expression patterns in different tissues, etc. Since the searching of all these web databases by hand is very laborious and time-consuming, our user groups decided to describe a gene-centric use-case starting with an input of a limited number of gene names and aiming to facilitate easy and automatic collection of information about these genes from different sources. This process should be performed in an interactive fashion and allow storage and export of the results obtained.
An iterative user-driven strategy was developed based on the principles of an "online shop" (Fig. 1). Here, a customer can perform searches on the available data and collect them in a shopping cart. By performing additional searches for other data and by evaluating additional information on them, the customer can then decide to add or remove articles from his cart. Finally the collected articles are "exported" by executing an order.
Following the above principle, the integration of mouse databases via a gene-centric use-case should allow candidate gene symbols to be entered into a query form which then automatically collects basic information like synonyms, gene IDs, descriptions and genome locations for the entries (Fig. 1). Based on this information the user will then be able to refine the gene hit list by selecting the interesting genes and removing false hits. The final list will then be saved as a 'shopping cart' which can be revisited, modified, refined or extended. Finally, it should be possible to export the gene list in Excel-readable CSV format (Fig. 1).
A difficulty often encountered when performing analyses on genes is that they have several synonyms and that in many scientific publications the systematic gene nomenclature is not followed (see [16]). Examples are RANTES (correct gene symbol Ccl5), MIP1a (Ccl3) and IP-10 (Cxcl10). For other genes, it may not be known to the researcher that they represent members of large gene families, and one has to choose one or all to proceed with the analysis. Examples are Hox, Fgf, Inhibin, and interferon genes. Here, we consider as the "correct gene name" the name which is given by the international nomenclature committees: mouse (International Committee on Standardized Genetic Nomenclature for Mice [17]), human (HUGO Gene Nomenclature Committee [18]), and rat (Rat Gene Nomenclature Committee [19]).
It is thus important that the use-case allows entering any gene name, synonyms, incomplete names, etc., but still makes sure that the correct genes will be found. For this, entries will be searched in a first step against the MGI database for disambiguation [20]. For each gene name multiple hits may appear and the user is then able to select the correct ones and add them to the cart.
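The disambiguation step described above can be sketched as a small lookup routine. The synonym table and matching rules below are illustrative stand-ins for a real MGI query, not the actual MUSIG-Gen code (which is written in PHP); Python is used here only for brevity.

```python
# Illustrative synonym table; a real query would go to the MGI database.
MGI_RECORDS = {
    "Ccl5":   ["RANTES", "Scya5"],
    "Ccl3":   ["MIP1a", "Scya3"],
    "Cxcl10": ["IP-10", "Scyb10"],
    "Fgf1":   ["Fgfa"],
    "Fgf2":   ["Fgfb", "bFGF"],
}

def disambiguate(query):
    """Return every official symbol the query could refer to."""
    q = query.lower()
    hits = []
    for symbol, synonyms in MGI_RECORDS.items():
        exact = any(q == name.lower() for name in [symbol] + synonyms)
        family = symbol.lower().startswith(q)  # incomplete names hit whole families
        if exact or family:
            hits.append(symbol)
    return hits

print(disambiguate("RANTES"))  # synonym resolves to the official symbol: ['Ccl5']
print(disambiguate("Fgf"))     # a family query returns all members: ['Fgf1', 'Fgf2']
```

In MUSIG-Gen each hit additionally carries the full name and chromosomal location, so the user can decide which entries to keep in the cart.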
In a second step, it is possible to collect additional information from different databases for the genes in the cart list. Examples of databases are MGI and ENSEMBL/mouse for information on gene structure and links to other resources; Eurexpress [21], SymAtlas [22] and ArrayExpress [23] for gene expression information; and INTACT [24] for gene interaction data. After retrieval of this information the user may refine his gene list in a given cart by searching for other genes or deleting genes in the current list.
The list of collected genes in a shopping cart can then be used to perform meta-analyses. For example, an analysis of GO terms will allow finding out whether certain GO categories are over-represented in the particular gene list, indicating that the genes may belong to a specific pathway or biological process. Similarly, an analysis of expression patterns may reveal whether there is a certain tissue in which the genes from the list are preferentially expressed.

Figure 1 Schematic outline of gene-centric use case. The gene-centric use case should make it possible to enter gene names into a query form which then automatically collects basic information like synonyms, gene IDs, descriptions and genome locations for the entries. The user should then be able to select the interesting hits. The final list will then be saved as a 'shopping' cart which can be revisited and exported.

At present, only a few of the currently existing databases offer some of the above-described functionalities, the most comprehensive one being MGI. And thus far, only BioMart represents an initiative which aims to allow the user to design queries on information from otherwise disparate databases. BioMart also allows refining searches and filtering out relevant information. However, BioMart is currently aimed at the advanced and trained user and is not yet designed for simple querying and collection of results in a shopping cart to which new genes and information can be added.
Phenotype-centric use-case
A second use-case was defined through the interaction with the user groups. It should allow researchers to begin their search with a phenotype description (Fig. 2). In this use-case, the scientist will search a phenotype ontology, obtain the closest hits and then decide which terms should be used in the following query. The use-case should also allow browsing of the phenotype ontology and the selection of terms of interest. The result of the searches for phenotype descriptions should then link to the associated genes.
At present, the most extensive and well structured phenotype ontology for the mouse is the Mammalian Phenotype (MP) ontology [25], accessible at MGI. MP is therefore used as a first standard which will allow querying MGI but also other databases that are using MP terms for phenotype descriptions, like EuroPhenome [26].
In the future, cross-referencing mouse MP terms with ontologies that describe diseases (such as the Disease Ontology, DO [27]) and phenotypes in humans (such as the Human Phenotype Ontology, HPO [28], and the Mouse Pathology Ontology, MPATH [29]) should allow users to make cross-species searches by starting from phenotype descriptions. This will be particularly useful for human clinician researchers who are not familiar with mouse databases but who would like to know if there is a mouse model available for a given human disease.
The results from the phenotype-driven searches should then be linked to gene names associated with a given phenotype. These genes are presented as a list from which the user can choose the genes of interest and save them in a shopping cart. It is then possible to feed the genes into the gene-centric use-case and perform a more detailed data mining or meta-analysis.
The description and further development of the phenotype-driven use-case may represent a very useful concept for scientists and clinicians outside the mouse community. For example, the Human Phenotype Ontology HPO is based on OMIM [30], and a search may be generated using HPO as a starting point to retrieve disease IDs from OMIM which can then be linked to gene symbols. The Drosophila phenotype ontology [31] developed by the FlyBase group could be used to retrieve gene symbols, and thereby gene function information, from FlyBase [32]. Or the C. elegans phenotype ontology [33] could be used to retrieve gene symbols from WormBase [34]. Gene symbols retrieved from these databases could then be stored in a shopping cart.

Figure 2 Schematic outline of phenotype-centric use case. The user will search or browse a phenotype ontology, obtain the closest hits and then decide which terms should be used for searches of phenotype descriptions. The latter should then link to genes associated with these phenotypes.

Implementation of the use-cases: MUSIG-Gen and MUSIG-Phen

Web services for database integration

A prerequisite for computer-supported data integration is programmatic access to select and retrieve data from distributed resources. As described in [15], there are several possible technical solutions to integrate data from different mouse informatics databases. The "CASIMIR strategy" is based on semantic standardization or wrapping of information transferred by web services. Currently the most popular implementations of web services use the SOAP/WSDL or the XML-REST protocols. The advantages of opening APIs and transferring information using XML schemas are discussed in [15].
For EuroPhenome and Mugen [35], SOAP/WSDL web services were available which could be used for MUSIG-Phen, and we set up a BioMart web service for part of the MGI data. Other databases, such as the Ontology Lookup Service (OLS [36]) for ontology data and INTACT, already had web services.
Users may want to integrate their local database or other databases. To demonstrate how this can be achieved, we generated web services for accessing GNF SymAtlas expression data. For this, we first saved the SymAtlas data locally. We then defined the Entrez Gene IDs as a common field which could be retrieved from the MGI BioMart and matched to the records in the local SymAtlas database. We then used MOLGENIS to create the relevant SOAP web services to retrieve the data from the local database, and to subsequently load and display them in the shopping cart interface.
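The matching step, linking two sources through a shared Entrez Gene ID, can be pictured as a simple join. All records, IDs and tissue values below are made up for illustration; the real data live in the MGI BioMart and a local SymAtlas copy.

```python
# Join cart genes (from an MGI-style source) to a local expression table on a
# shared Entrez Gene ID, the common field described in the text. All records,
# IDs and tissue values are illustrative.
mgi_rows = [
    {"symbol": "Ccl5",   "entrez_id": "20304"},
    {"symbol": "Cxcl10", "entrez_id": "15945"},
]
symatlas_by_entrez = {
    "20304": {"top_tissue": "spleen"},
    "15945": {"top_tissue": "lymph node"},
}

def add_expression(rows):
    """Attach local expression data to each gene row via its Entrez Gene ID."""
    return [{**row, **symatlas_by_entrez.get(row["entrez_id"], {})} for row in rows]

for row in add_expression(mgi_rows):
    print(row["symbol"], "->", row.get("top_tissue"))
```

In the shopping cart interface, each such joined column is added to the gene list and hyper-linked back to the originating database.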
Implementation of MUSIG-Gen
After having defined the use-cases we wanted to provide users and developers with a first implementation which may then be tested and further revised in the future. Thus, certain parts of the use-case scheme outlined in Fig. 1 were implemented in the application MUSIG-Gen http://www.casimir.org.uk/usecase1/. In the following, we describe this tool from the perspective of the scientific user. Fig. 3 displays the entry form of MUSIG-Gen where the user can type in gene names or synonyms (example: synonyms for chemokines). The result of the subsequent search query shows a list of hits from the MGI database which contain the query name (Fig. 4) and, in the default setting, additional information for each gene, like gene symbol, full gene name, all synonyms, and chromosomal location. This information allows the user to decide which one of the hits in the list corresponds to the gene of interest. As shown for the inputs "RANTES" and "IP-10", the correct gene names are displayed together with the search term and all other synonyms. If, for example, "Fgf" is used as query, all Fgf gene family members are displayed. The user may now decide which members to follow further. The genes selected in this process via the check box may then be saved in a shopping cart.
The gene list can subsequently be retrieved from the cart (Fig. 5) and additional information added, for example MGI IDs. These are hyper-linked to the corresponding entry at MGI so that the user has access to all MGI information on this particular gene with a single mouse click. Similarly, information on gene expression can be retrieved from the SymAtlas database. This query creates a new column for all genes on the list, displaying the SymAtlas IDs. The ID is again hyper-linked to SymAtlas and the corresponding data can be visualized with one mouse click (Fig. 6). Also, a search for information on Single Nucleotide Polymorphisms (SNPs) has been implemented. This function queries the Ensembl database and is currently set to display SNPs which result in non-synonymous coding changes in the open reading frame of the genes as well as the SNP Variation ID and a link to the Ensembl page with more details. (Fig. 6).
New genes can be easily added to an existing cart by calling up the entry form from within a cart and following the same procedure as described above.
Because the genes listed in a cart contain a correct and unique identifier (MGI and/or Ensembl IDs), they can be directly used to query other databases. Such features and searches could be easily added to the existing MUSIG-Gen application. Even more important, it may now be possible to perform an analysis on the entire group of genes in the cart. In the current version of the use-case, we implemented a GO term count as a proof-of-concept for the user interface. GO terms can be associated with all genes of the list using the 'load more data' feature, and the representation of different GO terms across the whole gene list can be displayed (Fig. 7). These analyses may be extended to more sophisticated meta-analyses, including statistical evaluations, in the future. Similarly, we added a tool to associate phenotype terms from the MP ontology and show their representation in the cart gene list.
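The GO term count can be pictured as a simple tally over the cart's gene list. The annotations below are invented for illustration; a real over-representation analysis would add a statistical test (e.g. hypergeometric) on top of the raw counts.

```python
from collections import Counter

# Hypothetical GO annotations for genes collected in a cart.
GO_ANNOTATIONS = {
    "Ccl5":   ["chemokine activity", "inflammatory response"],
    "Ccl3":   ["chemokine activity", "inflammatory response"],
    "Cxcl10": ["chemokine activity", "chemotaxis"],
    "Actb":   ["cytoskeleton organization"],
}

def go_term_counts(cart):
    """Tally how often each GO term appears across the cart's gene list."""
    counts = Counter()
    for gene in cart:
        counts.update(GO_ANNOTATIONS.get(gene, []))
    return counts

cart = ["Ccl5", "Ccl3", "Cxcl10"]
for term, n in go_term_counts(cart).most_common():
    print(f"{term}: {n}/{len(cart)} genes")
```

A term shared by every gene in the cart (here "chemokine activity") is exactly the kind of signal that hints at a common pathway or biological process.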
As a final step, we added an export function to the shopping cart which allows the user to export his data in CSV format and then perform highly customized analysis locally.
Technical aspects of the implementation of MUSIG-Gen
The application layer of the shopping cart was developed in PHP. PHP proved to be a good choice for the development of the user interfaces, but did create some problems for the development of the web service client scripts because of a lack of multi-threading. The latter makes it impossible to retrieve data from different web services at the same time. The major problem is that some web pages access multiple services and depending on the network speed and the kind of query some web services are slow to respond. This operation would thus stop the page from loading in the browser. We managed to mitigate this problem by creating an AJAX (Asynchronous JavaScript and XML) based loading system using the PHP PEAR AJAX [37] libraries. This system loads the main page first and then accesses each web service individually, thereby creating a more responsive system which lets the user interact with some data while the remainder of the data is still being retrieved. The shopping cart system uses a Postgresql database to store user data. The data stored comprises the user's personal data (which is integrated into our web site management system to allow for a single login system) as well as the data retrieved from the different web services. The system imposes no limits as to how many data fields or data values a user can download and store in his shopping carts.
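The loading strategy described above, render the page first and fetch each web service independently so one slow service cannot block the rest, can be sketched with simulated services. This is an illustrative Python asyncio model of the same idea, not the PHP/AJAX implementation; the service names and delays are made up.

```python
import asyncio

# Simulated web services with different response times (no network involved).
async def fetch(service, delay):
    await asyncio.sleep(delay)
    return service, f"data from {service}"

async def load_all():
    # Start every request at once; handle each result as soon as it arrives
    # instead of blocking the whole page on the slowest service.
    tasks = [
        asyncio.create_task(fetch("MGI BioMart", 0.03)),
        asyncio.create_task(fetch("SymAtlas", 0.01)),
        asyncio.create_task(fetch("Ensembl", 0.02)),
    ]
    results = {}
    for task in asyncio.as_completed(tasks):
        service, payload = await task
        results[service] = payload  # a UI could update incrementally here
    return results

results = asyncio.run(load_all())
print(sorted(results))
```

The AJAX loader in MUSIG-Gen achieves the same effect in the browser: the main page loads first, and each service's column appears when its response comes back.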
The application initially retrieves gene nomenclature and genome location data based on gene symbol: by default, nomenclature and genome location data are loaded from our MGI BioMart http://www.casimir.org.uk/biomart/martview/. Other data from the MGI BioMart can also be loaded, such as MGI, Ensembl and Entrez Gene IDs as well as GO and MP ontology terms. The Ensembl BioMart can also be queried at this stage for UniProt IDs. Both BioMarts are accessed using the default BioMart XML-REST services. For this, we developed and used a generic BioMart XML-REST PHP client class which can be used to query any BioMart.
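A BioMart XML-REST request is essentially an XML query document posted to the mart's service URL. The sketch below builds such a query string in Python; the dataset, attribute and filter names are illustrative, not the actual field names of our MGI BioMart, and no request is actually sent.

```python
from urllib.parse import urlencode

def biomart_query(dataset, attributes, filters):
    """Build the XML query document a BioMart XML-REST service expects.
    Dataset, attribute and filter names here are illustrative."""
    attr_xml = "".join(f'<Attribute name="{a}"/>' for a in attributes)
    filt_xml = "".join(f'<Filter name="{k}" value="{v}"/>' for k, v in filters.items())
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<Query virtualSchemaName="default" formatter="TSV" header="0">'
        f'<Dataset name="{dataset}" interface="default">{filt_xml}{attr_xml}</Dataset>'
        "</Query>"
    )

xml = biomart_query("mgi_gene", ["mgi_id", "symbol", "chromosome"], {"symbol": "Ccl5"})
body = urlencode({"query": xml})  # would be sent as the POST body to the mart URL
print(body.startswith("query=%3C%3Fxml"))
```

A generic client class, as used in MUSIG-Gen, wraps exactly this construction step plus the HTTP call and TSV parsing.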
Data may also be loaded from the Eurexpress BioMart or from the GNF and INTACT SOAP web services (using generic PHP SOAP libraries). There are also some fields which have the option of loading additional information, e.g. the GO and MP ID fields. The user can choose to load the ontology term names which are loaded from the OLS SOAP web service.
The source code and documentation for the MUSIG-Gen prototype may be downloaded from the following web server: http://www.casimir.org.uk/sourcecode/
Implementation of MUSIG-Phen
Based on the scheme outlined in Fig. 2, certain parts of the phenotype-centric use-case were implemented in the application MUSIG-Phen http://www.casimir.org.uk/usecase2/. The MUSIG-Phen prototype starts from a phenotype description, collects the genes associated with this phenotype in a cart and then performs all the analyses described above for MUSIG-Gen.

Figure 3 Entry page for MUSIG-Gen. Here, the user can type in gene names or synonyms (circled entry box) and specify additional criteria under "output" which will be displayed on the hit list. Chemokine names and synonyms are displayed as an example. Clicking on the box "Search" (arrow) will start the search. A click on "Reset" will delete the gene names in the entry box and allow entering new names.

Figure 4 First result page displaying list of all possible hits from MGI. The results of a search will be displayed as a list of hits from the MGI database which contain the query name and additional information for each gene. This information will allow the user to select the hits which correspond to the gene of interest by clicking on the check box (arrow). A click on the button "Add selected genes to cart" will save the selected entries to a cart (arrow).
The starting point of MUSIG-Phen is a search page in which a free text entry will display a list of MP terms that most closely resemble the search term. The user may now choose the appropriate term, send a query to MGI and retrieve a list of genes that are associated with it. The list of genes can then be saved in a cart and further analyzed as described for MUSIG-Gen, e.g. add more information, perform meta-analysis, export lists. Alternatively, the user may start his query by browsing the hierarchical list of MP terms, select one and then retrieve the genes associated to the MP term (Fig. 8).
At this stage, the implementation is very similar to the services already provided by MGI. Thus, in addition to the current MGI search options, we implemented the possibility to query other external databases which contain phenotype descriptions based on MP terms. We demonstrated feasibility of this feature for searches of the Mugen and Europhenome databases.
At the present state, the MUSIG-Phen software was not designed for more sophisticated queries, because discussions with users revealed that further detailed queries very soon become highly specialized and complex for certain user subgroups. However, the present use-case implementation may already serve to query nascent databases (e.g. phenotype data from EUMODIC) and represents a very useful platform to test new developments which aim to connect mouse and human phenotype databases.

Figure 5 Retrieving list of genes from cart and adding more information. A gene list saved as a cart may be retrieved and additional information added by choosing the option "Load more data for all genes in this cart" (arrow). This will open a new window (insert) in which more information can be loaded from various databases.
Technical aspects of the implementation of MUSIG-Phen
The implementation of the phenotype-centric use-case uses three SOAP/WSDL web services and our MGI Bio-Mart web service: Initially the Mammalian Phenotype (MP) ontology is loaded from the OLS web service. The user-selected MP term is sent as query input to the MGI, EuroPhenome and MUGEN web services and matching gene symbols are returned. Gene symbols can then be selected and sent to the gene-centric use-case shopping cart.
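The fan-out described above, one MP term sent to several services and the returned gene symbols merged, can be sketched with stub functions standing in for the SOAP clients. The MP ID and the gene symbols below are hypothetical.

```python
# Stub clients standing in for the MGI, EuroPhenome and MUGEN SOAP services;
# the MP term and the returned gene symbols are hypothetical.
def query_mgi(mp_term):
    return {"MP:0000001": ["Lep", "Lepr"]}.get(mp_term, [])

def query_europhenome(mp_term):
    return {"MP:0000001": ["Lepr", "Pomc"]}.get(mp_term, [])

def query_mugen(mp_term):
    return {"MP:0000001": ["Lep"]}.get(mp_term, [])

def genes_for_phenotype(mp_term):
    """Send one MP term to every service and merge the returned gene symbols."""
    merged = set()
    for client in (query_mgi, query_europhenome, query_mugen):
        merged.update(client(mp_term))
    return sorted(merged)

print(genes_for_phenotype("MP:0000001"))  # symbols can then go to the shopping cart
```

Deduplicating on the gene symbol, as the `set` does here, is what lets results from independent phenotype databases land in one shared cart.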
Basic information about web services, such as type (for example BioMart or SOAP) and location URL, is currently stored in a separate table. However, a larger web service catalogue such as BioMoby [38], Biocatalogue [39] or the mouse-centric MRB could easily be integrated and used to create a wider array of services. These services could also be linked to create a Taverna-like workflow tool which automatically matches IDs and fields from different services. The current limitation to this approach is the lack of standardization across databases and web services with respect to the use of ontologies and the naming of web service fields. For example, a field for MGI gene IDs could be called mgi_id, gene_id, MGIGeneId etc., which would make automatic matching impossible. We therefore favor the idea to develop a web service field ontology which should be integrated into MRB or Biocatalogue to provide a lookup service for field names. Currently, developments are ongoing within the Biocatalogue project to create a web service ontology to which web service developers annotate their fields, which may provide a suitable solution to this problem.

Figure 6 Extended result list. Result list displaying information from various databases. The MGI (column 5) and Ensembl IDs (arrows) are hyper-linked to the MGI and Ensembl databases, respectively. For the identification of non-synonymous SNPs, a query to the Ensembl database will create a new column (arrow, column 10) in which predicted amino acid changes are displayed (circle).

Figure 7 GO-term analysis. The representation of different GO-terms across the whole gene list in a cart can be displayed. For this option to be active, GO terms have to be loaded first with the "Load more data for all genes in this cart" option.

Figure 8 Browsing MP terms in MUSIG-Phen. On the MUSIG-Phen search site, the user may type in a phenotype description. Hitting the "Search" button will retrieve appropriate hits from the MP ontology. The different levels of the ontology may then be displayed by clicking in the "+" box. A click onto the MP term itself (arrow) will activate a query to the MGI database and retrieve all genes that are associated with it.
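The proposed field ontology can be pictured as a lookup that maps each service's local field name to a shared canonical term, so that fields from different services can be matched automatically. The canonical term below is invented; the variant field names come from the example in the text.

```python
# Each service annotates its field names against a shared canonical term so a
# matcher can align fields automatically. The canonical term is hypothetical;
# the variant names are those mentioned in the text.
FIELD_ONTOLOGY = {
    "mgi_gene_identifier": {"mgi_id", "gene_id", "MGIGeneId"},
}

def canonical_field(name):
    """Map a service-local field name to its shared ontology term, if any."""
    for term, variants in FIELD_ONTOLOGY.items():
        if name in variants:
            return term
    return None

def matchable(field_a, field_b):
    """Two services' fields can be joined if they map to the same term."""
    term = canonical_field(field_a)
    return term is not None and term == canonical_field(field_b)

print(matchable("mgi_id", "MGIGeneId"))
```

With such annotations in a catalogue like Biocatalogue or MRB, a workflow tool could wire services together without hand-written field mappings.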
The source code and documentation for the MUSIG-Phen prototype may be downloaded from the following web server: http://www.casimir.org.uk/sourcecode/
Discussion and Conclusion
The aim of generating the MUSIG-Gen and MUSIG-Phen applications was to provide a first set of solutions to user-defined use-cases and thereby generate a test environment for a fully distributed integration strategy. We also presented the applications to various user groups and collected their feed-back. All users appreciated the tools which were able to integrate data from several databases, and they especially liked the principle of the shopping cart. An additional, often mentioned suggestion was to link the genes in MUSIG-Gen to mouse mutants and phenotypes as well as gene expression information. We are planning to add these functionalities to future prototypes.
Our plan for a third use-case is to define the needs for an integration of mouse and human functional genomics databases. Here, we believe that the phenotype-centric use-case may serve as a valuable basis to provide an entry point for clinical researchers. The concept would be to enter descriptions of human disease phenotypes as queries and to obtain mouse phenotype descriptions which relate to these terms. However, for such a query, it will first be necessary to relate the human phenotype descriptions with MP terms or with more detailed EQ-based phenotype descriptions.
Effectiveness of Maternal Vaccination with mRNA COVID-19 Vaccine During Pregnancy Against COVID-19–Associated Hospitalization in Infants Aged <6 Months — 17 States, July 2021–January 2022
COVID-19 vaccination is recommended for persons who are pregnant, breastfeeding, trying to get pregnant now, or who might become pregnant in the future, to protect them from COVID-19.§ Infants are at risk for life-threatening complications from COVID-19, including acute respiratory failure (1). Evidence from other vaccine-preventable diseases suggests that maternal immunization can provide protection to infants, especially during the high-risk first 6 months of life, through passive transplacental antibody transfer (2). Recent studies of COVID-19 vaccination during pregnancy suggest the possibility of transplacental transfer of SARS-CoV-2-specific antibodies that might provide protection to infants (3-5); however, no epidemiologic evidence currently exists for the protective benefits of maternal immunization during pregnancy against COVID-19 in infants. The Overcoming COVID-19 network conducted a test-negative, case-control study at 20 pediatric hospitals in 17 states during July 1, 2021-January 17, 2022, to assess effectiveness of maternal completion of a 2-dose primary mRNA COVID-19 vaccination series during pregnancy against COVID-19 hospitalization in infants. Among 379 hospitalized infants aged <6 months (176 with COVID-19 [case-infants] and 203 without COVID-19 [control-infants]), the median age was 2 months, 21% had at least one underlying medical condition, and 22% of case- and control-infants were born premature (<37 weeks gestation). Effectiveness of maternal vaccination during pregnancy against COVID-19 hospitalization in infants aged <6 months was 61% (95% CI = 31%-78%). Completion of a 2-dose mRNA COVID-19 vaccination series during pregnancy might help prevent COVID-19 hospitalization among infants aged <6 months.
On February 15, 2022, this report was posted as an MMWR Early Release on the MMWR website (https://www.cdc.gov/mmwr).
Using a test-negative, case-control study design, vaccine performance was assessed by comparing the odds of having completed a 2-dose primary mRNA COVID-19 vaccination series during pregnancy among mothers of case-infants and control-infants (those with negative SARS-CoV-2 test results) (6). Participating infants were aged <6 months and admitted outside of their birth hospitalization to one of 20 pediatric hospitals during July 1, 2021-January 17, 2022. During this period, the SARS-CoV-2 Delta variant was the predominant variant in the United States through mid-December, after which Omicron became predominant. ¶ Case-infants were hospitalized with COVID-19 as the primary reason for admission or had clinical symptoms consistent with acute COVID-19,** and case-infants had a positive SARS-CoV-2 reverse transcription-polymerase chain reaction (RT-PCR) or antigen test result. No case-infant received a diagnosis of multisystem inflammatory syndrome. Control-infants were those hospitalized with or without COVID-19 symptoms and with negative SARS-CoV-2 RT-PCR or antigen test results. Enrolled control-infants were matched to case-infants by site and were hospitalized within 3-4 weeks of a case-infant's admission date. Baseline demographic characteristics, clinical information, and SARS-CoV-2 testing history were obtained through parent or guardian interviews performed by trained study personnel during hospitalization or after discharge, and electronic medical record review of the infant's record. 
¶ https://covid.cdc.gov/covid-data-tracker/#variant-proportions
** Symptomatic COVID-19-like illness was defined as one or more of the following: fever, cough, shortness of breath, gastrointestinal symptoms (e.g., diarrhea, vomiting, or "stomachache"), use of respiratory support (high-flow oxygen by nasal cannula, new invasive or noninvasive ventilation) for the acute illness, or new pulmonary findings on chest imaging consistent with pneumonia. Four case-infants tested at an outside hospital or other facility had some missing data on positive test results and were not retested at the study hospital.

Mothers were asked about their COVID-19 vaccination history, including number of doses and whether a dose had been received during pregnancy, location where vaccine was received, vaccine manufacturer, and availability of a COVID-19 vaccination card. Study personnel reviewed documented sources, including state vaccination registries, electronic medical records, or other sources (e.g., documentation from primary care providers) to verify vaccination status. Mothers were considered vaccinated against COVID-19 if they completed a 2-dose series of either Pfizer-BioNTech or Moderna mRNA COVID-19 vaccine, based on source documentation or by plausible self-report (provision of vaccination dates and location). Maternal COVID-19 vaccination status was categorized as 1) unvaccinated (mothers who did not receive COVID-19 vaccine before their infants' hospitalization) or 2) vaccinated†† (mothers who completed their 2-dose primary mRNA COVID-19 vaccine series during pregnancy ≥14 days before delivery). SARS-CoV-2 infection status of the mother during pregnancy or after delivery was not documented in this evaluation. Mothers were excluded if they were partially vaccinated during pregnancy (1 dose during pregnancy and none before pregnancy) or vaccinated after pregnancy (71), received Janssen (Johnson & Johnson) COVID-19 vaccine (four), received 2 doses of COVID-19 vaccination before pregnancy (seven), or received >2 doses of COVID-19 vaccine ≥14 days before delivery (10).
Descriptive statistics (Pearson chi-square tests and Fisher's exact tests for categorical outcomes or Wilcoxon rank-sum tests for continuous outcomes) were used to compare characteristics of case- and control-infants; p-values <0.05 were considered statistically significant. Effectiveness of maternal vaccination (i.e., vaccine effectiveness [VE]) against infant COVID-19 hospitalization was calculated using the equation VE = 100% × (1 − adjusted odds ratio of completing 2 doses of COVID-19 mRNA vaccines during pregnancy among mothers of case-infants and control-infants), determined from logistic regression models. Models were adjusted for infant age and sex, U.S. Census region, calendar time of admission, and race/ethnicity (6). Other factors were assessed (e.g., infant's underlying health conditions, Social Vulnerability Index, and behavioral factors) but were not included in the final model because they did not change the odds ratio of vaccination by >5% or because data on many infants were not available (e.g., breastfeeding history, prematurity, or child care attendance). In a secondary analysis, effectiveness of maternal receipt of the second dose of COVID-19 vaccination early in pregnancy (within the first 20 weeks) and late in pregnancy (21 weeks through 14 days before delivery) was assessed. Statistical analyses were conducted using SAS (version 9.4; SAS Institute). Procedures were approved as public health surveillance by each participating site and CDC and were conducted consistent with applicable federal law and CDC policy.§§

†† Mothers were defined as vaccinated after completing their 2-dose primary mRNA COVID-19 vaccine series during pregnancy, including both doses received during pregnancy or the first dose received before pregnancy and the second dose, completing the primary series, received during pregnancy. Data on maternal moderately or severely immunocompromising conditions were not recorded for mothers of enrolled infants to determine whether mothers needed an additional mRNA COVID-19 vaccine dose to complete their primary series.

During July 1, 2021-January 17, 2022, among 483 eligible infants in 20 pediatric hospitals in 17 states, 104 (22%) were excluded: 71 excluded infants were born to mothers partially vaccinated during pregnancy or vaccinated after delivery, 10 were born to mothers who received a third vaccine dose ≥14 days before delivery, and 23 were excluded for other reasons.¶¶ Among the remaining 379 hospitalized infants (176 case-infants and 203 control-infants), the median age was 2 months, 80 (21%) had at least one underlying medical condition, and 72 (22%) were born premature (Table 1). Among case-infants, 16% of mothers had received 2 COVID-19 vaccine doses during pregnancy, whereas 32% of control-infant mothers were vaccinated. Case- and control-infants had similar prevalences of underlying medical conditions (20% and 23%, respectively; p = 0.42) and prematurity (23% and 21%, respectively; p = 0.58). Case-infants were more commonly non-Hispanic Black (18%) and Hispanic (34%) than were control-infants (9% and 28%, respectively).
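The VE formula in the Methods is a one-line transformation of the adjusted odds ratio. A minimal sketch of that arithmetic (the 0.39 odds ratio below is a hypothetical input chosen for illustration, not a value reported by the study):

```python
def vaccine_effectiveness(adjusted_odds_ratio: float) -> float:
    """VE (%) = 100 x (1 - adjusted odds ratio), as defined in the Methods."""
    return 100.0 * (1.0 - adjusted_odds_ratio)

# Hypothetical illustration: an adjusted OR of 0.39 yields VE = 61%,
# the same arithmetic behind the study's headline 61% estimate.
print(vaccine_effectiveness(0.39))
```

An odds ratio of 1.0 (no difference in odds of maternal vaccination between case- and control-infants) corresponds to VE of 0%, and ratios above 1.0 give negative VE, which is why some confidence intervals in the report extend below zero.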
Among case-infants, 43 (24%) were admitted to an intensive care unit (ICU) (Table 2). A total of 25 (15%) case-infants were critically ill and received life support during hospitalization, including mechanical ventilation, vasoactive infusions, or extracorporeal membrane oxygenation (ECMO); among these critically ill infants, one (0.4%) died. Of the 43 case-infants admitted to an ICU, 88% had unvaccinated mothers. The mothers of the case-infant who required ECMO and of the case-infant who died were both unvaccinated.
VE of a completed 2-dose maternal primary mRNA COVID-19 vaccination series during pregnancy against COVID-19-associated hospitalization in infants aged <6 months was 61% (95% CI = 31% to 78%) (Table 3). Among 93 mothers classified as vaccinated, 90 (97%) had documented dates of vaccination. Effectiveness of a completed 2-dose COVID-19 vaccination series early in pregnancy (first 20 weeks) was 32% (95% CI = −43% to 68%), although the

§§ 45 C.F.R. part 46.102(l)(2), 21 C.F.R. part 56; 42 U.S.C. Sect 241(d); 5 U.S.C. Sect 552a; 44 U.S.C. Sect 3501 et seq.

¶¶ Other reasons for excluding infants from the analysis included May or June hospital admission (two); birth to mothers who received Janssen (Johnson & Johnson) COVID-19 vaccine (four), who received their second dose of vaccine <14 days before delivery (three), who received a 2-dose primary mRNA COVID-19 vaccine series before pregnancy (seven), or with unknown vaccination status (one); infants who received a positive SARS-CoV-2 test result but were admitted for non-COVID-19 reasons (four); and SARS-CoV-2 testing >10 days after illness onset or >3 days from hospitalization (two).

Table 1 notes: † If N is less than total. § Testing for statistical significance was conducted using the Pearson chi-square test and Fisher's exact test for comparisons with fewer than five observations; Wilcoxon rank-sum tests were used to compare continuous data. ¶ CDC/Agency for Toxic Substances and Disease Registry SVI documentation is available at https://www.atsdr.cdc.gov/placeandhealth/svi/index.html. Median SVI for case-infants and control-infants are based on 2018 U.S. SVI data; the SVI ranges from 0 to 1.0, with higher scores indicating greater social vulnerability. One control-infant was missing an SVI score. ** January numbers do not reflect the entire month; patients included were admitted through January 17, 2022.
Table 2 notes: †† Other chronic conditions included rheumatologic/autoimmune disorder, hematologic disorder, renal or urologic dysfunction, gastrointestinal/hepatic disorder, metabolic or confirmed or suspected genetic disorder, or atopic or allergic condition. §§ COVID-19 vaccination status included the following two categories: 1) unvaccinated (mothers who did not receive COVID-19 vaccine doses before their infant's hospitalization) or 2) vaccinated (mothers who completed their 2-dose primary mRNA COVID-19 vaccination series during pregnancy and ≥14 days before delivery). ¶¶ Timing of vaccination is based on date of receipt of the second dose of a 2-dose primary mRNA COVID-19 vaccine series during pregnancy. *** Behavioral factors were reported during an interview with the mother or a proxy; breastfeeding included any breastfeeding (either exclusive or partial).
Discussion
During July 2021-January 2022, maternal completion of a 2-dose primary mRNA COVID-19 vaccination series during pregnancy was associated with reduced risk for COVID-19 hospitalization among infants aged <6 months in a real-world evaluation at 20 U.S. pediatric hospitals during a period of Delta and Omicron variant circulation. Among 176 infants aged <6 months hospitalized with COVID-19, 148 (84%) were born to mothers who were not vaccinated during pregnancy. Although booster doses are recommended for pregnant women, VE of maternal booster doses received during pregnancy could not be assessed because of small sample size, which likely underestimated VE. Overall, these findings indicate that maternal vaccination during pregnancy might help protect against COVID-19 hospitalization among infants aged <6 months. COVID-19 during pregnancy is associated with severe illness and death (7), and pregnant women with COVID-19 are more likely to experience preterm birth, stillbirth, and other pregnancy complications (8). Vaccination is recommended for pregnant women to prevent COVID-19, including severe illness and death. COVID-19 vaccination is safe and effective when administered during pregnancy (9,10). Receipt of COVID-19 vaccination during pregnancy is associated with detectable antibodies in maternal sera at delivery, in breast milk, and in infant sera, indicating transfer of maternal antibodies (3)(4)(5). The higher VE point estimates among infants born to women vaccinated later in pregnancy are consistent with the possibility of transplacental transfer of SARS-CoV-2-specific antibodies that might provide protection to infants. The optimal timing of maternal vaccination for the transfer
Summary
What is already known about this topic? COVID-19 vaccination during pregnancy is recommended to prevent severe illness and death in pregnant women. Infants are at risk for COVID-19-associated complications, including respiratory failure and other life-threatening complications.
What is added by this report?
Effectiveness of maternal completion of a 2-dose primary mRNA COVID-19 vaccination series during pregnancy against COVID-19-associated hospitalization among infants aged <6 months was 61%.
What are the implications for public health practice?
Completion of a 2-dose mRNA COVID-19 vaccination series during pregnancy might help prevent COVID-19 hospitalization among infants aged <6 months.
of antibodies to protect the infant is currently uncertain, and the direct effect of maternal COVID-19 vaccination in preventing severe COVID-19 in infants has not previously been described. Further, with infants not currently age-eligible for vaccination and infant hospitalization rates remaining at the highest levels of the pandemic,*** this study suggests that maternal COVID-19 vaccination during pregnancy might protect infants aged <6 months from COVID-19-related hospitalization.
The findings in this report are subject to at least seven limitations. First, VE could not be assessed directly against specific variants. Second, the sample was too small to assess VE by pregnancy trimester of vaccination, and the small sample size resulted in wide confidence intervals for some estimates, which should be interpreted with caution. Third, the analysis did not assess whether pregnant women were infected with SARS-CoV-2 before or during pregnancy, which might have provided maternal antibodies. Fourth, residual confounding, such as additional differences in behaviors between vaccinated and unvaccinated mothers (including whether mothers had prenatal care) that might affect risk for infection, cannot be excluded, and potential confounders (e.g., breastfeeding, child care attendance, and prematurity) could not be accounted for in the model because this information was not available for all infants. Fifth, because this analysis included self-reported data for a few participants, maternal vaccination status might be misclassified for a few infants, or there might be imperfect recollection of whether the mother completed COVID-19 vaccination during pregnancy. Sixth, immunocompromising maternal conditions were not collected to determine whether mothers needed an additional mRNA COVID-19 vaccine dose to complete their primary series. Finally, VE of maternal booster doses received during pregnancy could not be assessed because of small sample size.

*** https://gis.cdc.gov/grasp/covidnet/COVID19_5.html
Completion of a 2-dose primary mRNA COVID-19 vaccination series during pregnancy was associated with reduced risk for COVID-19-associated hospitalization among infants aged <6 months, and protection was higher among infants whose mothers were vaccinated later in pregnancy. Additional evaluation should examine timing of vaccination before pregnancy compared with during pregnancy. CDC recommends that women who are pregnant, are breastfeeding, are trying to get pregnant now, or might become pregnant in the future get vaccinated and stay up to date with COVID-19 vaccination.†††
The Impact of Environmental, Social and Governance Index on Firm Value: Evidence from Malaysia
In this study we investigate the relationship between environmental, social and governance (ESG) practices and the consequences of their disclosure for firm value. Our data are extracted from the final accounts of 122 firms listed on Bursa Malaysia over the period 2011 to 2019, giving 1,098 observations. We use three instrumental variables to address the endogeneity of ESG performance: the existence of a CSR committee on the board of directors, the dispersion of forecasted earnings, and the ownership concentration of the firm. We estimate three first-stage regression models for ESG disclosure and the interactions between ESG strength, ESG concern, and ESG disclosure, and a second-stage regression to investigate the effects of ESG activities and ESG disclosure on firm value. Our results are consistent with the view that ESG strength increases firm value, whereas ESG disclosure and ESG concern decrease it. Most importantly, this study finds that ESG disclosure can reduce the negative effect of ESG weaknesses and amplify the positive effect of ESG strengths.
INTRODUCTION
The research objective is driven by the concern that the ESG index affects the market values of Malaysian listed firms. To identify the association between the ESG index and firm value, we investigate the interrelationship between a firm's strengths and weaknesses with respect to environmental and social practices. Numerous researchers worldwide have recently evaluated the impact of ESG across different domains of business. The focus of this study is the valuation effect of ESG on a firm's financial status. The question that arises, however, is how firm value is linked to ESG-related disclosures. Our research is therefore concerned with identifying the effects of ESG disclosure on the firm internally and on the wider economy, as well as the moderating impact of such disclosures on firm value. Moreover, ESG disclosure may act as both a strength and a weakness for a firm: it reduces information asymmetries for investors, helping them assess firm value more effectively when making decisions, but firm value may also be impaired where investors find disclosures to be window dressing or frivolous.
As far as the financial performance of firms with respect to ESG disclosures is concerned, it needs to be investigated rigorously in the Malaysian economy. As Malaysia is an emerging economy, it should be aware of its strengths and weaknesses in order to protect both local and international investors. However, previous studies that investigated this issue in various parts of the world found conflicting results and a lack of unanimity. In the initial stages, ESG practices were considered a cost, and where they exceeded the minimum requirements of legal standards they were seen to reduce firm value. However, the whole idea of environmental and social regulation rests on the notion that firms must be compelled to adopt practices that make them responsible for the betterment of the environment and society; otherwise, they will find such practices unprofitable or costly and will be reluctant to adopt them of their own volition. Giese et al. (2019) found that ESG practices have the potential to increase firm value. Moreover, the disclosure of social and environmental activities improves management's ability to attract qualified employees and negotiate with them on its own terms. These activities also strengthen the firm's interaction with its stakeholders and enhance its reputation in the eyes of the community (Duque-Grisales and Aguilera-Caracuel, 2019).
Some earlier empirical studies reported conflicting results on the relationship between ESG and firm performance, including adverse findings on the association between ESG and firm value. However, Fatemi et al. (2018) found a positive association between ESG and firm value in a meta-analysis, although this relationship gradually decreased over time.
It has also been noted within academic circles that many firms, especially well-known multinationals, report on ESG matters with the aim of demonstrating legitimacy and enhancing their reputations. For instance, in 1996, corporate social responsibility (CSR) was reported by only 300 firms globally. This number gradually increased, and by the end of 2014 the corresponding figure was more than 7,000 firms worldwide. Despite this, the Global Reporting Initiative (GRI) guideline council notes that the overall quality of ESG disclosures remains heterogeneous (Ashwin et al., 2016).
While assessing the relationship between ESG and firm value, it is necessary to recognize that ESG reporting may reflect many motives beyond the simple signaling of strengths and weaknesses. Changes in ESG policies can be explained through disclosures, and bad reputations can also be mitigated by them. There is also the possibility that a firm underreports its ESG activities for fear that it would not be able to maintain its earnings track record in the future.
Environmental, Social and Governance
When the concept of ESG was first introduced, the relationship between ESG activities and firm growth was consistently found to be negative (Fatemi et al., 2011). This argument is best elaborated by Kim et al. (2011): the primary responsibility of a firm should be to maximize shareholder profit, and the underlying assumption is that ESG costs must not exceed the payoff from ESG activities. A recent study by Aboud and Diab (2018), which explored firms' ESG reporting after winning green awards, found that such firms experience negative abnormal returns. Findings such as these lend credence to suggestions that firms are ultimately punished by investors for what are perceived to be loss-making investments.
Currently there is increased awareness of socially responsible behavior worldwide, so it is generally assumed that activities related to the welfare of the community reflect positively on a firm's status in terms of monetary value (Mervelskemper and Streit, 2017). Velte (2017) notes that stakeholder theory argues that non-owner stakeholders have greater opportunities to safeguard their interests under the umbrella of socially responsible behavior: non-owner stakeholders, including customers, employees, debtors, and state regulators, have comparatively better contracting options, which provides a new path for growth and reduces risk. Moreover, from a strategic management perspective, CSR is neither a cost, a constraint, nor a charitable act; it is a source of opportunity, innovation, and competitive advantage (Husted and de Sousa-Filho, 2017).
With respect to empirical analysis, there is a large volume of literature regarding ESG (CSR) factors. Several studies have reported a positive association between ESG and non-financial performance measures such as efficient production process and minimum material and energy consumption.
However, the relationship between ESG practices and financial performance has also been examined in various studies. Numerous studies have found a negative or insignificant relationship between ESG performance and firm value, depending on the sample data chosen. Other studies found a positive relationship between ESG performance and firm value by deploying structural equation modelling in which environmental performance and control are proxied by economic performance. El Ghoul et al. (2017) identified a positive relationship between ESG performance and firm value using data from 53 countries.
Environmental, Social and Governance (ESG) Disclosure
The intensity, methods, and format of ESG (CSR) reporting vary from firm to firm. Some firms report ESG performance in accordance with guidelines put forward by the Global Reporting Initiative (GRI) (Vigneau et al., 2015). To align with international standards, the integrated reporting (IIR) initiative put forward a set of standards developed in accordance with the internationally published framework of 2013 (Camilleri, 2018). However, these conventional channels are not accessible to everyone, so firms have started using non-traditional methods such as websites and social media to disclose their ESG initiatives.
Independent researchers collect data manually from annual reports and corporate websites in order to develop ratings that define the quality of ESG reporting. The most recent ESG disclosure ratings are provided by specialist commercial information providers; Bloomberg is one such provider that compiles a database of ESG ratings.
ESG reporting may be used to portray a good image to the public and create a favorable perception by documenting changes in existing policies with respect to ESG matters. For example, a firm may exaggerate its disclosures in order to hide negative effects of its production on the environment; in this manner, the firm can maintain its reputation and market value where checks and balances are weak. ESG disclosures can thus amount to window dressing rather than reflecting the true picture, with firms trying to appear more ESG-conscious than they are. This study contributes to the managerial and accounting literature in several ways. First, we are not aware of any study of social and environmental disclosure that discusses its impact on firm value. Moreover, using Malaysian firms' data, we examine the association of social and environmental disclosure with firm performance, from which firm value is ultimately derived. We also investigate the notion that good corporate investors evaluate firm value by analyzing a firm's reputation, operating environment, and earnings growth, which ultimately results in quality reported earnings. Finally, this study covers social and environmental disclosure as a whole, rather than considering the impact of only one or two areas such as customer satisfaction, environmental performance, or workplace quality.
Environmental and social disclosure is, moreover, a means of transparency to investors and of accountability to regulators, and it helps investors and stakeholders make appropriate decisions. It can be understood as the activity of giving relevant information to investors and stakeholders in order to support strategic decisions (Butar-Butar and Indarto, 2018). As far as the relationship between ESG and firm value is concerned, instrumental theory, legitimacy theory, and signaling theory are essential to explain it (Moesono et al., 2016).
There is a vigorous debate worldwide about the returns associated with environmental and social governance, with corporate investors focusing on the costs of these practices relative to financial returns. A number of previous studies have found that financial performance and corporate social responsibility are positively related (Wang et al., 2018), although management may exercise accounting discretion to manipulate reported numbers to satisfy the desired targets of investors.
Hypothesis Development
As Malaysia is a rapidly emerging economy and the use of both corporate and social practices is steadily increasing, we have the incentive to investigate the relationship between ESG disclosures and firm value using a sample of Malaysian listed firms.
Our hypothesis assumes that firm value is directly proportional to the ESG index: the more positive the ESG disclosure in annual reports, the more it enhances the firm's value. In other words, we argue that firm disclosures related to ESG activities are positively associated with the moderation of firm value, and we derive our theoretical model accordingly. In keeping with the results of previous studies, we anticipate a positive relationship between ESG disclosure and firm value in all firms. Hence, holding everything else constant, we assume that ESG strength and firm value are positively associated, whereas ESG concern and firm value are negatively associated. Moreover, managerial motives also have the potential to drive ESG disclosures in different ways. Given the inconclusive findings of previous studies, we cannot posit a first-order relationship between ESG disclosure and firm value or between firm value and the different attributes of ESG, such as ESG strength and ESG weakness. We therefore test the simple null hypothesis of no relationship.
RESEARCH METHODOLOGY
To examine how ESG activities and ESG disclosure, individually and in combination, affect firm value, we must address the omitted variables or simultaneity that can produce endogeneity of ESG disclosure. If firm value in turn affects ESG disclosure, the error term will be correlated with ESG disclosure in the regression, and the estimated coefficients may be biased and inconsistent. We therefore apply an instrumental variable approach to manage this endogeneity (Eugster and Wagner, 2015).
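The bias that motivates the instrumental variable approach can be shown with a small simulation (all variables, coefficients, and the instrument below are invented for illustration, not taken from the paper's data): when an unobserved factor drives both disclosure and firm value, the OLS slope is biased, while an instrument that moves disclosure but is independent of the error recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                      # instrument (e.g., a CSR-committee signal)
u = rng.normal(size=n)                      # unobserved confounder
disclosure = 0.8 * z + u + rng.normal(size=n)       # endogenous regressor
value = 0.5 * disclosure + u + rng.normal(size=n)   # true coefficient = 0.5

# OLS slope Cov(x, y)/Var(x): biased upward because u enters both equations
beta_ols = np.cov(disclosure, value)[0, 1] / np.cov(disclosure, value)[0, 0]
# IV (Wald) slope Cov(z, y)/Cov(z, x): consistent because z is independent of u
beta_iv = np.cov(z, value)[0, 1] / np.cov(z, disclosure)[0, 1]

print(f"OLS: {beta_ols:.2f}  IV: {beta_iv:.2f}")
```

With this data-generating process the OLS slope converges to roughly 0.88 rather than the true 0.5, while the IV estimate centers on 0.5, which is the sense in which endogeneity would otherwise contaminate the ESG disclosure coefficient.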
In this study, three instrumental variables are used for our potentially endogenous variable, ESG disclosure (ESGDIS). The first is the existence of a CSR committee on the board of directors (CSCOM). Previous researchers found that the existence of a CSR committee supports the disclosure of information related to greenhouse gas emissions and is associated with higher-quality disclosures. Moreover, previous studies found that a CSR committee on the board pushes the firm to disclose more comprehensive information regarding social issues (Liao et al., 2015). The evidence suggests that the main role of the CSR committee is to provide sustainability-related information to stakeholders. We therefore expect the correlation between ESG disclosure and our first instrumental variable to be high enough to satisfy the relevance requirement. One researcher argued that the existence of a CSR committee has no impact on firm performance; comparing the market values of firms with and without such committees produced almost identical results. Hence, this instrumental variable hardly affects firm value directly, which supports its exogeneity.
The second instrumental variable we use in this study is the dispersion of analysts' earnings forecasts (DAEF). Previous evidence indicates that DAEF is negatively associated with the level of disclosure: disclosure reduces uncertainty in forecasted earnings and improves the quality of information available to analysts. In addition, studies have found that analysts' forecast errors are negatively associated with mandated CSR activities and positively associated with voluntary ESG activities (Harjoto and Laksmana, 2018). Comparing these findings, we conclude that DAEF satisfies the relevance condition for an instrument. There is no conclusive evidence regarding the association between DAEF and firm value: some studies find that higher DAEF results in inflated stock prices and deflated future returns (Sandwidi and Cellier, 2019), others report that higher DAEF is associated with lower current stock prices and higher future profits, some report no monotonic association, and others fail to find any significant association. In light of this evidence, we cannot simply assume the exogeneity of our second instrument, and we therefore conduct several post-estimation tests to investigate its validity.
The third instrumental variable we use for ESG disclosure is the ownership concentration of the firm's stock (OWCFS). Lagasio and Cucari (2019) found the association between OWCFS and the disclosure index to be negative. For instance, family-owned firms have majority shareholders and are unlikely to disclose more than is required by law because the public's demand for disclosure is relatively low; majority shareholders have access to resources beyond publicly available disclosure reports, which they can easily access during board meetings. Prior research found that firms with high ownership concentration are less interested in making any type of disclosure (Fatemi et al., 2018). We therefore expect OWCFS to fulfill the relevance condition. However, the evidence on ownership structure and firm performance is mixed: some studies report a positive relationship, others a negative relationship, and still others no significant relationship at all (Wanzenried, 2018). We therefore conduct several post-estimation tests to investigate the validity of this instrument.
To complete our model, we must account for the interaction terms and the potential endogeneity of both ESG disclosure and ESG activities, drawing the line between ESG strength and ESG concern. Following previous studies, we use the interaction between ESG strength (ESGSTR) and instrumented ESG disclosure (ESGDIS), and the interaction between ESG concern (ESGCONR) and instrumented ESG disclosure, as the instrumented interaction terms among ESGSTR, ESGDIS, and ESGCONR. Our empirical approach is therefore based on three first-stage regressions, one for each endogenous variable. To examine whether the relevance and exogeneity conditions of our instruments are satisfied, we rely on several post-estimation test statistics: the Angrist-Pischke partial F-statistic and Shea's partial R2, together with tests for under-identification, weak identification, and over-identification.
Following previous literature, we use several control variables that may play a part in ESG disclosure, ESG activities, and firm value. These include return on assets (ROA) and the increase in ROA (INCROA) as proxies of firm performance; firm size, measured as the logarithm of total sales (FSALE); asset intensity, measured as the asset-to-sales ratio (ASSAL); leverage, measured as the debt-to-equity ratio (DEEQ); advertising expenditure (SAEX); and research and development expenditure (REDE). Furthermore, we include industry and year fixed effects. Our two-stage model is then estimated with these variables.
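A compact sketch of such a two-stage least squares estimation on simulated data (variable names follow the paper's abbreviations, but every value and coefficient below is invented; this is not the study's dataset or its exact specification): the first stage regresses ESG disclosure on the three instruments plus a control, and the second stage regresses Tobin's Q on the fitted disclosure plus the control.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Simulated instruments and control (names follow the paper; data are made up)
cscom = rng.integers(0, 2, n).astype(float)   # CSR committee dummy (CSCOM)
daef  = rng.normal(size=n)                    # forecast dispersion (DAEF)
owcfs = rng.normal(size=n)                    # ownership concentration (OWCFS)
roa   = rng.normal(size=n)                    # return on assets (ROA)
u     = rng.normal(size=n)                    # unobserved confounder -> endogeneity

esgdis = 1.0 * cscom - 0.5 * daef - 0.4 * owcfs + 0.3 * roa + u + rng.normal(size=n)
tobinq = -0.2 * esgdis + 0.5 * roa + u + rng.normal(size=n)   # true effect: -0.2

def ols(y, *cols):
    """Least-squares coefficients (intercept prepended) and the design matrix."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0], X

# First stage: regress the endogenous ESGDIS on instruments + exogenous control
b1, X1 = ols(esgdis, cscom, daef, owcfs, roa)
esgdis_hat = X1 @ b1                          # fitted (instrumented) disclosure

# Second stage: regress Tobin's Q on fitted disclosure + control
b2, _ = ols(tobinq, esgdis_hat, roa)
print(f"2SLS coefficient on disclosure: {b2[1]:.2f}")
```

Because the confounder u enters both equations, a naive OLS of Tobin's Q on raw esgdis would be biased, while the fitted disclosure uses only the instruments' exogenous variation and the second-stage coefficient centers on the true value. (In practice the manual second stage understates standard errors; dedicated 2SLS routines correct them.)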
Data and Sample
Since Malaysia is an emerging economy, it must build ESG mechanisms in order to compete in international markets. Sustainable investment has already proven significant for business development and is favored by shareholders. Stock exchange analysis indicates that investors are deeply interested in ESG-related activities and disclosures, and industry experts in Malaysia acknowledge that ESG disclosure will be paramount for the long-term development and stability of a healthy capital market.
Both individual investors and institutions integrate the ESG factor when making investment decisions.
Our sample data were hand collected from the websites of Malaysian companies. The data belong to non-financial companies listed on the Malaysian stock exchange, Bursa Malaysia, over the period 2011 to 2019. These firms represent different business sectors except the financial sector (which is subject to different regulatory bodies and compliance requirements) and fulfill the criteria required by the factors comprising ESG research. We excluded firms whose core information (for example, total sales or total expenses) was missing or that were reluctant to provide transparent information. Of the more than 900 companies listed on Bursa Malaysia, we selected 122, mostly non-family owned (since family-owned companies were unwilling to disclose ESG information in their reports or tried to hide the actual information), belonging to different business sectors, for a total of 1,098 observations. Table 1 contains the descriptive statistics of our sample. The mean value of TOBINQ is 1.89, with a median of 1.86 and a standard deviation of 1.0. ESG disclosure has a mean of approximately 21.9, a median of 22.9, and a standard deviation of 12.8. The mean values of ESG strength and ESG concern are 3.313 and 3.67, with medians of 3.51 and 3.57 and standard deviations of 3.81 and 2.47, respectively. These results are reasonable and consistent with previous studies in the same area. The descriptive statistics of our instrumental variable, the CSR committee, show a mean of 51% and a median of 54%. The mean of our second instrumental variable, earnings forecast dispersion, is 13.6%, with a median of 13.8% and a standard deviation of 21%.
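The Table 1 summary measures (mean, median, standard deviation) can be computed for any variable with Python's standard library; the Tobin's Q values below are hypothetical, chosen only to mimic the magnitudes reported in the text.

```python
import statistics as st

# Hypothetical sample of firm-level Tobin's Q observations (illustrative only)
tobin_q = [1.2, 1.6, 1.9, 2.1, 2.4, 1.8, 1.5, 2.6]

mean = st.mean(tobin_q)
median = st.median(tobin_q)
stdev = st.stdev(tobin_q)   # sample standard deviation (n - 1 denominator)
print(f"mean={mean:.2f} median={median:.2f} sd={stdev:.2f}")
```

The same three calls, applied column by column to the full panel, would reproduce a Table 1 style summary for ESGDIS, ESGSTR, ESGCONR, and the controls.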
The third instrumental variable, ownership concentration, has a mean of 11.8% with a standard deviation of around 12%. The control variables ROA, FSALE, ASSAL, DEEQ, SAEX, and REDE have mean values of 8%, 8.3%, 201%, 41.7%, 1.8%, and 2.1%, respectively. This data analysis is consistent with most prior studies related to CSR performance and corporate governance. ESG concern has a positive correlation with FSALE and DEEQ and a negative correlation with ROA, INCROA, and ASSAL. ESG concern has a significant correlation with all control variables except ROA and INCROA, which are indicative of performance. Table 3 contains the results of the two-stage least squares estimation. The first three columns present the first-stage regressions, explaining ESG disclosure and the interactions between ESG strength, ESG concern, and ESG disclosure (t-statistics and p-values are given in parentheses; ***, **, and * denote significance at the 1%, 5%, and 10% levels, two-sided tests). The fourth column presents the second-stage regression, which investigates the influence of ESG activities and ESG disclosure, as well as the interaction between ESG activities, ESG disclosure, and firm performance.
Regression Results of ESG Performance, ESG Disclosure and Firm Value
In Table 3, the reported results show that the CSR committee (CSCOM) is a significant determinant in all three first-stage regressions explaining ESG disclosure (ESGDIS). Earnings forecasting (DAEF) is significant only in the third first-stage regression, which models the interaction of ESG concern and ESG disclosure. Ownership concentration is a significant determinant in the first and second first-stage regressions, that is, those for ESGDIS and for the interaction of ESGSTR and ESGDIS. Hence, the existence of a CSR committee supports ESG disclosure, whereas the ownership structure tends to resist it. Our results explain a variance of around 60% in the second stage for Tobin's q. The results indicate that firm value is significantly increased by ESG strength, whereas it is significantly decreased by ESG concern and ESG disclosure.
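To make the estimation strategy concrete, here is a minimal synthetic sketch of a two-stage least squares (2SLS) estimator of the kind used above. All variables are simulated stand-ins (z playing the role of an instrument such as the CSR-committee indicator, x an endogenous regressor such as ESG disclosure, y the outcome such as Tobin's q); none of this is the paper's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 1))              # instrument (e.g. CSR committee)
u = rng.normal(size=n)                   # unobserved confounder
x = z[:, 0] + u + rng.normal(size=n)     # endogenous regressor (e.g. ESGDIS)
y = 2.0 * x + u + rng.normal(size=n)     # outcome (e.g. Tobin's q); true beta = 2

def two_sls(y, x, z):
    # Stage 1: project the endogenous regressor onto the instruments.
    Z = np.column_stack([np.ones(len(y)), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: ordinary least squares of y on the fitted values.
    X = np.column_stack([np.ones(len(y)), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

beta_2sls = two_sls(y, x, z)
```

Because the confounder u enters both x and y, a naive OLS of y on x is biased; the instrument purges that correlation, so beta_2sls should be close to the true coefficient 2.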
CONCLUSION
This study examines the association between the ESG index and firm value. The regression results indicate that firm value increases with ESG strength and decreases with ESG concern and ESG disclosure. This suggests that in the presence of ESG strength, ESG disclosure should not be high, because it may weaken the positive uplift in valuation derived from ESG strength.
A possible explanation for this finding is that if disclosure is high, it could be surmised that the firm is trying to justify a high level of ESG cost. The negative valuation effects of ESG concern are also weakened by disclosure. However, disclosure provides an opportunity for the firm to legitimize its behavior by explaining the appropriateness of its ESG policies and the related operational benefits to investors. In other words, the firm can convince investors that it can overcome the weaknesses identified in ESG by changing its existing mode of conducting operations.
Equiangular lines and the Lemmens-Seidel conjecture
In this paper, claims by Lemmens and Seidel in 1973 about equiangular sets of lines with angle $1/5$ are proved by carefully analyzing pillar decompositions, with the aid of the uniqueness of two-graphs on $276$ vertices. The Neumann Theorem is generalized in the sense that if there are more than $2r-2$ equiangular lines in $\mathbb{R}^r$, then the angle is quite restricted. Together with techniques on finding saturated equiangular sets, we determine the maximum size of equiangular sets "exactly" in an $r$-dimensional Euclidean space for $r = 8$, $9$, and $10$.
Introduction
A set of lines in a Euclidean space is called equiangular if any pair of lines forms the same angle. For example, the four diagonal lines of a cube are equiangular in R^3 with the angle arccos(1/3), and the six diagonal lines of an icosahedron form 6 equiangular lines with angle arccos(1/√5). The structure of methane (CH_4) also contains equiangular lines: the carbon-hydrogen chemical bonds form the same angle (about 109.5 degrees). Equiangular lines in real and complex spaces are related to many beautiful mathematical topics and even to quantum physics, such as SIC-POVMs [RBKSC04, SG10, Sco06, Zau]. First, equiangular lines in real spaces are equivalent to the notion of two-graphs, which has caught much attention in algebra [GR13]. A classical way to construct equiangular lines comes from combinatorial designs. For instance, the 90 equiangular lines in R^20 and the 72 equiangular lines in R^19 can be obtained from the Witt design. The details can be found in Taylor's thesis of 1971 [Tay71]. The spherical embedding of certain strongly regular graphs can also give rise to equiangular lines [Cam04]; the maximum size of a set of equiangular lines in R^23 is 276, which can be constructed from the strongly regular graph with parameters (276, 135, 78, 54). Such a configuration is the solution to an energy minimizing problem [SK97], also known as the Thomson problem. The Thomson problem is to determine the minimum electrostatic potential energy configuration of N electrons constrained to the surface of a unit sphere that repel each other with a force given by Coulomb's law. The physicist J. J. Thomson posed the problem in 1904 [Tho04]. The configurations of several maximum sets of equiangular lines give rise to the minimizers of a large class of energy minimizing problems, the so-called universally optimal codes [CK07].
Furthermore, if we have r(r+1)/2 equiangular lines in R^r (this number is known as the Gerzon bound [LS73]), then they offer a construction of tight spherical 5-designs [Del77], which are also universally optimal codes. So far, the Gerzon bound is achieved only when r = 2, 3, 7, and 23. Special sets of equiangular lines, called equiangular tight frames (ETFs), are related to optimal line packing problems [MS18]. ETFs achieve the classical Welch bound [Wel74], which is a lower bound on the maximum absolute value of inner products between distinct points on the unit sphere: if we have M points {x_i}_{i=1}^M on the unit sphere in R^r, then max_{i≠j} |⟨x_i, x_j⟩| ≥ √((M − r)/(r(M − 1))).
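The Welch bound mentioned above is easy to check numerically. The sketch below (our own illustration; the function name and setup are not from the paper) evaluates the bound for M unit vectors in R^r and verifies on a random configuration that the maximal coherence indeed sits above it:

```python
import numpy as np

def welch_bound(M, r):
    # Lower bound on max_{i != j} |<x_i, x_j>| for M unit vectors in R^r
    return np.sqrt((M - r) / (r * (M - 1)))

rng = np.random.default_rng(1)
M, r = 12, 4
X = rng.normal(size=(M, r))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # M random unit vectors
coherence = np.max(np.abs(X @ X.T - np.eye(M)))  # largest off-diagonal |inner product|
```

An equiangular tight frame would meet the bound with equality; a generic random configuration strictly exceeds it.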
The study of ETFs has numerous references [FJMP18, SH03, FMJ16, JMF14, BGOY15, Wal09]. From another point of view, a set of equiangular lines can be regarded as a collection of points on the unit sphere such that distinct points in the set have mutual inner products either α or −α for some α ∈ [0, 1). Below we formally state its definition.
Definition 1.1. A finite set X of unit vectors in R^r is called equiangular with angle α ∈ [0, 1) if ⟨x, y⟩ = ±α for any two distinct x, y ∈ X. (1)
By abuse of language, we will say that a set of vectors satisfying condition (1) is equiangular with angle α, although the actual angle of intersection is arccos α. A natural question in this context is: what is the maximum size of equiangular sets in R^r? We denote this quantity by M(r). The values of M(r) have been extensively studied over the last 70 years. It is easy to see that M(2) = 3, and the maximum construction is realized by the three diagonal lines of a regular hexagon. In 1948, Haantjes [Haa48] showed that M(3) = M(4) = 6. In 1966, van Lint and Seidel [vLS66] showed that M(5) = 10, M(6) = 16, and M(7) ≥ 28. Currently, there are only 35 known values of M(r), all with r ≤ 43. To the best of our knowledge, the known ranges of M(r) for 2 ≤ r ≤ 43 are listed in Table 1 (see [AM16, BY14, GKMS16, Gre18, Szö17, Yu15]). The techniques used in these works include semidefinite programming [BY14, OY16, GY18], the analysis of eigenvalues of Seidel matrices [GKMS16, Gre18], polynomial methods [GY18], Ramsey-type arguments for asymptotic bounds [BDKS18], forbidden subgraphs for graphs of bounded spectral radius [JP17], and algebraic graph theory [GR13, Szö17].
The motivation for the study of equiangular lines can also be various. For instance, Bannai, Okuda and Tagami [BOT15] considered the tight harmonic index 4-design problem and proved that the existence of a tight harmonic index 4-design is equivalent to the existence of (r+1)(r+2)/6 equiangular lines with angle 3/(r+4) in R^r. Later, Okuda and Yu [OY16] proved that such equiangular lines do not exist for any r > 2. For more information about harmonic index t-designs, see the references [BOT15, ZBB+17, BZZ+18, BBX+18].
The main contribution of this paper is a proof of the result that Lemmens and Seidel claimed to be true in 1973. In [LS73], Lemmens and Seidel claimed that the following conjecture holds when the base size K = 2, 3, 5 (for the definition of base size, see Definition 2.5):

Conjecture 1.2 ([LS73], Conjecture 5.8). The maximum size of equiangular sets in R^r with angle 1/5 is 276 for 23 ≤ r ≤ 185, and ⌊(r − 5)/2⌋ + r + 1 for r ≥ 185.

Although the conjecture was prominent in the study of equiangular lines, no proof was found in the literature for the cases K = 3, 5. Following the discussion of pillar methods, we use techniques from linear algebra, linear programming, and the uniqueness of the two-graph on 276 vertices to prove the cases K = 3, 5, and offer a partial solution for K = 4. We also offer better upper bounds on equiangular sets under some special assumptions on the pillars. There is another interesting phenomenon that draws our attention. It is well known that M(8) = 28 (see Table 1), but those 28 lines always live in a 7-dimensional subspace of R^8 ([GY18], Theorem 4). Glazyrin and Yu [GY18] ask for the maximum size of equiangular sets of general ranks. The following theorem essentially states that the angle is restricted when the size of an equiangular set is large enough.

Theorem 1.3 (Neumann, cf. [LS73]). Let X be an equiangular set with angle α in R^r. If |X| > 2r, then 1/α is an odd integer.

We first give a generalization of the Neumann theorem (see Theorem 5.3); then we employ the techniques on saturated equiangular sets from [LY18] to determine the maximum size of equiangular sets of ranks 8, 9, and 10.
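Theorem 1.3 can be illustrated on the classical configuration of 28 equiangular lines of rank 7 with angle arccos(1/3): here 28 > 2·7, and indeed 1/α = 3 is odd. The construction below, via the vectors e_i + e_j − (1/4)·1 in R^8, is a standard one and is our own illustration, not code from this paper:

```python
import itertools
import numpy as np

# 28 unit vectors spanning the 7-dim hyperplane orthogonal to the all-ones
# vector in R^8, with pairwise inner products +-1/3.
V = []
for i, j in itertools.combinations(range(8), 2):
    v = -np.ones(8) / 4
    v[i] += 1.0
    v[j] += 1.0
    V.append(v / np.linalg.norm(v))
V = np.array(V)

G = V @ V.T               # Gram matrix of the 28 lines
A = (G - np.eye(28)) * 3  # Seidel-type matrix with entries +-1
```

One can check that G has rank 7 and that A has the eigenvalue −1/α = −3 with multiplicity 28 − 7 = 21, exactly the eigenvalue-multiplicity phenomenon exploited in Neumann-type arguments.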
The organization of the paper is as follows. In Section 2 we review the basic notions in the study of equiangular sets and recall the pillar decomposition introduced by Lemmens and Seidel [LS73]. In Section 3 we determine the maximum size of a pillar with orthogonal vectors only. In Section 4 we provide a proof of the Lemmens-Seidel conjecture when the base size K = 3 or 5, and also give a new upper bound for K = 4. In Section 5 we discuss the maximum size of equiangular sets of prescribed rank. We close the paper with some discussion and propose two conjectures based on our computations.
Prerequisites
Throughout this paper, x̂ denotes the unit vector in the same direction as a non-zero vector x in a Euclidean space. We start with some basic definitions for equiangular sets. Let X be an equiangular set with angle α in R^r. There are a few mathematical objects that can be associated to X.
Definition 2.1. Let X = {x_1, . . . , x_s} ⊂ R^r be a finite set of vectors. The Gram matrix of X, denoted by G(X) or G(x_1, . . . , x_s), is the matrix of mutual inner products of x_1, . . . , x_s; that is, G(X)_{ij} = ⟨x_i, x_j⟩. When X is equiangular with angle α, its Gram matrix G(X) is symmetric and positive semidefinite, with entries 1 along its diagonal and ±α elsewhere. The rank of G(X) is the dimension of the span of the vectors in X; X is linearly independent if and only if G(X) is of full rank (or equivalently, positive definite).
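As a quick illustration of Definition 2.1 (and of the matrix (1 + α)I − αJ that appears in the base-size discussion below), the following numerically checks that K = 6 unit vectors with pairwise inner products −α = −1/5 have a positive semidefinite Gram matrix of rank 5, i.e., such a 6-clique lives in a 5-dimensional space. This is our own sanity check, not a computation from the paper:

```python
import numpy as np

alpha, K = 1 / 5, 6
# Gram matrix of K unit vectors with all pairwise inner products -alpha
G = (1 + alpha) * np.eye(K) - alpha * np.ones((K, K))
eigs = np.linalg.eigvalsh(G)

is_psd = bool(eigs.min() > -1e-9)
rank = int(np.sum(eigs > 1e-9))
```

The eigenvalues are 1 + α (multiplicity K − 1) and 1 − (K − 1)α (multiplicity 1), so the rank drops to K − 1 exactly when α = 1/(K − 1), as here.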
Definition 2.2. For an equiangular set X = {x_1, . . . , x_s} with angle α, the Seidel graph of X is the simple graph S(X) whose vertex set is X, and two vertices x_i and x_j of S(X) are adjacent if and only if ⟨x_i, x_j⟩ = −α.
Since we are interested in equiangular lines in R^r, choices need to be made between the two unit vectors that span the same line. However, these choices affect the signs of the mutual inner products. If two sets of vectors represent the same set of lines, they are said to be in the same switching class. The terminology comes from graph theory: when we switch a vertex v in a simple graph, the resulting graph is obtained by removing all edges incident to v and adding edges connecting v to all vertices that were not adjacent to v. We also have the freedom to relabel the vertices of the graph. All these actions lead to the following characterization of switching equivalence of two Gram matrices: G(X) and G(Y) represent the same set of lines if and only if there exist a diagonal ±1 matrix C and a permutation matrix B such that (CB)^T · G(X) · (CB) = G(Y). We would also say that G(X) is switching equivalent to G(Y), and write G(X) ≃ G(Y).
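Switching can be phrased matricially: conjugating a Gram (or Seidel) matrix by a diagonal ±1 matrix D flips the signs in the corresponding rows and columns while preserving the spectrum, hence the represented set of lines. A small hand-rolled demonstration follows (the sign pattern S is arbitrary, not a genuine equiangular configuration; this is our own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 6, 1 / 5
S = np.triu(rng.choice([-1, 1], size=(n, n)), 1)
S = S + S.T                        # symmetric +-1 pattern, zero diagonal
G = np.eye(n) + alpha * S          # candidate Gram matrix

D = np.diag([-1] + [1] * (n - 1))  # switch the first vector: x_1 -> -x_1
G_sw = D @ G @ D
```

The diagonal and the spectrum are untouched; only the signs of the first row and column flip, which is exactly switching the first vertex of the Seidel graph.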
As usual, let I_s (resp. J_s) denote the identity matrix (resp. the all-ones matrix) of size s × s; the subscript s will sometimes be dropped when the size is clear from the context. Under a suitable choice of signs, the vectors ±p_1, . . . , ±p_k from an equiangular set X will form a k-clique in its Seidel graph. Following [LS73], we will define two important notions associated to an equiangular set X (Definitions 2.5 and 2.7).
Definition 2.5. Let X be an equiangular set in R^r with angle α. The base size of X, denoted by K(X), is defined as K(X) := max{k ∈ N : there exist p_1, . . . , p_k in X such that G(p_1, . . . , p_k) ≃ (1 + α)I − αJ}.
In other words, K(X) is the maximum of the clique numbers of Seidel graphs that are switching equivalent to that of X.
Note that the clique numbers of the Seidel graphs in the switching class of X are not constant, therefore we need to take their maximum. Nevertheless, K(X) is always bounded by 1/α + 1 by Proposition 2.4. Since we are interested in large equiangular sets, we will assume that 1/α is an odd integer, thanks to Theorem 1.3. The following proposition states that the only meaningful range of the base size is 2, 3, . . . , 1/α + 1.
Proof. If two vertices in the Seidel graph S(X) are independent, then we switch one of them to form a 2-clique.
Definition 2.7. Let X be an equiangular set with angle α and base size K. A set of K vectors p_1, . . . , p_K is called a K-base of X if p_1, . . . , p_K belong to some set which is switching equivalent to X, and G(p_1, . . . , p_K) = (1 + α)I − αJ.
Let K be the base size of an equiangular set X. We will fix a K-base P = {p_1, . . . , p_K} that forms a K-clique in the Seidel graph of X. Now we introduce the pillar decomposition of X with respect to P, following [LS73]. (More details can also be found in [KT16].) For each vector x ∈ X \ P, there is a (1, −1)-vector ε(x) ∈ R^K such that (⟨x, p_1⟩, . . . , ⟨x, p_K⟩) = α · ε(x).
A vector x in X is replaced by −x if ε(x) has more positive entries than ε(−x), or if ε(x) has the same number of positive entries as ε(−x) and ⟨x, p_K⟩ = α; otherwise the vector x stays put. Let Σ(ε(x)) denote the number of positive entries of ε(x). A pillar (with respect to a K-base P) containing a vector x ∈ X \ P, denoted by x̄, is the subset of vectors x′ ∈ X \ P such that ε(x′) = ε(x); x̄ is called a (K, n) pillar when Σ(ε(x)) = n. Thus the vectors in X \ P are partitioned into several (K, n) pillars for 1 ≤ n ≤ ⌊K/2⌋. The number of different (K, n) pillars is at most (K choose n) when 1 ≤ n < K/2, and at most (1/2)(K choose K/2) when n = K/2. However, if K = 1/α + 1, then p_1, . . . , p_K form a K-simplex and ∑_{i=1}^K p_i = 0. Therefore ε(x) has the same number of positive entries as negative entries, and thus only (K, K/2) pillars can exist. The collection of all (K, n) pillars in an equiangular set X will be denoted by X(K, n).
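The sign-normalization and grouping rule above can be written out as a short routine. The sketch below operates on raw sign vectors ε(x); all names are ours, and it is only an illustration of the bookkeeping, not code from the paper:

```python
from collections import defaultdict

def pillar_decomposition(eps_vectors):
    """Group sign vectors eps(x) in {-1, +1}^K into pillars.

    Following the rule in the text: replace x by -x when eps(x) has more
    positive entries than eps(-x), or when the counts tie and the last
    entry (the sign of <x, p_K>) is +1; then group by the resulting
    canonical sign vector.
    """
    pillars = defaultdict(list)
    for e in eps_vectors:
        pos = sum(1 for s in e if s > 0)
        if pos > len(e) - pos or (2 * pos == len(e) and e[-1] > 0):
            e = tuple(-s for s in e)      # switch x -> -x
        pillars[tuple(e)].append(tuple(e))
    return dict(pillars)
```

For K = 4, the vectors (1, 1, 1, −1) and (1, −1, 1, 1) land in two different (4, 1) pillars, while (1, 1, −1, −1) and (−1, −1, 1, 1) are identified into the same (4, 2) pillar.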
The following fact will be used in many occasions.
Proposition 2.8. Let X be an equiangular set with angle α and base size K, and let P = {p_1, . . . , p_K} be a K-base. If two vectors x, y belong to the same (K, 1) pillar with respect to P, then ⟨x, y⟩ = α.
Proof. By the definition of x and y being in the same (K, 1) pillar, there are K − 1 vectors in P to which both x and y are adjacent in the Seidel graph S(X) of X. If x and y were also adjacent to each other in S(X), then x and y, together with those K − 1 vectors, would form a (K + 1)-clique in S(X), which contradicts the definition of the base size K = K(X). Hence there is no edge connecting x and y in S(X), which is equivalent to saying that ⟨x, y⟩ = α > 0.
Schur decomposition for symmetric positive semidefinite matrices
In checking a matrix being positive (semi-)definite, we use the Schur decomposition.
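The positive-semidefiniteness criterion invoked throughout Sections 3 and 4 (e.g., in deriving the matrix in (9)) is presumably the standard Schur-complement test; since its statement does not survive in the extracted text, we record the usual form here. For a symmetric block matrix with positive definite upper-left block:

```latex
G \;=\; \begin{pmatrix} A & B \\ B^{\mathsf T} & C \end{pmatrix},
\qquad A \succ 0
\quad\Longrightarrow\quad
\bigl(\, G \succeq 0 \iff C - B^{\mathsf T} A^{-1} B \succeq 0 \,\bigr).
```

Taking A to be the within-pillar block, C the 4 × 4 block, and B = V recovers the matrix M of (9).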
The Lemmens-Seidel conjecture
Throughout this section we assume that the common angle is α = 1/5. Let us first recall a theorem in [LS73].
Theorem ([LS73], Theorem 5.7). Any set of unit vectors with inner product ±1/5 in R^r which contains 6 unit vectors with inner product −1/5 has maximum cardinality 276 for 23 ≤ r ≤ 185, and ⌊(r − 5)/2⌋ + r + 1 for r ≥ 185.
This theorem corresponds to the case where the common angle is α = 1/5 and the base size is K = 6. Lemmens and Seidel concluded Section 5 of [LS73] with the following remark, which we quote here: It would be interesting to know whether Theorem 5.7 holds true without the requirement of the existence of 6 unit vectors with inner product −1/5. . . . The authors have obtained only partial results in this direction. In fact, the cases where [the base size K] = 2, 3, 5 have been proved, but the case [K = 4] remains unsettled. Yet, there is enough evidence to support the following conjecture. . . .
Let x̄ and ū be two nonempty (3, 1) pillars. Then the Gram matrix of the associated unit vectors ĉ_i and d̂_i has the following form:
G = [ (9/10)I_n + (1/10)J_n , V ; V^T , (9/10)I_4 + (1/10)J_4 ],
where V = (v_1, v_2, v_3, v_4) and v_1, . . . , v_4 are column vectors whose entries are 1/4 or −1/5. Since G needs to be positive semidefinite, by Theorem 3.1 we see that
(9) M := (9/10)I_4 + (1/10)J_4 − V^T ((9/10)I_n + (1/10)J_n)^{-1} V ⪰ 0.
The following setup is used to facilitate the computation. Consider the Seidel graph S′ generated by the vectors in x̄ ∪ ū. By Proposition 2.8, S′ is a bipartite graph, because every edge must connect a vertex in x̄ to a vertex in ū. Let us classify the vectors in x̄ by how they are connected to the vectors u_1, . . . , u_4 in ū. Let B_4 be the set of binary strings of length 4, and let B_{4,i} denote the subset of B_4 consisting of those binary strings with exactly i ones; for each B ∈ B_4, let t_B denote the number of vectors in x̄ whose adjacency pattern to (u_1, . . . , u_4) is B. In total there are 2^4 = 16 variables t_B, B ∈ B_4, of non-negative integral values. Obviously n = ∑_{B∈B_4} t_B, which is the total number of vectors in x̄, and ∑_{B∈B_{4,i}} t_B is the number of vertices of degree i in x̄, for i = 0, 1, 2, 3, 4.
Since ((9/10)I_n + (1/10)J_n)^{-1} = (10/9)I_n − (10/(9(9+n)))J_n, we use this information to expand the left-hand side of (9) as
(10) M = (9/10)I_4 + (1/10)J_4 − (10/9)V^T V + (10/(9(9+n))) V^T J_n V,
where the entries m_ij can be expressed in terms of the variables t_B. Recall that we want to maximize the sum n = ∑_{B∈B_4} t_B subject to the conditions t_B ∈ Z, t_B ≥ 0 for all B ∈ B_4, and M ⪰ 0. Notice that when we set some of the variables t_B to be zero, we are focusing on a particular subset of vectors in the pillar x̄. We argue that each of the variables t_B has an upper bound as follows:
• Set t_0000 = n and t_B = 0 for all B ≠ 0000. Then M = (9/10)I_4 + (1/10 − 5n/(8(9+n)))J_4.
Solving this inequality for n, we get −9 ≤ n ≤ 39/4. Since n only assumes non-negative integral values, we see that 0 ≤ n ≤ 9; this is the range for t_0000.
• Set t_1111 = n and t_B = 0 for all B ≠ 1111. Then M = (9/10)I_4 + (1/10 − 2n/(5(9+n)))J_4. Hence M is positive semidefinite if and only if 0 ≤ n ≤ 39; this is the range for t_1111.
Up to this point, we find that there are only a finite number of 16-tuples (t_B : B ∈ B_4) that make the matrix M positive semidefinite; so far there are 10 · 8^4 · 8^6 · 10^4 · 40 ≈ 2.8 × 10^15 cases to check. To further reduce the computations, we have observed the following:
(i) Let us consider the upper bounds on the number of vertices in x̄ of each of the degrees in the Seidel graph S′ (generated by x̄ ∪ ū), that is, upper bounds for ∑_{B∈B_{4,i}} t_B, i = 0, 1, 2, 3, 4. For example, when we only look for vertices of degree 1, we set t_B = 0 whenever B ∈ B_4 \ B_{4,1}. Since 0 ≤ t_B ≤ 7 for B ∈ B_{4,1}, we only need to pick out those quadruples (t_0001, t_0010, t_0100, t_1000) ∈ {0, 1, . . . , 7}^4 such that the resulting matrix M in (10) is positive semidefinite (there are only (7 + 1)^4 = 4096 cases to check). Among the quadruples which survive the test, the maximum for the sum ∑_{B∈B_{4,1}} t_B is 16, which occurs at t_B = 4 for each B ∈ B_{4,1}.
The computations for the other degrees are similar. Table 2 lists the upper bounds for t_B, B ∈ B_{4,i}, i = 0, 1, 2, 3, when the value of t_1111 is specified.
Denote the upper bound for t_B, B ∈ B_{4,i}, found in Table 2 by m_i, i = 0, 1, 2, 3. Since |B_{4,0}| = 1, |B_{4,1}| = 4, |B_{4,2}| = 6, and |B_{4,3}| = 4, an upper bound for the size of the pillar x̄ is given by M_x̄ := m_0 + 4m_1 + 6m_2 + 4m_3 + t_1111. The values of M_x̄ are also listed in Table 2. From here we conclude that the size of a (3, 1) pillar cannot exceed 54 when another (3, 1) pillar with 4 or more vectors is present.
Remark. We note here that when a (3, 1) pillar ū has only 3 vectors, it is possible for another (3, 1) pillar x̄ to have as many vectors as possible. This occurs when the inner product between any vector in x̄ and any vector in ū is −1/5. Assume that |x̄| = n. Then the columns of the cross block of the Gram matrix are all equal to the vector v = (−1/5, −1/5, . . . , −1/5) in R^n, and G has a Schur complement which is always positive definite for any n ∈ N.
Proof. The equiangular set X decomposes as a disjoint union of P = {p_1, p_2, p_3} and three (3, 1) pillars. If there are two (3, 1) pillars with four or more vectors each, then by Lemma 4.2 we have |X| ≤ |P| + 3 · 54 = 165. Otherwise there is only one big (3, 1) pillar, and the other two pillars have at most 3 vectors each. Since the vectors in a single (3, 1) pillar are linearly independent in a subspace of dimension r − 3, we see that in this case |X| = |P| + |X(3, 1)| ≤ 3 + (r − 3) + 3 + 3 = r + 6.
These inequalities finish the proof of the theorem.
Note that max{165, r + 6} is certainly less than the bound max{276, r + 1 + ⌊(r − 5)/2⌋} given in the Lemmens-Seidel conjecture, hence we have finished the proof when the base size is K(X) = 3.
Proposition 4.4. In an equiangular set X with angle 1/5 and base size K(X) = 4 in R^r, the maximum number of vectors contained in the four (4, 1) pillars is max{96, r − 1}.
Proof. If there are two (4, 1) pillars with two or more vectors each, then there are at most 24 × 4 = 96 vectors in those pillars. Otherwise, there can be one large pillar x̄ together with three other pillars, each of which contains at most one vector. In that case, since the vectors in x̄ are linearly independent in the (r − 4)-dimensional subspace Γ^⊥, the number of vectors in these (4, 1) pillars is at most (r − 4) + 3 = r − 1.
Remark. By computations similar to those of Theorem 3.2, we find that if there are two nonempty (4, 1) pillars, then another (4, 1) pillar can hold at most 25 vectors. Hence, in the case where there is only one large pillar of size r − 4 in Proposition 4.4, there can only be one other nonempty (4, 1) pillar, consisting of one vector, when r − 4 > 25, i.e., r ≥ 30.
Notice that the right-hand side of (11) will never beat the Lemmens-Seidel bound. Details will be elaborated in Section 6.
4.3. K = 5. Let X ⊂ R^r be an equiangular set with angle 1/5 and base size K = K(X) = 5. Let P = {p_1, p_2, p_3, p_4, p_5} be a 5-base in X. With respect to P, the set X \ P can be partitioned into 5 possible (5, 1) pillars and 10 possible (5, 2) pillars. By carefully analyzing those pillars, we answer the Lemmens-Seidel conjecture affirmatively for the case K = 5.
Theorem 4.6. Let X be an equiangular set with angle 1/5 and base size K(X) = 5 in R^r. (1) If there are two or more nonempty (5, 2) pillars, then |X| ≤ 272.
Proof of Lemma 4.8. Again we consider Y = P ∪ {p_6} ∪ x̄. Now x̄ becomes a (6, 3) pillar in Y. By Theorem 5.1 of [LS73], any connected component of the Seidel graph of x̄ is a subgraph of one of the graphs in Figure 2, which are the connected graphs with maximum eigenvalue 2.
Except for C_3 = K_3, we check each case to ensure that |x̄| = rank(x̄):
• Type I, C_n with n ≥ 4: This follows from the properties of circulant matrices (for example, see [Mey00]).
• Type II: Let G be the Gram matrix for a Type II graph on n vertices, and let x = (x_1, x_2, . . . , x_n) be any vector in R^n. Then the quadratic form x^T G x can be written as a sum of squares, and equality with zero holds if and only if x = 0. Therefore G is positive definite and of full rank.
• Types III, IV, V: Direct checks.
And the lemma is now proved. Back to the proof of the main theorem. If x̄ is a (5, 2) pillar and |x̄| > rank(x̄), then x̄ must contain a 3-clique by Lemma 4.8. Together with P, the set X must contain a 6-clique in its switching class, which contradicts the assumption that K(X) = 5. Therefore |x̄| = rank(x̄) = r − 5. Hence |X| = |P| + |X(5, 1)| + |X(5, 2)| ≤ 5 + 15 + (r − 5) = r + 15, and the proof is now completed.
Maximum equiangular sets of certain ranks
Besides the maximum cardinality of equiangular sets in R^r, Glazyrin and Yu considered a similar question in [GY18].
Definition 5.1. Let r be a positive integer. We define the number M*(r) to be the maximum cardinality of a set of equiangular lines of rank r.
For example, we know that the maximum size of a set of equiangular lines in R^8 is 28. However, such 28 equiangular lines in R^8 actually live in a 7-dimensional subspace by Theorem 4 in [GY18], yet M*(8) is unknown. It is well known that M*(7) = 28 and M*(23) = 276. It may seem that M*(r) is an increasing function of r, but Glazyrin and Yu [GY18] refuted this by showing M*(24) < 276 = M*(23). Moreover, not every value of M*(r) is known in the literature even for small r, for instance M*(8).
We first deal with M*(8), starting with the following result. The main technique, identifying saturated equiangular sets, can be found in the authors' previous work [LY18].
Proposition 5.2. There are at most 14 equiangular lines of angle 1/3 and rank 8.
Proof. We first construct 8 × 8 symmetric matrices whose diagonal entries are 1, and ±1/3 elsewhere. By considering their switching classes, we may assume that the entries in the first column and the first row are all 1/3, except for the top-left corner, which is 1. Since these matrices are Gram matrices of bases of R^8, they are required to be positive definite. The associated graph of such a matrix is a disjoint union of a graph on 7 vertices and one isolated vertex. By checking all 1044 such graphs (see [FS09], Example II.5), we find that only 3 graphs satisfy all the conditions listed above. For each of these 3 graphs, we collect all the unit vectors whose mutual inner products with each vector represented by the graph are ±1/3, and take these vectors as the vertices of a new graph in which two vectors are adjacent if and only if their mutual inner product is ±1/3. The clique number of the new graph plus 8 is the size of a saturated equiangular set, and we identify the maximum among these clique numbers. The saturated equiangular sets containing these three sets of 8 basis vectors consist of 8, 14, and 14 lines, respectively, from which we conclude that M_{1/3}(8) = 14.
Remark. Lemmens and Seidel showed that M_{1/3}(r) = 2r − 2 for r ≥ 8 (cf. [LS73], Theorem 4.5). The same technique as in the proof of Proposition 5.2 is applied to produce Table 3.
We point out that the technique in [LY18] is more powerful than the semidefinite programming method in [BY14]. For instance, the semidefinite programming bound on equiangular sets with angle 1/5 in R^8 is 11.2, while the technique in [LY18] obtains the bound 10. Before we proceed further, we need the following generalization of the Neumann theorem (Theorem 1.3).
Theorem 5.3 (Generalization of the Neumann Theorem). Let r > 3 be a positive integer. If there are more than 2r − 2 equiangular lines with angle α in R^r, then: • When r is odd, 1/α is either an odd integer or √(2r − 1).
• When r is even,
Proof. Let X be an equiangular set with angle α in R^r with |X| = 2r − 1. Consider the matrix A = (1/α)(G(X) − I), which is a (2r − 1) × (2r − 1) symmetric matrix with integer entries whose diagonal entries are all zero and whose off-diagonal entries are either 1 or −1. The matrix A has an eigenvalue a = −1/α with multiplicity at least r − 1, since G has the eigenvalue zero with multiplicity at least r − 1. If a is rational, then a must be an odd integer by the same argument as in the proof of the original Neumann theorem (cf. [LS73], Theorem 3.4). Otherwise a is irrational, and by a degree count a must be a zero of an irreducible quadratic polynomial over Q; let a* be its conjugate, which must also be an eigenvalue of A.
• c_1 = 1. Then (c_2, c_3) = (−3r/2 + 1, −(r + 1)). But this case is not allowed: the matrix G, being a Gram matrix, must be positive semidefinite, hence 0 is the smallest eigenvalue of G, and this implies that a = −1/α is the smallest eigenvalue of A. However, c_3 = −(r + 1), which is also an eigenvalue of A by assumption, is always smaller than a, a contradiction.
We also need the inequality (14), which is the so-called relative bound for equiangular lines.
Notice that Theorem 5.3 is universal for every dimension r. We could solve for more exact values of M*(r) by spending more time on computer calculation; however, the work would be repetitious, so we stop here.
Closing remarks
We note that the results of Theorem 3.2 and Lemma 4.2 are not optimal, in the sense that the upper bound on the cardinality of a pillar can be lowered if more vectors are present in another pillar. For instance, with angle 1/5 and base size 4, it should not be possible to have four (4, 1) pillars with 24 vectors each (this produces the number 96 in Proposition 4.4). Nevertheless, our bounds are sufficient to establish the Lemmens-Seidel bound, so we did not pursue this further. On the other hand, these bounds are valid regardless of the dimension or rank in which the equiangular set lives.
Based on our experiments, we believe that there can be only one large pillar; accordingly, we formulate the following conjecture.
Conjecture 6.1. There is a constant C, depending on the angle α and the base size K but not on the dimension or rank of the equiangular set, such that there cannot be two pillars of size at least C.
This conjecture is consistent with Sudakov's result that when the angle is fixed and different from 1/3, the upper bound for equiangular sets in R^r is at most 1.92r asymptotically (see [BDKS18]). Sudakov gave a construction of equiangular sets with angle α = 1/(2n+1) and rank r which concentrate in one pillar whose cardinality is asymptotic to (n+1)r/n, for every positive integer n (see [BDKS18], Conjecture 6.1).
The only unsolved case of the Lemmens-Seidel conjecture concerns the (4, 2) pillars. King and Tang [KT16] showed that the unit vectors within one (4, 2) pillar form a 2-distance set with inner products 1/13 and −5/13. But the semidefinite programming bound s(r, 1/13, −5/13) cannot be small. Consider the 3ℓ × 3ℓ matrix in block form whose ℓ diagonal blocks are all equal to B and whose off-diagonal blocks are all equal to (1/13)J_3. This matrix has rank 2ℓ + 1 and is positive semidefinite, so it is the Gram matrix of 3ℓ vectors of rank 2ℓ + 1. On the other hand, the base size of the equiangular set generated from this matrix is 6, for in such a pillar there are many independent 3-cliques, and two independent 3-cliques are switching equivalent to a 6-clique (by switching all three vertices in one of the 3-cliques). So we raise another conjecture, related to Theorem 5.1 of [LS73].
Conjecture 6.2. In the case where α = 1/5 and the base size is K = 4, there are only finitely many families of connected graphs S_i such that each connected component of the Seidel graph of any (4, 2) pillar in an equiangular set is a graph, or a subgraph of a graph, in some S_i.
Classification and statistics of cut-and-project sets
We define Ratner-Marklof-Strömbergsson measures. These are probability measures supported on cut-and-project sets in R^d (d > 1) which are invariant and ergodic for the action of the groups ASL_d(R) or SL_d(R). We classify the measures that can arise in terms of algebraic groups and homogeneous dynamics. Using the classification, we prove analogues of results of Siegel, Weil and Rogers about a Siegel summation formula and identities and bounds involving higher moments. We deduce results about asymptotics, with error estimates, of point-counting and patch-counting for typical cut-and-project sets.
Introduction
A cut-and-project set is a discrete subset of R^d obtained by the following construction. Fix a direct sum decomposition R^n = R^d ⊕ R^m, where the two summands in this decomposition are denoted respectively V_phys, V_int, so that R^n = V_phys ⊕ V_int, and the corresponding projections are π_phys : R^n → V_phys, π_int : R^n → V_int.
Also fix a lattice L ⊂ R^n and a window W ⊂ V_int; then the corresponding cut-and-project set Λ = Λ(L, W) is given by

Λ(L, W) := π_phys(L ∩ π_int^{-1}(W)).    (1.1)

We sometimes allow L to be a grid, i.e., the image of a lattice under a translation in R^n, and sometimes require Λ to be irreducible, a notion we define in §2. Cut-and-project sets are prototypical aperiodic sets exhibiting long-range order, and are sometimes referred to as model sets or quasicrystals. Beginning with work of Meyer [Mey70] in connection to Pisot numbers, they have been intensively studied from various points of view. See [BG13] and the references therein.
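As a minimal numerical illustration of the construction in (1.1), consider the case d = m = 1 (the paper itself takes d > 1, but the formula reads the same): take L to be the lattice {(a + b√2, a − b√2) : a, b ∈ Z}, i.e., the geometric embedding of Z[√2] in R², let π_phys, π_int be the two coordinate projections, and let W = [0, 1). The sketch below is hypothetical illustrative code (all names are ours, not the paper's); it lists the resulting model set, whose gaps turn out to take only the two values 1 + √2 and 2 + √2.

```python
import math

SQRT2 = math.sqrt(2)

def model_set(N, window=(0.0, 1.0)):
    """Cut-and-project set of (1.1) for L = {(a + b*sqrt(2), a - b*sqrt(2))}:
    keep the physical coordinate a + b*sqrt(2) of every lattice point whose
    internal coordinate a - b*sqrt(2) lands in the window W."""
    lo, hi = window
    pts = []
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            phys, internal = a + b * SQRT2, a - b * SQRT2
            # the "cut": accept only points projecting into the window,
            # and restrict to a box where the enumeration is complete
            if lo <= internal < hi and abs(phys) <= N:
                pts.append(phys)
    return sorted(pts)

pts = model_set(50)
gaps = sorted({round(y - x, 6) for x, y in zip(pts, pts[1:])})
```

The density of the computed set is close to vol(W)/covol(L) = 1/(2√2), in line with the density formula for irreducible cut-and-project sets quoted later in the introduction.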
Given a cut-and-project set, a natural operation is to take the closure (with respect to a natural topology) of its orbit under translations. This yields a dynamical system for the translation group and has been studied by many authors under different names. In recent years several investigators have become interested in the orbit-closures under the group SL_d(R) (respectively ASL_d(R)), which is the group of orientation- and volume-preserving linear (resp., affine) transformations of R^d. In particular, in the important paper [MS14], motivated by problems in mathematical physics, Marklof and Strömbergsson introduced a class of natural probability measures on these orbit-closures. The goal of this paper is to classify and analyze such measures, and derive consequences for the statistics and large scale geometry of cut-and-project sets.
Classification of Ratner-Marklof-Strömbergsson measures.
We say that a cut-and-project set is irreducible if it arises from the above construction, where the data satisfies the assumptions (D), (I) and (Reg) given in §2.1. Informally speaking, (D) and (I) imply that the set cannot be presented as a finite union of sets whose construction involves smaller groups in the cut-and-project construction, and (Reg) is a regularity assumption on the window set W. We denote by C(R^d) the space of closed subsets of R^d, equipped with the Chabauty-Fell topology. This is a compact metric topology whose definition is recalled in §2.2, and which is also referred to in the quasicrystals literature as the local rubber topology or the natural topology. Since the groups ASL_d(R) and SL_d(R) act on R^d, they also act on C(R^d). We say that a Borel probability measure µ on C(R^d) is a Ratner-Marklof-Strömbergsson measure, or RMS measure for short, if it is invariant and ergodic under SL_d(R) and gives full measure to the set of irreducible cut-and-project sets. We call it affine if it is also invariant under ASL_d(R), and linear otherwise (i.e., if it is invariant under SL_d(R) but not under ASL_d(R)).
A construction of RMS measures was given in [MS14], as follows. Let Y_n denote the space of grids of covolume one in R^n, equipped with the Chabauty-Fell topology, or equivalently with the topology it inherits from its identification with the homogeneous space ASL_n(R)/ASL_n(Z). Similarly, let X_n denote the space of lattices of covolume one in R^n, which is identified with the homogeneous space SL_n(R)/SL_n(Z). Fix the data d, m, V_phys ≅ R^d, V_int ≅ R^m, π_phys, π_int, as well as a set W ⊂ V_int, and choose L randomly according to a probability measure μ̄ on Y_n. This data determines a cut-and-project set Λ, which is random since L is. The resulting probability measure µ on cut-and-project sets can thus be written as the pushforward of μ̄ under the map L ↦ Λ(L, W), and is easily seen to be invariant and ergodic under SL_d(R) or ASL_d(R) if the same is true for μ̄. One natural choice for μ̄ is the so-called Haar-Siegel measure, which is the unique Borel probability measure invariant under the group ASL_n(R). Another is the Haar-Siegel measure on X_n (i.e., the unique SL_n(R)-invariant measure). It is also possible to consider other measures on Y_n which are ASL_d(R)- or SL_d(R)-invariant. As observed in [MS14], a fundamental result of Ratner [Rat91] makes it possible to give a precise description of such measures on Y_n. They correspond to certain algebraic groups which are subgroups of ASL_n(R) and contain ASL_d(R) (or SL_d(R)).
Our first result is a classification of such measures. We refer to §2 and §3 for more precise statements, and for definitions of the terminology.
Theorem 1.1. Let µ be an RMS measure on C(R^d). Then, up to rescaling, there are fixed m and W ⊂ R^m such that µ is the pushforward, via the map Y_n → C(R^d), L ↦ Λ(L, W), of a measure μ̄ on Y_n, where n = d + m, W satisfies (Reg), and the measure μ̄ is supported on a closed orbit HL_1 ⊂ Y_n for a connected real algebraic group H ⊂ ASL_n(R) and L_1 ∈ Y_n. There is an integer k ≥ d, a real number field K and a K-algebraic group G, such that the Levi subgroup of H arises via restriction of scalars from G and K, and one of the following holds for G:
‚ G = SL_k (as a K-group) and n = k·deg(K/Q).
‚ G = Sp_{2k} (as a K-group), and d = 2, n = 2k·deg(K/Q).
Furthermore, in the linear (resp. affine) case µ is invariant under none of (resp., all of) the translations by nonzero elements of V_phys.
Here the group Sp_{2k} is the group preserving the standard symplectic form in 2k variables; we caution the reader that this group is sometimes denoted by Sp_k in the literature. As we will see in Proposition 3.3, any choice of K and G satisfying the description in Theorem 1.1 gives rise to an affine and a linear RMS measure. We note that the vertex sets of the famous Ammann-Beenker and Penrose tilings, which are well-known to have representations as cut-and-project constructions, are associated with the real quadratic fields K = Q(√2) and K = Q(√5), respectively, with d = 2 and G = SL_2; see also §5.
Theorem 1.1 is actually a combination of two separate results. The first extends work of Marklof and Strömbergsson [MS14]. They introduced the pushforward μ̄ ↦ µ described above, where μ̄ is a homogeneous measure on Y_n, and noted that the measures μ̄ could be classified using Ratner's work. Our contribution in this regard (see Theorem 3.1) is to give a full list of the measures μ̄ which can arise. The second result, contained in our Theorem 4.1, is that this construction is the only way to obtain RMS measures according to our definition (which is given in terms of V_phys rather than Y_n).
1.2. Formulae of Siegel-Weil and Rogers. In the geometry of numbers, computations with the Haar-Siegel probability measure on X_n are greatly simplified by the Siegel summation formula [Sie45], according to which for f ∈ C_c(R^n),

∫_{X_n} f̂(L) dm(L) = ∫_{R^n} f(x) dvol(x), where f̂(L) = Σ_{v ∈ L∖{0}} f(v).
(1.2) Here m is the Haar-Siegel probability measure on X_n, and vol is the Lebesgue measure on R^n. The analogous formula for RMS measures was proved in [MS14]. Namely, suppose µ is an RMS measure, and for each Λ ∈ supp µ and each f ∈ C_c(R^d), set

f̂(Λ) = Σ_{x ∈ Λ} f(x).    (1.3)

We will refer to f̂ as the Siegel-Veech transform of f. Then it is shown in [MS14, MS20] that, for an explicitly computable constant c > 0, for any f ∈ C_c(R^d) one has

∫ f̂(Λ) dµ(Λ) = c ∫_{R^d} f(x) dvol(x).
(1.4) A first step in the proof of (1.4) is to show that f̂ is integrable, i.e., belongs to L^1(µ). As a corollary of Theorem 1.1, and using reduction theory for lattices in algebraic groups, we strengthen this and obtain the precise integrability exponent of the Siegel-Veech transform, as follows:

Theorem 1.2. Let µ be an RMS measure, let G and K be as in Theorem 1.1, let r := rank_K(G) denote the K-rank of G, and define

q_µ := r + 1 if µ is linear, and q_µ := r + 2 if µ is affine.    (1.5)

Then for any f ∈ C_c(R^d) and any p < q_µ we have f̂ ∈ L^p(µ). Moreover, if the window W contains a neighborhood of the origin in V_int, there are f ∈ C_c(R^d) for which f̂ ∉ L^{q_µ}(µ).
The proof involves integrating some characters over a Siegel set for a homogeneous subspace of X_n. The special case in which K = Q, G = SL_k and the measure µ is linear was carried out in [EMM98, Lemma 3.10]. Note that q_µ ≥ 2 in all cases. (1.6) We will say that the RMS measure µ is of higher rank when q_µ ≥ 3; in light of the above this happens unless d = 2, G = SL_2, and µ is linear. It follows immediately from Theorem 1.2 that f̂ ∈ L^1(µ), and in the higher-rank case, that f̂ ∈ L^2(µ). The proof of (1.4) given in [MS14] follows a strategy of Veech [Vee98], and relies on a difficult result of Shah [Sha96]. Following Weil [Wei82], we will reprove the result with a more elementary argument. Combined with Theorem 1.2, the argument gives a strengthening of (1.4).
Given p ∈ N, write ⊕_{1}^{p} R^d = R^{dp}, and for a compactly supported function f on R^{dp}, define f̂(Λ) = Σ_{x_1, …, x_p ∈ Λ} f(x_1, …, x_p).

Theorem 1.3. Let µ be an RMS measure, and suppose p < q_µ, where q_µ is as in (1.5). Then there is a countable collection {τ_e : e ∈ E} of Borel measures on R^{dp} such that τ := Σ τ_e is locally finite, and for every f ∈ L^1(τ) we have

∫ f̂ dµ = ∫_{R^{dp}} f dτ < ∞.
The measures τ_e are H-c&p-algebraic, for the group H appearing in Theorem 3.1 (see Definition 7.3).
This result is inspired by several results of Rogers for lattices, see e.g. [Rog55, Thm. 4]. Loosely speaking, c&p-algebraic measures are images of algebraically defined measures on R^{np} under a natural map associated with the cut-and-project construction.
Theorems 1.2 and 1.3 will be deduced from their more general counterparts, Theorems 6.2 and 7.1, which deal with the homogeneous subspace HL_1 ⊂ Y_n arising in Theorem 1.1.
1.3. Rogers-type bound on the second moment. A fundamental problem in the geometry of numbers is to control the higher moments of random variables associated with the Haar-Siegel measure on the space X_n. In particular, regarding the second moment, the following important estimate was proved in [Rog55, Rog56, Sch60]: for the Haar-Siegel measure m on X_n, n ≥ 3, there is a constant C > 0 such that for any function f ∈ C_c(R^n) taking values in [0, 1] we have

∫_{X_n} f̂^2 dm ≤ (∫_{X_n} f̂ dm)^2 + C ∫_{X_n} f̂ dm,

where f̂ is as in (1.2). We will prove an analogous result for RMS measures of higher rank.
Theorem 1.4. Let µ be an RMS measure of higher rank. For p = 2 let τ be the measure as in Theorem 1.3. In the notation of Theorem 1.1, assume that

G = SL_k, or µ is affine.    (1.8)

Then there is C > 0 such that for any Borel function f : R^d → [0, 1] belonging to L^1(τ) we have

∫ f̂^2 dµ ≤ (∫ f̂ dµ)^2 + C ∫ f̂ dµ.    (1.9)

The case in which (1.8) fails, that is, µ is linear and G = Sp_{2k}, and in which in addition K = Q, is treated in [KY18], where a similar bound is obtained. The symplectic case with K a proper field extension of Q is more involved, and we hope to investigate it further in future work.
There have been several recent papers proving an estimate like (1.9) for homogeneous measures associated with various algebraic groups. See [KS19] and references therein. The alert reader will have noted that, even though the measure µ is the pushforward of a measure supported on a homogeneous space HL_1, we prove the bound (1.9) for functions defined on C(R^d) rather than on HL_1. Indeed, while we expect such a stronger result to be true, it requires a more careful analysis than the one needed for our application.
1.4. The Schmidt theorem for cut-and-project sets, and patch-counting. It is well-known that every irreducible cut-and-project set Λ has a density

D(Λ) := lim_{T→∞} #(Λ ∩ B(0, T)) / vol(B(0, T)) = vol(W) / covol(L),    (1.10)

where Λ = Λ(L, W), vol(W) is the volume of W, and covol(L) is the covolume of L (for two proofs, which are valid for a larger class of nice sets in place of B(0, T), see [Moo02] and [MS14, §3], and see references therein). In particular, the limit exists and is positive. Following Schmidt [Sch60], we would like to strengthen this result and allow counting in even more general shapes, and with a bound on the rate of convergence. We say that a collection of Borel subsets {Ω_T :

Theorem 1.5. Let µ be an RMS measure of higher rank, such that (1.8) holds. Then for every ε > 0, for every unbounded ordered family {Ω_T}, for µ-a.e. cut-and-project set Λ, the error estimate (1.11) holds.

This result is a direct analogue of Schmidt's result for lattices, and its proof follows [Sch60]. In the special case Ω_T = B(0, T), we obtain an estimate for the rate of convergence in (1.10), valid for µ-a.e. cut-and-project set. For related work see [HKW14]. Note that for B(0, T), and for lattices, Götze [Göt98] has conjectured that an error estimate O(vol(B(0, T))^{1/2 − 1/(2d) + ε}) should hold. Even for Ω_T = B(0, T), one cannot expect (1.11) to hold for all cut-and-project sets; in fact, a Baire category argument as in [HKW14, §9] can be used to show that for any error function E(T) with E(T) = o(T^d) there are cut-and-project sets for which, along a subsequence T_n → ∞,

|#(B(0, T_n) ∩ Λ) − D(Λ)·vol(B(0, T_n))| ≥ E(T_n).
Thus, it is an interesting open problem to obtain error estimates like (1.11) for explicit cut-and-project sets. Note that for explicit cut-and-project sets which can also be described via substitution tilings, such as the vertex set of a Penrose tiling, there has been a lot of work in this direction; see [Sol14] and references therein.
We now discuss patch counting, which is a refinement that makes sense for cut-and-project sets but not for lattices. For any discrete set Λ ⊂ R^d, any point x ∈ Λ and any R > 0, we refer to the set

P_{Λ,R}(x) := B(0, R) ∩ (Λ − x)

as the R-patch of Λ at x. Two points x_1, x_2 ∈ Λ are said to be R-patch equivalent if P_{Λ,R}(x_1) = P_{Λ,R}(x_2). It is well-known that any cut-and-project set Λ is of finite local complexity, which means that for any R > 0, #{P_{Λ,R}(x) : x ∈ Λ} < ∞. Furthermore, it is known that whenever P_0 = P_{Λ,R}(x_0) for some x_0 ∈ Λ and some R > 0, the density or absolute frequency

D(Λ, P_0) := lim_{T→∞} #{x ∈ B(0, T) ∩ Λ : P_{Λ,R}(x) = P_0} / vol(B(0, T))    (1.12)

exists; in fact, the set in the numerator of (1.12) is itself a cut-and-project set, see [BG13, Cor. 7.3]. Our analysis makes it possible to obtain an analogue of Theorem 1.5 for counting patches, namely:

Theorem 1.6. Let µ be an RMS measure of higher rank, for which (1.8) holds. For any δ > 0, set θ_0 := δ/(m + 2δ), where m = dim V_int. Suppose the window W ⊂ V_int in the cut-and-project construction satisfies dim_B(∂W) ≤ m − δ, where dim_B denotes the upper box dimension (see §10). Then for every unbounded ordered family {Ω_T} in R^d, for µ-a.e. Λ, for any patch P_0 = P_{Λ,R}(x_0), and any θ ∈ (0, θ_0), we have

#{x ∈ Ω_T ∩ Λ : P_{Λ,R}(x) = P_0} = D(Λ, P_0) vol(Ω_T) + O(vol(Ω_T)^{1−θ}).    (1.13)

For additional results on effective error terms for patch-counting in cut-and-project sets, see [HJKW19].

2. Basics

2.1. Cut-and-project sets. In the literature, different authors impose slightly different assumptions on the data in the cut-and-project construction. For related discussions, see [BG13, Moo97, MS14]. Here are the assumptions which will be relevant in this paper:
(D) π_int(L) is dense in V_int.
(I) π_phys|_L is injective.
(Reg) The window W is Borel measurable, bounded, has non-empty interior, and its boundary ∂W has zero measure with respect to Lebesgue measure on V_int.
We will say that the construction is irreducible if (D), (I) and (Reg) hold.
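As a concrete check of these assumptions, the one-dimensional model set built from the geometric embedding of Z[√2] with window W = [0, 1) is irreducible: π_int(L) = Z[√2] is dense in R, so (D) holds, π_phys is injective on L, so (I) holds, and ∂W = {0, 1} has zero Lebesgue measure, so (Reg) holds. The hypothetical sketch below (our own illustrative code, not from the paper) also exhibits the finite local complexity property discussed in §1.4: with R = 4, the R-patches of this set take only three distinct values.

```python
import math

SQRT2 = math.sqrt(2)

# Irreducible model set Lambda = {a + b*sqrt(2) : 0 <= a - b*sqrt(2) < 1},
# restricted to |x| <= 60 (the enumeration below is complete there).
pts = sorted(
    a + b * SQRT2
    for a in range(-60, 61)
    for b in range(-60, 61)
    if 0.0 <= a - b * SQRT2 < 1.0 and abs(a + b * SQRT2) <= 60
)

def patch(x, R, points):
    """R-patch P_{Lambda,R}(x) = B(0,R) ∩ (Lambda - x), as a rounded tuple."""
    return tuple(round(p - x, 6) for p in points if abs(p - x) <= R)

# Sample patches well inside the computed region to avoid edge truncation.
patches = {patch(x, 4.0, pts) for x in pts if abs(x) <= 50}
```

Every patch contains the offset 0 (the point itself), and the three patch types correspond to the possible pairs of gaps, 1 + √2 or 2 + √2, on either side of a point.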
In the literature, a more general cut-and-project scheme is discussed, in which the groups V_phys ≅ R^d, V_int ≅ R^m may be replaced with general locally compact abelian groups. Note that if (D) fails, we can replace V_int with the closure of π_int(L), which is a proper subgroup of V_int, while if (I) fails, we can replace V_int with V_int/(L ∩ ker π_phys). In both cases one can obtain the same set using smaller groups. Note that when (D) fails, the closure of π_int(L) might be disconnected, and in that case, using (Reg), we see that only finitely many of its connected components will intersect W, and Λ(L, W) will have a description as a finite union of cut-and-project sets with an internal space of smaller dimension.
Regarding the regularity assumptions on W, note that if no regularity assumptions are imposed, one can let Λ be an arbitrary subset of π_phys(L) by letting W be equal to π_int(L ∩ π_phys^{-1}(Λ)). Also, the assumption that W is bounded (respectively, has nonempty interior) implies that Λ is uniformly discrete (respectively, relatively dense).
Finally, note that it is not W that plays a role in (1.1), but rather π_int^{-1}(W). In particular, if convenient, one can replace the space V_int with any space V'_int which is complementary to V_phys, and, with the obvious notation, replace W with W' := π'_int(π_int^{-1}(W)). Put otherwise, it would have been more natural to think of W as a subset of the quotient space R^n/V_phys. We refrain from doing so to avoid conflict with established conventions.
2.2. Chabauty-Fell topology. Let C(R^d) denote the collection of all closed subsets of R^d. Equip C(R^d) with the topology induced by the following metric, which we will call the Chabauty-Fell metric: for Y_0, Y_1 ∈ C(R^d), d(Y_0, Y_1) is the infimum of all ε ∈ (0, 1) for which, for both i = 0, 1,

Y_i ∩ B(0, ε^{-1}) ⊂ Y_{1−i} + B(0, ε),

and d(Y_0, Y_1) = 1 if there is no such ε. It is known that with this metric, C(R^d) is a compact metric space. In this paper, closures of collections in C(R^d) and continuity of maps with image in C(R^d) will always refer to this topology, and all measures will be regular measures on the Borel σ-algebra induced by this topology. We note that in the quasicrystals literature this topology is often referred to as the local rubber topology or the natural topology.
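The Chabauty-Fell metric is easy to evaluate for concrete discrete sets. The following sketch (hypothetical illustrative code, for d = 1 and finite truncations of the sets) checks the defining condition for a given ε and locates the infimum by bisection, using the fact that the condition, once satisfied for some ε, is satisfied for every larger ε; for two shifted copies of Z it recovers the shift as the distance.

```python
def cf_condition(Y0, Y1, eps):
    """Both inclusions Y_i ∩ B(0, 1/eps) ⊂ Y_{1-i} + B(0, eps), for
    1-dimensional point sets given as lists of floats."""
    for A, B in ((Y0, Y1), (Y1, Y0)):
        for y in A:
            if abs(y) <= 1.0 / eps and not any(abs(y - z) < eps for z in B):
                return False
    return True

def cf_distance(Y0, Y1, tol=1e-9):
    """Chabauty-Fell distance: infimum of eps in (0, 1) satisfying the
    condition above (1 if there is none), found by bisection."""
    if not cf_condition(Y0, Y1, 1.0 - tol):
        return 1.0
    lo, hi = tol, 1.0 - tol
    if cf_condition(Y0, Y1, lo):
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cf_condition(Y0, Y1, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Truncations suffice here because points outside B(0, 1/ε) never enter the condition; for genuinely unbounded sets one must truncate beyond 1/ε for the values of ε being probed.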
We note that there are many topologies on the set of closed subsets C(X) of a topological space X. The Chabauty-Fell metric was introduced by Chabauty [Cha50] for X = R^d as well as for X a locally compact second countable group, and by Fell [Fel62] for general spaces X, particularly spaces arising in functional analysis. See also [LS03], where the connection to the Hausdorff metric is elucidated via stereographic projection. Many of the different topologies in the literature coincide on C(R^d). Two notable exceptions are the Hausdorff topology, which is defined on the collection of nonempty closed subsets of X, and the weak-* topology of Borel measures on R^d, studied in [Vee98, MS19], satisfying a certain growth condition and restricted to point processes. See [Bee93] for a comprehensive discussion of topologies on C(X).
We will need the following fact, which is well-known to experts, but for which we could not find a reference (see [MS19, §5.3] for a related discussion):

Proposition 2.1. Suppose W is Borel measurable and bounded. Then the map

Ψ : Y_n → C(R^d), Ψ(L) := Λ(L, W)    (2.1)

is a Borel map, and is continuous at any L for which π_int(L) ∩ ∂W = ∅.
Proof. We first prove the second assertion. That is, we assume that π_int(L) ∩ ∂W = ∅ and suppose by contradiction that L_j → L in Y_n but Ψ(L_j) does not converge to Λ := Ψ(L). By passing to a subsequence and using the definition of the Chabauty-Fell metric on C(R^d), we can assume that there is ε > 0 such that for all j, one of the following holds:
(a) There is v ∈ Λ with ‖v‖ ≤ ε^{-1} such that for all j, Ψ(L_j) does not contain a point within distance ε of v.
where }v} ď ε´1, and v R Λ. In case (a), there is u P L such that v " π phys puq and π int puq P W . By assumption π int puq is in the interior of W . Since L j Ñ L there is u j P L j such that u j Ñ u and for large enough j, π int pu j q P W and hence v j " π phys pu j q P ΨpL j q. Clearly v j Ñ v and we have a contradiction.
In case (b), we let u_j ∈ L_j be such that v_j = π_phys(u_j) and π_int(u_j) ∈ W. Then the images of u_j under both projections π_phys, π_int are bounded sequences, and hence the sequence (u_j) is also bounded. Passing to a subsequence and using that L_j → L, we can assume u_j → u for some u ∈ L. Since π_int(u_j) ∈ W for each j, π_int(u) lies in the closure of W, and hence, by our assumption, π_int(u) belongs to the interior of W, and in particular to W. This implies that v = π_phys(u) ∈ Λ, a contradiction.
We now prove that Ψ is a Borel measurable map. For this it is enough to show that Ψ^{-1}(B) is measurable in Y_n whenever B = B(Λ, ε) is the ε-ball with respect to the Chabauty-Fell metric centered at Λ = Ψ(L) ∈ C(R^d). Let

F_1 := {x ∈ L : π_phys(x) ∈ B(0, ε^{-1}), π_int(x) ∈ W} and F_2 := Λ ∩ B(0, ε^{-1} + ε).

Then the definition of the Chabauty-Fell metric gives that L' belongs to Ψ^{-1}(B) if and only if for any u_1 ∈ F_1 there is u'_1 ∈ L' with π_int(u'_1) ∈ W and ‖π_phys(u_1) − π_phys(u'_1)‖ < ε, and additionally, for any u'_1 ∈ L' with π_int(u'_1) ∈ W and ‖π_phys(u'_1)‖ < ε^{-1} there is v ∈ F_2 with ‖π_phys(u'_1) − v‖ < ε. Since lattices are countable, F_1, F_2 are finite, and W ⊂ V_int is Borel measurable, this shows that Ψ^{-1}(B) is described by countably many measurable conditions.

We use this to obtain a useful continuity property for measures. Given a topological space X, we denote by Prob(X) the space of regular Borel probability measures on X. We equip Prob(X) with the weak-* topology. Any Borel map f : X → Y induces a map f_* : Prob(X) → Prob(Y) defined by f_*µ = µ∘f^{-1}.

Corollary 2.2. Let Ψ be as in (2.1). Then any μ̄ ∈ Prob(Y_n) for which

μ̄({L ∈ Y_n : π_int(L) ∩ ∂W ≠ ∅}) = 0    (2.2)

is a continuity point for Ψ_*. In particular, this holds if μ̄ is invariant under translations by elements of V_int ≅ R^m and ∂W has zero Lebesgue measure.
Proof. Suppose μ̄_j → μ̄ in Prob(Y_n), and let µ_j, µ denote respectively the pushforwards Ψ_*μ̄_j, Ψ_*μ̄. To establish continuity of Ψ_* we need to show µ_j → µ. Since μ̄_j → μ̄, we have ∫ g dμ̄_j → ∫ g dμ̄ for any g ∈ C_c(Y_n). By the Portmanteau theorem this also holds for any g which is bounded, compactly supported, and for which the set of discontinuity points has μ̄-measure zero. Let f be a continuous function on C(R^d) and let f̃ = f∘Ψ. Then f̃ is continuous at μ̄-a.e. point, by Proposition 2.1. The Portmanteau theorem then ensures that ∫ f̃ dμ̄_j → ∫ f̃ dμ̄; that is, µ_j → µ, as required. For the last assertion, assuming that μ̄ is invariant under translations by elements of V_int, we need to show that (2.2) is satisfied. Letting 𝟙_∂W and m_{V_int} denote respectively the indicator of ∂W and Lebesgue measure on V_int, and letting B ⊂ V_int be a measurable set of finite and positive measure, we have by Fubini that μ̄({L ∈ Y_n : π_int(L) ∩ ∂W ≠ ∅}) vanishes provided that for any L, m_{V_int}({v ∈ B : (π_int(L) + v) ∩ ∂W ≠ ∅}) = 0; and indeed, this follows immediately from the countability of L and the assumption that m_{V_int}(∂W) = 0.
2.3. Ratner's Theorems. Ratner's measure classification and orbit-closure theorems [Rat91] are fundamental results in homogeneous dynamics. We recall them here, in the special cases which will be important for us. A Borel probability measure ν on Y_n (respectively, X_n) is called homogeneous if there is x_0 in Y_n (respectively, X_n) and a closed subgroup H of ASL_n(R) (respectively, SL_n(R)) such that the H-action preserves ν, the orbit Hx_0 is closed and equal to supp ν, and H_{x_0} := {h ∈ H : hx_0 = x_0} is a lattice in H. When we want to stress the role of H we will say that ν is H-homogeneous.
Recall that ASL_n(R) (respectively, ASL_n(Z)) denotes the group of affine transformations of R^n whose derivative has determinant one (respectively, and which map the integer lattice Z^n to itself), and that Y_n is identified with ASL_n(R)/ASL_n(Z), via the map which identifies the coset represented by the affine map φ with the grid φ(Z^n). Similarly, we have an identification of X_n with SL_n(R)/SL_n(Z). We view the elements of ASL_n(R) concretely as pairs (g, v), where g ∈ SL_n(R) and v ∈ R^n determine the map x ↦ gx + v. In what follows two subgroups of ASL_n(R) play an important role, namely the groups SL_d(R) and ASL_d(R), which we will denote alternately by F, and embed concretely in ASL_n(R) in the upper left hand corner. That is, in the case F = SL_d(R), g ∈ F is identified with

( g        0_{d,m} )
( 0_{m,d}  Id_m    )    (2.3)

and in the case F = ASL_d(R), (g, v) ∈ F is identified with

( ( g        0_{d,m} )    ( v   ) )
( ( 0_{m,d}  Id_m    ) ,  ( 0_m ) ).    (2.4)

Here Id_m, 0_{k,ℓ}, 0_k denote respectively an identity matrix of size m×m, a zero matrix of size k×ℓ, and the zero vector in R^k. We will refer to the embeddings of SL_d(R) and ASL_d(R) in ASL_n(R), given by (2.3) and (2.4), as the top-left corner embeddings.
The following is a special case of Ratner's result.
Theorem 2.3 (Ratner). Let 2 ≤ d ≤ n, and let F be equal to either ASL_d(R) or SL_d(R) (with the top-left corner embedding in ASL_n(R)). Then any F-invariant ergodic measure ν on Y_n is H-homogeneous, where H is a closed connected subgroup of ASL_n(R) containing F. The closure of any orbit Fx is equal to supp ν for some homogeneous measure ν. The same conclusion holds for X_n and F = SL_d(R).
The following additional results were obtained in [Sha91, Tom00]:

Theorem 2.4 (Shah, Tomanov). Let ν, H be as in Theorem 2.3, and let x_0 = g_0 Z^n in Y_n or X_n be such that supp ν = Hx_0. Let H' be the smallest algebraic subgroup of ASL_n which is defined over Q and contains g_0^{-1} F g_0. The solvable radical of H' is equal to the unipotent radical of H', and letting H̃ := g_0 H' g_0^{-1}, H is equal to the connected component of the identity in H̃_R.
We will need a result of Shah which relies on Ratner's work (once more this is a special case of a more general result).
Theorem 2.5 ([Sha96]). Let F be equal to either ASL_d(R) or SL_d(R) as above, let {g_t} be a one-parameter diagonalizable subgroup of SL_d(R), and let U = {g ∈ F : g_{−t} g g_t → e as t → ∞} be the corresponding expanding horospherical subgroup. Let Ω ⊂ U be a relatively compact open subset of U and let m_U be the restriction of Haar measure to U, normalized so that m_U(Ω) = 1. Then for every x_0 ∈ Y_n, letting ν be the homogeneous measure such that supp ν is the closure of F x_0, we have

∫_Ω δ_{g_t u x_0} dm_U(u) → ν as t → ∞,

where δ_x is the Dirac measure at the point x and the convergence is weak-* convergence in Prob(Y_n).
2.4. Number fields, geometric embeddings, and restriction of scalars. For more details on the material in this subsection we refer the reader to [Wei82,PR94,Mor15,EW].
Let K be a number field of degree D over Q, with r real embeddings σ_1, …, σ_r and s pairs of complex conjugate embeddings, so that D = r + 2s, and choose one representative σ_{r+1}, …, σ_{r+s} from each conjugate pair. The geometric embedding of an order O ⊂ K is its image under the map x ↦ (σ_1(x), …, σ_{r+s}(x)) ∈ R^r × C^s ≅ R^D. It is a lattice in R^D ≅ R^r × C^s. Note that the geometric embedding depends on a choice of ordering of the field embeddings, and on representatives of each pair of complex conjugate embeddings. Thus, when we speak of 'the' geometric embedding we will consider this data as fixed.
An algebraic group G defined over K (or K-algebraic group) is a variety defined over K such that the multiplication and inversion maps G × G → G, G → G are K-morphisms. A K-homomorphism of algebraic groups is a group homomorphism which is a K-morphism of algebraic varieties. We will work only with linear algebraic groups, which means that they are affine varieties, i.e., for some N they are the subset of affine space A^N satisfying a system of polynomial equations in N variables. We will omit the word 'linear' in the rest of the paper. A typical example of a K-algebraic group is a Zariski closed matrix group, that is, a subgroup of the matrix group SL_m(C) for some m described by polynomial equations in the matrix entries, with coefficients in K. If G_i are K-algebraic groups realized as subgroups of SL_{m_i}(C) for i = 1, 2, and φ : G_1 → G_2 is a K-homomorphism, then there is a map φ̃ : SL_{m_1}(C) → SL_{m_2}(C) which is polynomial in the matrix entries, with coefficients in K, such that φ̃|_{G_1} = φ. For any field L ⊂ C containing K, we will denote by G_L the collection of L-points of G. It is a subgroup of SL_m(L), if G is realized as a subgroup of SL_m(C).
We will do the same for the rings L = Z or L = O. In this case the group G_L depends on the concrete realization of G as a matrix group, but the commensurability class of G_L is independent of choices (recall that two subgroups Γ_1, Γ_2 of some ambient group G are commensurable if [Γ_i : Γ_1 ∩ Γ_2] < ∞ for i = 1, 2). By a real algebraic group we will mean a subgroup of finite index in G_R for some K-algebraic group G, where K ⊂ R.
The restriction of scalars Res_{K/Q} is a functor from the category of K-algebraic groups to Q-algebraic groups. Given an algebraic group G defined over K, there is an algebraic group H = Res_{K/Q}(G) defined over Q, such that H_Q is naturally identified with G_K. For any K-homomorphism of K-algebraic groups φ : G_1 → G_2 we have a Q-homomorphism Res_{K/Q}(φ) : Res_{K/Q}(G_1) → Res_{K/Q}(G_2). Given a matrix representation of G there is a corresponding matrix representation of Res_{K/Q}(G), defined as follows. We can realize K (as a ring) as a subalgebra of the Q-algebra of D×D matrices with entries in Q, and this leads to a corresponding identification of SL_m(K) with a subgroup of SL_{mD}(Q). A different choice of basis will produce a group that differs by an SL_{mD}(Q)-conjugate. Now suppose G ⊂ SL_m(C) is the solution set of polynomial equations P_1, …, P_ℓ in the matrix entries, with coefficients in K. Let P̃_1, …, P̃_ℓ be the matrix-valued polynomials where each K-coefficient is replaced by its Mat_{D×D}(Q) representative, and each variable (previously a matrix coefficient of SL_m(C)) is a Mat_{D×D}(C)-block of SL_{mD}(C). These polynomials, together with the (linear) polynomials that ensure that each D×D block is an element of the Q-algebra K, have coefficients in Q, and Res_{K/Q}(G) is their solution set.
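The realization of K inside D×D rational matrices is concrete for K = Q(√2), D = 2: in the Q-basis {1, √2}, multiplication by a + b√2 has matrix ((a, 2b), (b, a)), and entry-wise replacement of K-entries by such blocks is the identification of SL_m(K) with a subgroup of SL_{mD}(Q) described above. A small hypothetical sketch (illustrative code, not from the paper) verifying that this is a ring embedding whose determinant is the field norm:

```python
from fractions import Fraction

def embed(a, b):
    """Matrix of multiplication by a + b*sqrt(2) on K = Q(sqrt(2)),
    written in the Q-basis {1, sqrt(2)}."""
    a, b = Fraction(a), Fraction(b)
    return ((a, 2 * b), (b, a))

def mat_mul(M, N):
    """Product of 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def field_mul(x, y):
    """(a + b*sqrt(2))(c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)."""
    (a, b), (c, d) = x, y
    return (a * c + 2 * b * d, a * d + b * c)

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]
```

The check that embed respects multiplication is exactly the statement that the block replacement is a ring homomorphism; det(embed(a, b)) = a² − 2b² is the norm N_{K/Q}(a + b√2) = σ_1(a + b√2)·σ_2(a + b√2).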
The R-points of H = Res_{K/Q}(G) can be represented concretely as

H_R ≅ σ_1 G_R × ⋯ × σ_r G_R × σ_{r+1} G_C × ⋯ × σ_{r+s} G_C,    (2.5)

where σ_j G is the algebraic group defined by applying the field embedding σ_j to the polynomials in the matrix entries, with coefficients in K, defining G. Here, for a C-algebraic group M, M_C is shorthand notation for the C-points of M, thought of as an R-group via the isomorphism C ≅ R². More explicitly, a polynomial equation involving m² complex matrix entries z_ij = a_ij + i·b_ij, where i, j ∈ {1, …, m}, is replaced with the same polynomial in the matrix algebra of 2×2 real matrices, with each appearance of z_ij replaced by a matrix A^{(ij)} ∈ Mat_{2×2}(R), and with the 2m² additional equations (A^{(ij)})_{12} = −(A^{(ij)})_{21}, (A^{(ij)})_{11} = (A^{(ij)})_{22}. Furthermore, denoting by Q̄ the algebraic closure of Q, there is a conjugation of SL_{mD}(Q̄) by an element with coefficients in the Galois closure of K, so that H(Q̄) is embedded in SL_{mD}(Q̄) in block form with r + s blocks, where each block contains one of the factors in (2.5). Similarly, for a K-morphism φ : G_1 → G_2, the restriction to the factor σ_j G_R in formula (2.5) of the Q-morphism Res_{K/Q}(φ) : Res_{K/Q}(G_1) → Res_{K/Q}(G_2) is the map φ_j obtained from φ by applying the field embedding σ_j to its coefficients. Thus, after writing both Res_{K/Q}(G_1) and Res_{K/Q}(G_2) in product form as in (2.5), we have

Res_{K/Q}(φ)(g_1, …, g_{r+s}) = (φ_1(g_1), …, φ_{r+s}(g_{r+s})).    (2.6)

We now note a connection between restriction of scalars, geometric embeddings of lattices, and the action on X_n. Suppose that O = O_K, ∆ is an order in O, and let L be the geometric embedding of ∆ in R^D. For m ∈ N set n = Dm and let L' := c(L ⊕ ⋯ ⊕ L) ⊂ R^n (m copies of L), where we choose the dilation factor c so that L' ∈ X_n, and we choose the ordering of the indices appropriately. Now suppose G is an algebraic K-group without K-characters, φ : G → SL_m is a K-morphism, and H := Res_{K/Q}(G).
Since ϕ is a K-morphism, there is a finite-index subgroup of G O whose image under ϕ is contained in SL m pOq, and hence preserves O m . This implies that a finite index subgroup of H Z preserves L 1 . Since H Z is a lattice in H def " H R (see [Bor19,§13]), we find that HL 1 is a closed orbit in X n which is the support of an H-homogeneous measure.
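The identification C ≅ R^2 used above, in which a complex entry z = a + ib becomes a 2×2 real block satisfying (A)_11 = (A)_22 and (A)_12 = −(A)_21, can be sketched numerically. The following is our own illustration (with one of the two sign conventions compatible with those equations), not taken from the text:

```python
import numpy as np

def realify(Z):
    """Replace each complex entry z = a + ib of Z by the 2x2 real block
    [[a, -b], [b, a]] (so A_11 = A_22 and A_12 = -A_21)."""
    m, _ = Z.shape
    R = np.zeros((2 * m, 2 * m))
    for i in range(m):
        for j in range(m):
            a, b = Z[i, j].real, Z[i, j].imag
            R[2*i:2*i+2, 2*j:2*j+2] = [[a, -b], [b, a]]
    return R

rng = np.random.default_rng(1)
Z1 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Z2 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# realify is an R-algebra homomorphism: it respects matrix products,
assert np.allclose(realify(Z1) @ realify(Z2), realify(Z1 @ Z2))
# and det(realify(Z)) = |det Z|^2, so SL_m(C) lands inside SL_{2m}(R).
assert np.isclose(np.linalg.det(realify(Z1)), abs(np.linalg.det(Z1)) ** 2)
```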
Classification of invariant measures
Recall from the introduction that an affine (respectively, linear) RMS measure µ is a probability measure on C(R^d) which gives full measure to the collection of all irreducible cut-and-project sets, and is invariant and ergodic under F, where

F = ASL_d(R) (respectively, F = SL_d(R))    (3.1)

is the stabilizer group of µ. In this section we will give some more background on RMS measures, and two assertions (Theorems 3.1 and 4.1) which together imply Theorem 1.1. The careful reader will have noticed that we gave here a seemingly weaker definition of an affine RMS measure compared to the introduction, by requiring it to be ergodic under ASL_d(R) instead of SL_d(R). However, these two definitions are equivalent by the Howe-Moore ergodicity theorem (see [EW11]).
3.1. RMS measures - background and basic strategy. In order to motivate the definition of an RMS measure, we recall some crucial observations of [MS14]. Let F be as in (3.1). Let R^n = V_phys ⊕ V_int, π_phys, π_int, L, W be the data involved in a cut-and-project construction. The observations of [MS14] consist of the following:
‚ From the fact that π_phys intertwines the action of F on R^n (via the top-left corner embedding in ASL_n(R)) and on R^d, for the map Ψ defined in (2.1), one obtains the equivariance property Ψ(gL) = gΨ(L) for all g ∈ F; in other words, gΛ(L, W) = Λ(gL, W).
‚ In particular, if we fix the data R^n = V_phys ⊕ V_int, W, then the map Ψ_* : Prob(Y_n) → Prob(C(R^d)) considered in Corollary 2.2 maps F-invariant measures to F-invariant measures.
‚ Due to Ratner's work described in §2.3, ergodic F-invariant measures on Y_n can be described in detail, in terms of certain real algebraic subgroups of ASL_n(R).
‚ Theorem 2.5 and other results from homogeneous dynamics can then be harnessed as a powerful tool for deriving information about cut-and-project sets.
In order to analyze measures on Y_n, a basic strategy is to work first with the simpler space X_n. Let M := ASL_n(R) and Γ := ASL_n(Z). Recall that Y_n is identified with M/Γ and, under this identification, a closed orbit HL is identified with HgΓ = gH'Γ, where g ∈ M is such that L = gZ^n, and H' = g^{-1}Hg. Also let M̄ := SL_n(R) and Γ̄ := SL_n(Z).
We think of M̄ concretely as the stabilizer of the origin in the action of M on R^n. Recall also that X_n is identified with M̄/Γ̄. Let

π : M → M̄ and π̄ : Y_n → X_n

denote respectively the natural quotient map, and the induced map on the quotients (which is well-defined since π(Γ) = Γ̄). The map π is a Q-morphism, and the map π̄ is realized concretely by mapping a grid L to the underlying lattice L − L, obtained by translating L so that it has a point at the origin. It satisfies the equivariance property

π̄(gL) = π(g) π̄(L) for all g ∈ M, L ∈ Y_n.

Every fiber of π̄ is a torus and thus π̄ is a proper map. We summarize the spaces and maps we use in the following diagram:

M  --π-->  M̄
|          |
v          v
Y_n --π̄--> X_n
Extending the terminology in the introduction, a homogeneous measurē µ on Y n will be called affine if it is ASL d pRq-invariant, and linear if it is SL d pRq-invariant but not ASL d pRq-invariant. Here ASL d pRq and SL d pRq are embedded in M via the top-left corner embeddings (2.4) and (2.3).
3.2. The homogeneous measures arising from the F-action on Y_n. In this section we state a more precise version of Theorem 1.1. Suppose k_0 is a subfield of C. We say that a k_0-algebraic group H is k_0-almost simple if any normal k_0-subgroup H' satisfies dim H' = dim H or dim H' = 0. In this case we will also say that a subgroup of finite index of H_{k_0} is k_0-almost simple.
Theorem 3.1. Let µ̄ be an F-invariant ergodic measure on Y_n, and let H and L_1 denote respectively the subgroup of M and the point in Y_n involved in Theorem 2.3; i.e., µ̄ is H-invariant and supported on the closed orbit HL_1. Let g_1 ∈ M be such that L_1 = g_1 Z^n and let H' := g_1^{-1} H g_1. Assume also that L_1 satisfies conditions (D) and (I).
Then H, H' and L_1 are described as follows:
(i) In the linear case, H' is semisimple and Q-almost simple. In this case write H_1 := H'. In the affine case, we can write H' as a semidirect product H' = H_1 ⋉ R^n, where H_1 is semisimple and Q-almost simple, and R^n denotes the full group of translations of R^n.
(ii) The group H_1 in (i) is the connected component of the identity in the group of R-points of Res_{K/Q}(G), where K is a real number field and G is a K-group which is K-isomorphic to either SL_k or Sp_{2k}, for some k ≥ d. In the case G = SL_k we have n = k deg(K/Q), and there is a subspace V of R^n of dimension k containing g_1^{-1} V_phys which is H_1-invariant and such that the action of H_1 on V gives the group SL(V). The case G = Sp_{2k} only arises when d = 2, and in that case n = 2k deg(K/Q), and there is a subspace V of R^n of dimension 2k equipped with a symplectic form ω' such that V is H_1-invariant, the action of H_1 on V gives the symplectic group Sp(V, ω'), and V contains g_1^{-1} V_phys as a symplectic subspace.
The proof will involve a reduction to the space X n of lattices. We introduce some notation and give some preparatory statements.
As in §3.1, let M " ASL n pRq, Γ " ASL n pZq, Y n " M{Γ, so that the closed orbit HL 1 is identified with Hg 1 Γ " g 1 H 1 Γ. By Theorem 2.3, Γ H 1 def " H 1 X Γ is a lattice in H 1 andμ is the pushforward of the unique H 1 -invariant probability measure on H 1 {Γ H 1 under the map hΓ H 1 Þ Ñ g 1 hΓ. By Theorem 2.4, H 1 is the connected component of the identity in the group of real points of a Q-algebraic group. In particular there are at most countably many possibilities for H 1 .
We say that property (irred) holds if there is no proper Q-rational subspace of R n that is H 1 -invariant (for the linear action by matrix multiplication). Note that by Theorem 2.4, H 1 is the connected component of the identity in the group of real points of the smallest Qsubgroup of SL n containing g´1 1 SL d pRqg 1 , and thus (irred) is equivalent to requiring that there is no proper Q-rational subspace of R n that is g´1 1 SL d pRqg 1 -invariant.
We now state an analogue of Theorem 3.1 for the action on X n .
Lemma 3.2. Assume pirredq holds. Then H 1 is the connected component of the identity of the group of real points of a semisimple Qalgebraic group H, satisfying the properties listed in statement (ii) of Theorem 3.1 (for the group H 1 ).
Lemma 3.2 is the main result of this section, and its proof will be given below in §3.3 and §3.4.
Proof of Theorem 3.1 assuming Lemma 3.2. Let H̃ be the smallest Q-subgroup of ASL_n containing g_1^{-1} SL_d(R) g_1, so that by Theorem 2.4 we have H' = (H̃_R)°. Similarly, let H be the smallest Q-subgroup of SL_n containing g_1^{-1} SL_d(R) g_1. We extend π to a projection map of algebraic groups defined over Q, mapping Q-subgroups to Q-subgroups ([Bor91, Cor. I.1.4]). Then it follows from the minimality of H and H̃ that π(H̃) = H.
As we will see in Lemma 3.4, under the assumptions of Theorem 3.1, condition (irred) holds. In particular, the conclusion of Lemma 3.2 applies. Hence H is semisimple.
Let U be the unipotent radical of H̃. Then U ⊂ ker π, and since ker π ∩ H̃ is a unipotent normal subgroup, U = ker π ∩ H̃. This means that in the affine map determined by h ∈ H' on R^n, π(h) is the linear part, and U = U_R acts on R^n by translations. This implies the equality

span{u(x) − x : x ∈ R^n, u ∈ U} = span{u(0) : u ∈ U},    (3.5)

and we denote the subspace of R^n appearing in (3.5) by V_0. Clearly, V_0 is the set of real points of a Q-subspace of C^n, since U is defined over Q.
Since H 1 normalizes U, V 0 is H 1 -invariant, and since H 1 " πpH 1 q is the group of linear parts of elements of H 1 , H 1 also preserves V 0 . By (irred) we must have V 0 " t0u or V 0 " R n . If V 0 " t0u then U " t0u. If V 0 " R n then U contains translations in n linearly independent directions and hence U » R n is the entire group of translations of R n . This gives the description of the translational part of H 1 , in assertion (i). Assertion (ii) follows from Lemma 3.2.
The next proposition shows that all the cases described in Theorem 3.1 do arise. Namely we have: Proposition 3.3. For any k ě d ě 2 and any real number field K, there are R-algebraic groups H and H 1 in M, and L 1 " g 1 Z n P Y n , where n " k degpK{Qq and g 1 P M, such that the following hold: ‚ H 1 is defined over Q, and is Q-isogenous to Res K{Q pGq, where G is K-isomorphic to SL k . ‚ H is either equal to H 1 (linear case) or to H 1˙Rn (affine case). ‚ The orbit HL 1 is closed and supports an H-homogeneous probability measure ν. The pushforward Ψ˚ν is an RMS measure. The same statement is true with d " 2, n " 2k degpK{Qq, and with G being K-isomorphic to Sp 2k for some k ě 2.
Proof. The proof amounts to reversing the steps in the preceding discussion. For concreteness, we give it for G = SL_k. Let D := deg(K/Q) and n := Dk. The standard action ϕ of G_K on K^k gives rise to a Q-embedding Res_{K/Q}(ϕ) : Res_{K/Q}(G) → SL_n. Let H̄_1 denote the connected component of the identity in the group of R-points in Res_{K/Q}(G). Similarly to (2.3) and (2.4), we refer to

SL_d(R) ∋ g ↦ ( g 0 ; 0 I_{n−d} ) ∈ M̄ = SL_n(R)    (3.6)

as the top-left corner embedding of SL_d(R) in M̄. By the explicit description of restriction of scalars described in §2.4, there is g_1 ∈ M̄ such that H̄ := g_1 H̄_1 g_1^{-1} contains the top-left corner embedding of SL_d(R) in M̄, and up to scaling, g_1 Z^n is the geometric embedding of O^k as in (2.7), where O is the ring of integers in K. In particular, the orbit H̄ g_1 Z^n is a closed orbit supporting an H̄-homogeneous measure in X_n.
Recall that there is an embedding of M̄ in M and of X_n in Y_n (respectively, as the stabilizer of the origin in the standard action on R^n, and as the set of lattices in the space of grids). We let H_1 denote the image of H̄_1 under this embedding, and in the linear case we set H := H_1, let HL_0 be the image of H̄ g_1 Z^n under this embedding, and let ν be the H-homogeneous measure on HL_0. Because the action of SL_d(R) is ergodic with respect to ν, we can find g_1 so that, for L_1 = g_1 Z^n, the closure of SL_d(R)L_1 equals HL_1 = HL_0. It is not hard to check that with these choices, the desired conclusions hold. The proof in the affine case is similar, taking H = H_1 ⋉ R^n and the orbit π̄^{-1}(H̄ g_1 Z^n).
3.3. Preparations for the proof of Lemma 3.2. We say that a subspace V ⊂ R^n is L_1-rational if it is of the form g_1 W for some rational subspace W ⊂ R^n, i.e., a subspace spanned by vectors with rational entries.
Lemma 3.4. The following implications hold.
(a) (D) ⟹ V_phys is not contained in a proper L_1-rational subspace.
(b) (I) ⟹ V_int does not contain a nontrivial L_1-rational subspace.
(c) (D) and (I) ⟹ (irred).
Variants of statements (a) and (b) are given in [Ple03], but we give a complete proof for the convenience of the reader.
Proof. We will prove all three statements by contradiction. Suppose that (a) fails, so that there is a proper L 1 -rational subspace W containing V phys . Let W K be an L 1 -rational complement of W . Since W K is L 1 -rational, L 1 is mapped to a lattice in W K under the projection R n Ñ W K , and hence the projection of L 1 to W K is discrete. On the other hand, R n Ñ W K factors through V int since V phys Ă W , and by (D) the image of L 1 is dense in V int . Thus, the projection of L 1 is dense in W K , a contradiction. Now suppose that (b) fails, and V int contains a nontrivial L 1 -rational subspace W . Then V int , which is the kernel of the map R n Ñ V phys , contains W X L 1 , which by assumption is nontrivial. This contradicts (I).
Now suppose (D) and (I) hold but (irred) fails, so that there is a proper H'-invariant Q-rational subspace W. From (b) we know that g_1 W is not contained in V_int. Hence some u ∈ g_1 W can be written as u = u_p + u_i, where u_p ∈ V_phys \ {0} and u_i ∈ V_int. We can find g ∈ SL_d(R) such that g u_p ≠ u_p; then gu − u = g u_p − u_p is a nonzero element of g_1 W ∩ V_phys, and hence g_1 W ∩ V_phys is nontrivial. Since SL_d(R) acts irreducibly on V_phys, V_phys ⊂ g_1 W. This contradicts the conclusion of (a).
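The discrete-versus-dense dichotomy driving this proof can be seen concretely in a toy case. The following numerical sketch (ours, with K = Q(√2), not taken from the paper) shows how the projection of a geometric embedding to a single summand fails to be discrete:

```python
from math import sqrt

# The geometric embedding of Z[sqrt 2] in R^2 sends x = a + b*sqrt(2) to
# (x, sigma(x)), where sigma(a + b*sqrt(2)) = a - b*sqrt(2).  The image
# is a lattice, but its projection to one coordinate axis is a dense
# subgroup of R: powers of the unit eps = sqrt(2) - 1 have nonzero first
# coordinate tending to 0, while the conjugate coordinate blows up.
s = sqrt(2)
a, b = 1, 0                         # coefficients of eps^0 = 1
for _ in range(12):
    a, b = 2 * b - a, a - b         # multiply a + b*sqrt(2) by -1 + sqrt(2)

x, sigma_x = a + b * s, a - b * s
assert 0 < x < 1e-4                 # eps^12 is tiny but nonzero,
assert abs(sigma_x) > 1e4           # while its Galois conjugate is huge;
assert abs(x * sigma_x - 1) < 1e-5  # their product is the norm N(eps)^12 = 1
```

So the projection of the lattice to the first axis contains nonzero points arbitrarily close to 0, hence is a non-discrete (so dense) subgroup, exactly the mechanism exploited in the proof of (a).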
Theorem 3.5 (Morris). Let n ě d ě 2, and let S be a connected real algebraic group which is R-almost simple, and contains the image of SL d pRq under the top-left corner embedding (see (3.6)). Then there are k ě d, ℓ ě d and g P SL n pRq such that gSg´1 is the image of either SL k pRq or Sp 2ℓ pRq under the top-left corner embedding, and the latter can only occur when d " 2.
In this statement, by the 'top-left corner embedding of Sp_{2k}(R)', we mean the image under (3.6), that is, the elements of SL_{2k}(R) stabilizing a non-degenerate alternating bilinear form on R^{2k}. As is well-known, such a form can be taken to be defined by

ω(x, y) = x^T J y, where J = ( 0 I_k ; −I_k 0 ).

This result was proved by Dave Morris in 2014, in connection with prior work of one of the authors and Solomon. Namely, the result appeared in an initial arXiv version [SW14] (in a slightly different form) but eventually did not appear in the published version [SW16].
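For illustration, here is a small numerical sketch of ours for k = 2, using the standard alternating form J on R^4 (any non-degenerate alternating form is equivalent to it):

```python
import numpy as np

# Sp_4(R) = {g in SL_4(R) : g^T J g = J} for the standard form J.
# Block matrices diag(A, (A^T)^{-1}) are symplectic, and symplectic
# matrices have determinant 1 (spot-checked numerically below).
k = 2
J = np.block([[np.zeros((k, k)), np.eye(k)],
              [-np.eye(k), np.zeros((k, k))]])

rng = np.random.default_rng(2)
A = rng.standard_normal((k, k)) + 3 * np.eye(k)   # generically invertible
g = np.block([[A, np.zeros((k, k))],
              [np.zeros((k, k)), np.linalg.inv(A).T]])

assert np.allclose(g.T @ J @ g, J)        # g preserves the form,
assert np.isclose(np.linalg.det(g), 1.0)  # hence g lies in SL_4(R)
```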
For any k ě d, we will refer to the image of SL k pRq under the top-left corner embedding in (3.6) (replacing d with k in that embedding) as the top-left copy of SL k pRq. Clearly, with respect to the decomposition the top-left copy of SL k pRq acts via its standard action on the first summand, and the second summand is the set of vectors fixed by the action. Let k be maximal, such that S contains a conjugate (over SL n pRq) of the top-left copy of SL k pRq. To make the ideas more transparent we separate the proof into cases according to whether k ě 3 (the easier case) or k " 2. The proofs in these cases are not independent -readers interested in the case k " 2 are encouraged to first read the proof for k ě 3.
Proof in case k ≥ 3. We recall the following result of Mostow [Mos55]: If G_1 ⊂ ··· ⊂ G_r ⊂ SL_n(R) are connected reductive real algebraic groups, then there is x ∈ SL_n(R) such that x^{-1} G_i x is self-adjoint for every i. That is, if g ∈ x^{-1} G_i x, then the transpose of g is also in x^{-1} G_i x.
Replacing S by a conjugate, we may assume that S contains the top-left embedding of SL k pRq, which we denote by F . By Mostow's theorem, there is x P SL n pRq, such that x´1F x and x´1Sx are selfadjoint. Let V be the pn´kq-dimensional subspace of R n which is pointwise fixed by F . Since SO n pRq acts transitively on the set of subspaces of any given dimension, there is some h P SO n pRq, such that xhpV q " V . After replacing x with xh, we may assume that x´1F x fixes pointwise the second summand in the splitting (3.7), and x´1F x and x´1Sx are self-adjoint (because this property is not affected by conjugation by an element of SO n pRq). We conclude that x´1F x " F . Thus, we may assume that S is self-adjoint and contains F . We will assume that S ‰ F and derive a contradiction to the maximality of k.
Since F ⊊ S and both are connected, their Lie algebras f, s satisfy dim f < dim s.
For 1 ≤ i, j ≤ n, let e_{i,j} be the elementary matrix with 1 in the (i, j) entry, and all other entries 0. Write

sl_n(R) = f ⊕ z ⊕ X_1 ⊕ ··· ⊕ X_k ⊕ Y_1 ⊕ ··· ⊕ Y_k,    (3.8)

where
‚ sl_n(R) and f are the Lie algebras of SL_n(R) and F, respectively,
‚ z is the subspace of sl_n(R) fixed pointwise by Ad(F), where Ad : SL_n(R) → Aut(sl_n(R)) is the adjoint representation,
‚ X_i is the linear span of {e_{i,j} : k+1 ≤ j ≤ n}, and
‚ Y_j is the linear span of {e_{i,j} : k+1 ≤ i ≤ n}.
Now we denote by A the group of diagonal matrices in F with positive entries. We write an element a ∈ A as

a = diag(a_1, a_2, . . . , a_{k−1}, (a_1 a_2 ··· a_{k−1})^{-1}, 1, . . . , 1),    (3.9)

and denote by χ_i the characters a ↦ a_i, where a_k := (a_1 ··· a_{k−1})^{-1}. Since k ≥ 3, the characters χ_i, χ_i^{-1} are distinct, for i = 1, . . . , k, and the subspaces X_1, X_2, . . . , X_k and Y_1, Y_2, . . . , Y_k are the corresponding weight spaces; that is,
‚ X_i = {x ∈ sl_n(R) : Ad(a)(x) = χ_i(a) x for all a ∈ A}, and
‚ Y_j = {x ∈ sl_n(R) : Ad(a)(x) = χ_j^{-1}(a) x for all a ∈ A}.
We will use repeatedly the fact that if l is an Ad(A)-invariant subspace of sl_n(R), and v ∈ l has a nontrivial projection onto some weight space, then this projection is contained in l.
Since A ⊂ S, s is invariant under Ad(A). Since S is R-almost simple and dim f < dim s, s cannot be contained in f ⊕ z, and hence s projects nontrivially to some X_i or Y_j. In fact, since S is self-adjoint, it must project nontrivially to both X_i and Y_i, for some i. Since X_i is a weight space of Ad(A), we find that X_i ∩ s is nontrivial. Conjugating by an element of I_k × SO_{n−k}(R), we may assume that s contains the matrix e_{i,k+1}. Applying an appropriate element of Ad(SO_k(R)) shows that e_{k,k+1} ∈ s. Then, since S is self-adjoint, s also contains e_{k+1,k}. Therefore, s contains the Lie subalgebra generated by f, e_{k,k+1}, and e_{k+1,k}, which is the Lie algebra of F', the top-left copy of SL_{k+1}(R). Thus S contains F', contradicting the maximality of k, and completing the proof in case k ≥ 3.
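The elementary weight computation used throughout this argument, Ad(a) e_{i,j} = a e_{i,j} a^{-1} = (a_i / a_j) e_{i,j} for diagonal a, can be checked directly; a small numerical sketch of ours:

```python
import numpy as np

# For diagonal a = diag(a_1, ..., a_n), the adjoint action satisfies
# Ad(a) e_{i,j} = (a_i / a_j) e_{i,j}, so each elementary matrix e_{i,j}
# spans an Ad(A)-weight space.
n = 4
d = np.array([2.0, 3.0, 5.0, 7.0])
a = np.diag(d)
a_inv = np.linalg.inv(a)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        e = np.zeros((n, n))
        e[i, j] = 1.0
        assert np.allclose(a @ e @ a_inv, (d[i] / d[j]) * e)
```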
Proof in case k " 2. In this case we also have d " 2. Arguing as in the case k ě 3 we may assume that S properly contains F , the top-left copy of SL 2 pRq, and is self-adjoint. Let ℓ be the maximal number so that S contains a copy of H def " F 1ˆ¨¨¨ˆFℓ , where each F r is isomorphic to SL 2 pRq and there is an where the spaces V 1 , . . . , V ℓ are two dimensional, and each F r acts linearly on V r and trivially on À s‰r V s . By assumption ℓ ě 1, and there is a conjugation taking H into a top-left copy of SL t pRq, where t " 2ℓ ě 2. We replace H and S by their images under this conjugacy (retaining the same names H and S). By Mostow's theorem we can assume that H and S are both self-adjoint.
Our first goal is to show that

S is also contained in the top-left copy of SL_t(R).    (3.10)

Indeed, in analogy with (3.8), consider the decomposition

sl_n(R) = l ⊕ z ⊕ m, where m := X_1 ⊕ ··· ⊕ X_t ⊕ Y_1 ⊕ ··· ⊕ Y_t,

and
‚ l is the Lie algebra of the top-left SL_t(R),
‚ z is the Lie algebra of the centralizer of the top-left SL_t(R),
‚ X_i is the linear span of {e_{i,j} : t+1 ≤ j ≤ n}, and
‚ Y_j is the linear span of {e_{i,j} : t+1 ≤ i ≤ n}.
With this notation, our claim (3.10) is that s ⊂ l.
We note that

s does not contain a nonzero element of any X_i or any Y_i.    (3.11)

Indeed, if v ∈ (s ∩ X_i) \ {0}, we could re-index to assume i = 1, and conjugate by an element of I_t × SO_{n−t}(R) and rescale to assume v = e_{1,t+1}. Since s is self-adjoint, we also have e_{t+1,1} ∈ s. Since f_1, e_{t+1,1} and e_{1,t+1} generate a Lie algebra isomorphic to sl_3(R), this gives a contradiction to the choice of k and proves (3.11). If s ⊄ l, using that s is simple and the Lie algebras l, z commute, we see that the projection of s onto m is nontrivial; indeed, if s ⊂ l ⊕ z then the kernel of the projection of s to z contains f and by simplicity is equal to s, so that s ⊂ l.
Let A_1 be the intersection of H with the diagonal subgroup and let a_1 be its Lie algebra. For each odd index i < t, the spaces X_i ⊕ Y_{i+1} and X_{i+1} ⊕ Y_i are weight spaces for Ad(A_1), and hence there is some nonzero u ∈ s lying in one of these weight spaces. Re-indexing, conjugating and rescaling as in the proof of (3.11), we can assume u = e_{1,t+1} + Σ_{j≥t+1} a_j e_{j,2}, where the a_j are not all zero. By a further conjugation by an element of I_t × SO_{n−t}(R) that fixes e_{1,t+1}, we can also assume that a_j = 0 for j > t+2; that is, we can write u = e_{1,t+1} + a e_{t+1,2} + b e_{t+2,2}, with (a, b) ≠ (0, 0).
Since w and v are both nonzero elements of s 1 2,4 , by (3.16) they are scalar multiples of each other and thus there is c ‰ 0 so that w " cv. This forces´a " cb and´b " ca and so c "˘1, proving (3.17).
Note that for the case ℓ = 2 we only applied one conjugation, namely the conjugation swapping the indices 3, 4. Thus, by induction on ℓ, we see that after a conjugation, we have the following. For i ∈ {1, . . . , ℓ−1}, let SL_4^(i)(R) be the copy of SL_4(R) embedded in SL_n(R) in a 4×4 block corresponding to indices 2i−1, 2i, 2i+1, 2i+2. Let H^(i) = F_i × F_{i+1} ⊂ SL_4^(i)(R) be the corresponding diagonal copies of SL_2(R), and let s^(i) be the intersection of s with the Lie algebra of SL_4^(i)(R). Then s^(i) is the obvious embedding of sp(4, R) (namely, the embedding given for i = 1 by (3.12) and (3.13)). The Lie algebras s^(i) generate sp(2ℓ, R) (namely, the Lie algebra of the top-left Sp_{2ℓ}(R)). This implies that S contains Sp_{2ℓ}(R). Since Sp_{2ℓ}(R) is a maximal subgroup among the connected Lie subgroups of SL_{2ℓ}(R) (see [Kar55]), we must have that S = Sp_{2ℓ}(R).
3.4. Proof of Lemma 3.2. Since π̄ is proper, it maps closed orbits to closed orbits. Since H' = g_1^{-1} H g_1, by Theorem 2.4, H' is the connected component of the identity in the group of real points of a Q-algebraic group H. From now on we replace F with its image under π, i.e., we denote F = SL_d(R).
We need to show that H admits the description given in the statement. We divide the proof into steps.
Step 1: H is semisimple. Let U be the radical of H. By Theorem 2.4, it is defined over Q and unipotent, U = U_R is the unipotent radical of H', and U is connected. Let V_U := {v ∈ R^n : u(v) = v for all u ∈ U} denote the subspace of U-fixed vectors. Since U is defined over Q, V_U is defined over Q.
Furthermore, since every unipotent subgroup can be put in an upper triangular form, V U ‰ t0u, and is a proper subspace of R n unless U is trivial. Since U is normal in H 1 , the space V U is H 1 -invariant, and thus by assumption (irred), V U is not a proper subspace of R n . It follows that U is trivial, and hence H 1 is semisimple. Therefore so is H.
For a group M and normal subgroups M_1, . . . , M_k, the product is the subgroup

∏ M_i := {m_1 m_2 ··· m_k : m_i ∈ M_i}.

Note that ∏ M_i is also normal and does not depend on the ordering of the M_i. Let k_0 be one of the fields Q or R. Recall that an almost direct product is the image of a direct product under a homomorphism with finite kernel (that is, isogenous to a direct product). A semisimple k_0-group is an almost direct product of its k_0-almost simple normal subgroups, and such a decomposition is unique up to permuting the k_0-almost simple factors.
We write H in two ways: as an almost direct product of its R-almost simple factors S_i, and as an almost direct product of its Q-almost simple factors T_j, and (with a slight abuse of notation) let S_i and T_j also denote the connected components of the identity in the groups of R-points of the corresponding algebraic factors. Since every T_j can be further decomposed into R-almost simple factors, and since these decompositions are unique, the decomposition of H into the S_i refines the decomposition of H into the T_j. In other words, there is a partition of the S_i into subsets such that each T_j is a product of the S_i in one subset of the partition. Then H' is the product of the S_i: every h ∈ H' can be written as h = h_1 ··· h_r with h_i ∈ S_i, and if h = h'_1 ··· h'_r is another such presentation, then for each i, h'_i h_i^{-1} belongs to the finite center of H'.
Step 2: F 1 is contained in one of the S i , and H is Q-almost simple. The second assertion follows from the first one. Indeed, by reindexing, let S 1 and T 1 denote respectively the connected component of the identity in the real points of the R-and Q-simple factors containing F 1 . Then S 1 Ă T 1 and T 1 does not properly contain the real points of any Q-subgroup containing S 1 , and by the last assertion of Theorem 2.4 we have that H 1 " T 1 .
Turning to the first assertion, let Z(H') denote the center of H', for each i let S'_i be the quotient group H'/(Z(H') · ∏_{j≠i} S_j), let F'_i be the image of F' in S'_i, and let

I := {i : F'_i is nontrivial}, H'' := ∏_{i∈I} S_i.

Note that i_0 ∈ I if and only if for any subset F'' ⊂ F' which generates a dense subgroup, there is f' ∈ F'' which can be written as a product of elements f'_i ∈ S_i with f'_{i_0} having nontrivial image in S'_{i_0}. Clearly F' ⊂ H'', and our goal is to show that H'' is equal to one of the S_i, or in other words that #I = 1. Also, for i ∈ I, F'_i is isogenous to SL_d(R). Recall that a representation of a group H on a vector space V is isotypic if V is the direct sum of k ∈ N isomorphic irreducible representations for H, where k is referred to as the multiplicity. We will also use the term H-isotypic, if we want to make the dependence on H explicit. A linear representation of a semisimple group has a unique presentation as a direct sum of isotypic representations (up to permuting factors). Let V'_phys := g_1^{-1}(V_phys) and V'_int := g_1^{-1}(V_int). Then the decomposition R^n = V'_phys ⊕ V'_int is the decomposition of R^n into F'-isotypic representations, and the action of F' on V'_phys is irreducible. In particular, the multiplicity of the representation on V'_phys is equal to one. Let V_1 ⊕ ··· ⊕ V_t be a decomposition of R^n into H''-isotypic representations. Since F' ⊂ H'', each V_ℓ is F'-invariant, and decomposes further into isotypic representations for F'. Since V'_phys is an isotypical component of F' of multiplicity one, V'_phys is contained in one of the V_ℓ. By renumbering we can assume V'_phys ⊂ V_1. Since F' acts on V'_phys irreducibly, the action of H'' on V_1 is irreducible, and the H''-isotypic component associated to V_1 has multiplicity one. Since F' acts trivially on V'_int, which is a complementary subspace to V'_phys, the action of F' on each V_ℓ is trivial for ℓ = 2, . . . , t; that is,

F' ⊂ {h ∈ H'' : h acts trivially on V_2 ⊕ ··· ⊕ V_t}.    (3.18)

The right-hand side of (3.18) is a normal subgroup of H'', and thus a product ∏_{i∈J} S_i for some J ⊂ I.
By the assumption that F'_i is nontrivial for each i ∈ I, we must have that J = I, that is, the group on the right-hand side of (3.18) must coincide with H''. This means that for ℓ ≥ 2, the V_ℓ are trivial representations for H'', and hence for each S_i, i ∈ I.
Let F'' denote the set of elements of F' whose eigenvalues on V'_phys are all real, distinct from each other, and not equal to 1. Since these conditions are invariant under conjugation and F' is simple, F'' generates a dense subgroup of F'. Given f' ∈ F'', write f' as a product of elements f'_i, where f'_i ∈ S_i. Then the elements f'_i commute with each other and with f'. Thus each f'_i fixes the eigenspaces of f', and hence each f'_i preserves the eigenspace decomposition of the action of f' on R^n. In particular, f'_i preserves V'_phys for each i ∈ I. Re-indexing if necessary we can assume that 1 ∈ I, and suppose by contradiction that there is i_0 ∈ I \ {1}. There is f' ∈ F'' such that, when writing f' as a product of elements f'_i ∈ S_i, f'_1 acts on V'_phys with infinite order (this property does not depend on the presentation of f' as a product of the f'_i). Then the action of f'_1 on V'_phys preserves an eigenspace W, with d_1 := dim W < d = dim V'_phys. Since the action of S_{i_0} commutes with the action of f'_1, the space W is preserved by S_{i_0}, and hence by f'_{i_0}. The group generated by all such elements f'_{i_0} is isogenous to F'_{i_0} and hence to SL_d(R). Thus, it has no nontrivial representations on any d_1-dimensional real vector space, for d_1 < d. This implies that the action of S_{i_0} on W has an infinite kernel, but since S_{i_0} is simple, the action of S_{i_0} on W must be trivial.
So the space W' := span_{S_1}(W) ⊂ span_{S_1}(V'_phys) ⊂ V_1 is acted on trivially by S_{i_0} for any i_0 ∈ I \ {1}. In particular, W' is H''-invariant. By the irreducibility of the H''-action on V_1, this means that V_1 = W', and therefore S_{i_0} acts trivially on V_1. It follows that F'_{i_0} acts trivially on V_1 for each i_0 ∈ I \ {1}. Since S_{i_0} acts trivially on V_ℓ for all i_0 ∈ I and all ℓ ≥ 2, we get that in any decomposition of f' ∈ F', all the elements f'_i for i ≥ 2 act trivially on R^n. That is, I = {1}.
Step 3: Restriction of scalars, in explicit form. Since H is Q-almost simple, it is obtained by restriction of scalars from an absolutely almost simple algebraic group defined over a number field K; see [BT65, 6.21] for a proof. We will reprove this result in our setup, obtaining more information about the embedding of H' in SL_n(R). Using Step 2 and re-indexing, let S_1 be the R-almost simple factor of H containing F', and set G := S_1 and G° := ((S_1)_R)°, the connected component of the identity in its group of R-points. It follows from [BT65, §2.15b] that G° is Zariski connected, which implies via [Bor91, Cor. 18.3] that G° is Zariski dense in G. From Theorem 3.5, we only have two possibilities for G°, and its Zariski closure is a conjugate of either SL_k or Sp_{2ℓ}. Hence G_R is a conjugate of SL_k(R) or Sp_{2ℓ}(R). In particular, we have that G is actually C-almost simple. Since H is defined over Q, the C-almost simple factors of H are defined over a finite extension of Q; this is well-known (see e.g. [BT65, §2.15b]) but we were unable to find a suitable reference, so we sketch the argument. The group H has a maximal torus which is defined over Q and split over a finite extension L of Q by [Bor91, §8, §18]. For each root α, the group G_α, which is the centralizer of the connected component of the identity in ker α, is defined over L (see [Bor91, Proof of Thm. 18.7]). The groups G_α generate H [Bor91, §14] and each C-almost simple factor either contains G_α, or intersects it trivially. Thus, any C-almost simple factor S can be described as the set of elements commuting with all the G_α not contained in S. In particular, the C-almost simple factors are defined over L.
Replacing L if necessary with its Galois extension, suppose that L is the smallest Galois extension of Q such that all C-almost simple factors of H are defined over L. Let GalpL{Qq denote the Galois group of L, which we can think of explicitly as the group of field automorphisms of L. If V Ă A n is an affine variety defined over L then for any σ P GalpL{Qq there is a new affine variety, which we will denote by σ V, obtained by acting on the coefficients of the defining polynomial equations, and σ acts on the points of L n by acting separately on each component. The assignments V Þ Ñ σ V and σ : L Ñ L are compatible in the sense that for x P L n , x P V L if and only if σpxq P σ V L . Moreover, if V is defined over L, then it is defined over Q if and only if σ V " V for every σ P GalpL{Qq; this follows from the more general fact (see [Bor91,), that if L 1 is a number field then V is defined over L 1 if and only if for any σ P GalpQ{Qq such that σ| L 1 " Id we have σ V " V, whereQ denotes the algebraic closure of Q.
Let D denote the number of C-almost simple factors of H, or equivalently, the number of L-almost simple factors of H. The action of GalpL{Qq permutes these factors, and this permutation action is transitive since H is Q-almost simple. Thus, the subgroup ∆ def " tσ P GalpL{Qq : σ G " Gu is of index D in GalpL{Qq, and the C-almost simple factors are the (distinct) images of G by elements σ 1 , . . . , σ D P GalpL{Qq, where the σ i are coset representatives of GalpL{Qq{∆. Let K def " tx P L : @σ P ∆, σpxq " xu.
Complex conjugation z Þ Ñz induces an automorphism of L belonging to ∆ since G is defined over R, hence we see that K Ă R. By the Galois correspondence, degpK{Qq " D and ∆ " tσ P GalpL{Qq : for all x P K, σpxq " xu.
We claim that G is defined over K, and G is not defined over any proper subfield of K. Indeed, if σ P GalpQ{Qq satisfies σ| K " Id, then σ| L P ∆ and hence σ G " G. Furthermore, if G were defined over a proper subfield K 1 Ł K, then its stability group ∆ 1 would be of index D 1 ă D and therefore the collection t σ G : σ P GalpL{Qqu would have cardinality D 1 . We will show that H is isomorphic (as a Q-algebraic group) to Res K{Q pGq. Moreover, we will show that the given inclusion H ãÑ SL n is, up to a conjugation over SL n pR XQq, the matrix presentation described in §2.4. By Theorem 3.5 G is, up to a conjugation in SL n pRq, either the top-left copy of SL k pRq or the top-left copy of Sp 2k pRq for some k ě 2 (and the latter can only arise when d " 2). In the remainder of the proof we will refer to these two cases as the SL k case and the Sp 2k case.
We know that G is conjugate over SL n pRq to the top-left copy of SL k pRq (in the SL k case) or Sp 2k pRq (in the Sp 2k case). Therefore there is a G-invariant subspace V Ă R n , of dimension k (in the SL k case) and 2k (in the Sp 2k case) and a complementary subspace V 0 such that R n " V ' V 0 , the action of G on V is irreducible, and V 0 is the subspace of G-fixed vectors in R n . We claim that we can recover V explicitly as V " span tgx´x : g P G, x P R n u . (3.19) Indeed, denote the RHS of (3.19) by W . We clearly have W Ă V , and for the reverse inclusion, it is enough to show that W is G-invariant. To see this, let g 0 , g P G and x P R n . Then where g 1 def " g 0 gg´1 0 and x 1 def " g 0 x. This shows that the generators of W are mapped to W by any g 0 P G.
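Formula (3.19) can be tested numerically in a toy case. The following sketch (ours) takes G to be the top-left copy of SL_k(R) in SL_n(R) and checks that span{gx − x} recovers the k-dimensional invariant subspace:

```python
import numpy as np

# For G the top-left copy of SL_k(R) inside SL_n(R), the span of
# {g x - x : g in G, x in R^n} is exactly the k-dimensional subspace
# V = R^k x {0} on which G acts, as claimed in (3.19).
rng = np.random.default_rng(0)
n, k = 5, 3

def topleft(A, n):
    """Embed a k x k matrix A as the top-left block of the n x n identity."""
    M = np.eye(n)
    M[:A.shape[0], :A.shape[1]] = A
    return M

vectors = []
for _ in range(20):
    A = rng.standard_normal((k, k))
    A /= np.abs(np.linalg.det(A)) ** (1.0 / k)   # normalize so |det A| = 1
    g = topleft(A, n)
    x = rng.standard_normal(n)
    vectors.append(g @ x - x)

W = np.array(vectors)
rank = np.linalg.matrix_rank(W)
assert rank == k                   # span{gx - x} has dimension k ...
assert np.allclose(W[:, k:], 0)    # ... and lies in R^k x {0}
```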
From (3.19) and since G is defined over K Ă R, we deduce that V " V R for a subspace V Ă A n defined over K. Clearly V 0 " pV 0 q R for a subspace V 0 which is also defined over K. Arguing as in (3.19), but using F 1 in place of G and V 1 phys in place of V, we have V 1 phys " spantf 1 x´x : f 1 P F 1 , x P R n u, and therefore V 1 phys Ă V . We can think of VQ as aQ-linear subspace ofQ n , and can discuss the action of GalpQ{Qq as before. We have that pG i qQ preserves the decompositionQ n " σ i VQ ' p σ i V 0 qQ. We claim thatQ n " σ 1 VQ '¨¨¨' σ D VQ . (3.20) To see this, let W denote the vector subspace of A n spanned by Ť i σ i V. Since it is GalpQ{Qq-invariant, it is defined over Q. Since V 1 phys " g´1 1 V phys and Z n " g´1 1 L 1 , Lemma 3.4 implies that V 1 phys is not contained in any proper rational subspace of R n . This implies that W R " R n and thus W " A n . The groups G i commute, and σ i V is a G i -isotypic component of multiplicity one. For each pair of distinct i, j, each g P G i defines an intertwining operator for the action of G j , and thus by Schur's lemma (see e.g. [Kna02, Cor. 4.9]), the action of G i on σ j V factors through an abelian group. Since G i is simple, this means that each G i acts trivially on σ j V for j ‰ i. In particular, σ i V X ř j‰i σ j V " t0u, and we have shown (3.20).
It follows from (3.20) that R n is the space of R-points of Res K{Q pVq. Write D " r`2s as in §2.4. Since dim σ i V " dim σ j V for every i ‰ j, we have that H 1 is realized explicitly in r`s blocks. For real embeddings σ i , i " 1, . . . , r we have that the dimension (over R) of σ i V R is k (in the SL k case) and 2k (in the Sp 2k case), and for σ r`j , j " 1, . . . , s which are non-conjugate complex embeddings of K we have that the dimension (over R) of σ r`j V C is 2k (in the SL k case) and 4k (in the Sp 2k case). Putting this together we get that n " Dk (in the SL k case) and n " 2Dk (in the Sp 2k case), and the embedding of H 1 in SL n pRq is the one given in (2.6), where ϕ : SL k Ñ SL k is the identity map (in the SL k case), and ϕ : Sp 2k Ñ SL 2k is the natural embedding (in the Sp 2k case). In particular, we have proved that H " Res K{Q pGq, with the explicit form of restriction of scalars given in §2.4.
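The dimension count n = Dk can be made concrete in the simplest nontrivial case. The sketch below (our own illustration, with the hypothetical choice K = Q(√2), so D = 2) represents an element of K as a 2ˆ2 rational block, as in the restriction of scalars construction of §2.4; over R the block diagonalizes into the two real embeddings σ 1 , σ 2 :

```python
import numpy as np

def mat(a, b):
    # matrix of multiplication by a + b*sqrt(2) in the basis (1, sqrt(2)) of Q(sqrt(2))
    return np.array([[a, 2 * b], [b, a]], dtype=float)

# the representation is multiplicative: (1 + 2*sqrt(2)) * (3 + sqrt(2)) = 7 + 7*sqrt(2)
assert np.allclose(mat(1, 2) @ mat(3, 1), mat(7, 7))

# its eigenvalues are the images under the two real embeddings sigma_1, sigma_2
r2 = np.sqrt(2.0)
assert np.allclose(sorted(np.linalg.eigvals(mat(1, 2))), sorted([1 - 2 * r2, 1 + 2 * r2]))
```

Stacking k copies of such blocks (one per matrix entry of an element of SL k pKq) is exactly how an n = Dk dimensional real representation arises.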
Step 4: G as a K-group. It remains to identify the K-isomorphism type of G. We proved in Step 3 that K Ă R, the decomposition R n " V ' V 0 into G-invariant subspaces is defined over K, and there is a conjugacy over SL n pRq sending G to the top-left corner embedding of SL k pRq or of Sp 2k pRq (as defined after the statement of Theorem 3.5). We now show that as a K-group, G is K-isomorphic to either SL k or Sp 2k .
Consider first the SL k -case. Let W ' W 0 " C k ' C n´k (whose real points we used in equation (3.7)), and note that both subspaces are defined over K. Since V, V 0 are K-subspaces, we can find g P SL n pKq, such that gV " W, gV 0 " W 0 , and hence, G 1 " gGg´1 is contained in the top-left corner embedding of SL k . In particular, the groups G and G 1 are K-isomorphic, and G 1 R is R-isomorphic to the top-left SL k pRq. Let G 2 " SLpWq " SL k (top-left corner embedding) considered as a K-group. Then G 2 R is also R-isomorphic to SL k pRq, and thus G 1 and G 2 have the same dimension (as algebraic varieties). Since G 1 K " gG K g´1 Ă G 2 , there is a K-embedding G ãÑ G 2 , and since these groups have the same dimension and are Zariski connected, G and G 2 are K-isomorphic. Now consider the Sp 2k case. We have shown that dim V " 2k is even, and we adjust the definitions W ' W 0 " C 2k ' C n´2k . We let again g P SL n pKq be the conjugating element sending G to G 1 " gGg´1 Ă SLpWq. G 1 R is R-isomorphic to Sp 2k pRq, that is, there is a nondegenerate alternating bilinear form ω on W R such that G 1 R is the group of all R-linear transformations of W preserving ω. Note that ω is R-bilinear and takes values in R. We claim that there is a form ω 1 which is defined over K on W (that is, takes values in K when evaluated on elements of W K ), so that G 1 R is contained in the group of R-linear transformations of W preserving ω 1 . Once the claim is proved, we will have that there is a K-embedding G ãÑ SppW, ω 1 q (the group of linear transformations of W preserving ω 1 ) which will be an isomorphism by dimension considerations as in the preceding case, thus proving that G is K-isomorphic to SppW, ω 1 q -Sp 2k .
To prove the claim, consider the collection Ź 2 pW˚q of alternating bilinear forms on W. This collection is a linear space, and the nondegenerate forms form a Zariski open subset (since nondegeneracy is equivalent to the non-vanishing of the determinant of the Gram matrix of the form). Since G 1 is a K-group, the subspace Ź 2 pW˚q G 1 of G 1 -invariant forms is a K-subspace, which is nonempty since its collection of R-points contains ω. Since K-points are Zariski dense in K-subspaces, we find that there are nondegenerate symplectic K-forms which are G 1 -invariant. Finally, the proof of Theorem 3.5 shows that in the symplectic case, the space gV 1 phys -R 2 is spanned by two vectors x, y satisfying ωp x, yq " 1; that is, gV 1 phys is a symplectic subspace for ω. (We recall at this point that V 1 phys " g´1 1 V phys Ă V , g is the conjugation mapping V to W , and ω is the real symplectic form on W induced by the isomorphism of G 1 R » Sp 2k pRq from Theorem 3.5.) Write ω as a linear combination of forms ω 1 which are defined over K and G 1 -invariant. Since ωp x, yq ‰ 0, there has to be some ω 1 P Ź 2 pW˚q G 1˘K for which ω 1 p x, yq ‰ 0. This shows that V 1 phys is a symplectic subspace of V under the form induced by ω 1 .
Remark 3.6. In the symplectic case, Step 4 also shows that there is a symplectic form on the entire space R n that is preserved by the entire group H 1 . Indeed, the form ω 1 , which is symplectic and defined over K, can be 'pushed' using the field embeddings σ i to induce symplectic forms on the spaces σ i V. We will not be using this fact and we leave the details to the reader.
4. An intrinsic description of the measures arising via Ψ

The following result shows that all RMS measures arise via the map Ψ˚. For a given constant c ą 0, we denote by ρ c : C pR d q Ñ C pR d q the map induced by the dilation by c, that is, ρ c pF q " tcx : x P F u.
Theorem 4.1. Let F be as in (3.1) and embedded in G via the top-left corner embedding. For any ergodic F -invariant Borel probability measure µ on C pR d q which assigns full measure to irreducible cut-and-project sets, there is an irreducible cut-and-project construction with R n " V phys ' V int , π phys , π int , W and with Ψ as in (2.1), a constant c ą 0, and an F -invariant ergodic homogeneous measureμ on Y n , such that µ " ρ c˚Ψ˚μ . For µ-a.e. Λ we have (4.1), where DpΛq is the density of Λ as defined in (1.10).
We will split the proof into the linear and affine case.
Proof of Theorem 4.1, affine case. Suppose µ is ASL d pRq-invariant and F " ASL d pRq, and let tg t u be a one-parameter diagonalizable subgroup of SL d pRq Ă F . By the Mautner phenomenon (see [EW11]), the action of tg t u on`C pR d q, µ˘is ergodic. Thus, by the Birkhoff pointwise ergodic theorem, there is a subset X 0 Ă C pR d q of full µ-measure such that for all Λ P X 0 we have Since the function Λ Þ Ñ DpΛq is measurable and invariant, we can further assume that the value of DpΛq is the same for each Λ P X 0 . Let U, Ω, m U be as in Theorem 2.5. Then by Fubini's theorem, and since µ is U-invariant, we have where 1 X 0 is the indicator function of X 0 . Thus the inner integral on the RHS is equal to one on a subset of full measure; i.e., there is X 1 Ă C pR d q of full measure such that for every Λ P X 1 we have uΛ P X 0 for m U -a.e. u P Ω. This implies that for Λ P X 1 we have Let Λ P X 1 be an irreducible cut-and-project set, that is, Λ " ΨpLq, where L is a grid and Ψ is defined using data d, m, n, V phys , V int , W satisfying (D), (I), (Reg). We can simultaneously rescale L, the window W , and the metric on V phys by the same positive scalar, in order to assume that L P Y n . Namely, set c 1 def " covolpLq´1 n , so that Now solving for c " 1 c 1 in (1.10) gives (4.1). Define a sequence of measures η T on Y n by That is, the measures η T are defined by the same averaging as in (4.2), but for the action on Y n rather than on C pR d q. By (3.2), their pushforward under Ψ are the measures appearing on the LHS of (4.2). By Theorem 2.5 we have η T Ñ T Ñ8μ for some homogeneous measureμ on Y n . By assertion (i) of Theorem 3.1,μ is invariant under translation by any element of R n , and in particular any element of V int . Hence, by Corollary 2.2,μ is a continuity point of the map Ψ˚. By (4.2), Ψ˚η T Ñ µ and by continuity, µ " Ψ˚μ.
For the case in which µ is SL d pRq-invariant but not ASL d pRq-invariant, we will need the following result.

Lemma 4.2. For any v P L 1 tg 1 p0qu, the orbit H 1 v is an open dense subset of R n .

Proof. Write v " g 1 u for u P Z n t0u. It suffices to show that the orbit H 1 u is open and dense in R n . The linear action of H 1 on R n factors through the group H 1 , so we may replace H 1 with H 1 .
The action of SL k pRq on R k has the property that the orbit of every nonzero vector is dense. The same is true for the action of Sp 2k pRq on R 2k (since any vector can be completed to a symplectic basis), for the action of SL k pCq on C k » R 2k , and for the action of Sp 2k pCq on C 2k » R 4k . By Step 3 of the proof of Lemma 3.2, H 1 is the product of groups G i , and we have a direct product R n " ' r`s i"1 V i , with the following properties:
‚ For i " 1, . . . , r we have a real field embedding σ i , and V i " σ i pVq R ; for i " r`1, . . . , r`s we have representatives σ i of pairs of complex embeddings, and V i " σ i pVq C .
‚ For i " 1, . . . , r we have G i " σ i pGq R and for i " r`1, . . . , r`s we have G i " σ i pGq C .
‚ In the SL k -case (resp., the Sp 2k case), V 1 is isomorphic to R k (resp., R 2k ), with the standard action.
‚ The action of G i on V i is obtained from the action of G 1 on V 1 by applying σ i . In particular, for real embeddings it is isomorphic to the standard action of SL k pRq or Sp 2k pRq, and for complex embeddings it is isomorphic to the standard action of SL k pCq or Sp 2k pCq.
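For k = 2 the first claim is in fact transitivity, not just density: SL 2 pRq acts transitively on R 2 t0u. A minimal constructive sketch (the helper name is ours, and it assumes the first coordinates of u and v are nonzero):

```python
import numpy as np

def sl2_taking(u, v):
    # returns g in SL_2(R) with g u = v; assumes u[0] != 0 and v[0] != 0
    A = np.array([[u[0], 0.0], [u[1], 1.0 / u[0]]])  # det A = 1 and A e1 = u
    B = np.array([[v[0], 0.0], [v[1], 1.0 / v[0]]])  # det B = 1 and B e1 = v
    return B @ np.linalg.inv(A)                      # g u = B A^{-1} u = B e1 = v

u, v = np.array([2.0, -1.0]), np.array([0.5, 3.0])
g = sl2_taking(u, v)
assert np.allclose(g @ u, v)
assert np.isclose(np.linalg.det(g), 1.0)
```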
Thus, it is enough to show that for any u P Z n t0u, and for any field embedding σ j of K, the projection u j of u to the factor corresponding to σ j is nonzero.
Suppose to the contrary that u j " 0 for some j, and let a P SL n pRq be a diagonalizable matrix, such that a acts on the ℓ-th factor of R n corresponding to the field embedding σ ℓ as a scalar matrix λ ℓ¨I d, where the λ ℓ are positive real scalars satisfying λ j ą 1, λ i ă 1 for i ‰ j, and ź ℓ λ ℓ " 1.
That is, a belongs to the centralizer of H 1 in SL n pRq, and a i u Ñ iÑ8 0. This implies by Mahler's compactness criterion that the sequence a i Z n is divergent (eventually escapes every compact subset of X n ). In particular, the orbit of the identity coset SL n pZq under the centralizer of H 1 is not compact. From this, via the implication 3 ùñ 2 in [EMS97, Lemma 5.1], we see that H 1 is contained in a proper Q-parabolic subgroup of SL n pRq, and hence (see e.g. [Bor19, §11.14]) leaves invariant a proper Q-subspace of R n . This is a contradiction to (irred).
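The divergence step via Mahler's compactness criterion can be seen in a toy case, with n = 2 in place of the situation above: for a " diagp2, 1{2q, the vectors a i e 2 shrink to 0, so the lattices a i Z 2 eventually leave every compact subset of X 2 . A numerical sketch (the brute-force shortest-vector helper is our own):

```python
import numpy as np
from itertools import product

def shortest_vector_len(basis, box=5):
    # brute-force shortest nonzero lattice vector over small integer combinations
    return min(np.linalg.norm(basis @ np.array(c))
               for c in product(range(-box, box + 1), repeat=2)
               if c != (0, 0))

a = np.diag([2.0, 0.5])  # diagonal, det = 1
lens = [shortest_vector_len(np.linalg.matrix_power(a, i)) for i in range(6)]

# shortest vectors 1, 1/2, 1/4, ... shrink to 0: by Mahler's criterion the
# lattices a^i Z^2 eventually leave every compact subset of X_2
assert lens[0] == 1.0
assert all(l2 < l1 for l1, l2 in zip(lens[1:], lens[2:])) and lens[-1] < 0.1
```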
Proof of Theorem 4.1, linear case. We repeat the argument given for the affine case. The only complication is in establishing η T Ñμ implies Ψ˚η T Ñ Ψ˚μ, as in the last paragraph of the proof. In the proof for the affine case, this was obtained from Corollary 2.2, which shows thatμ is a continuity point for the map Ψ˚, using the fact thatμ is invariant under translations by elements of V int . In the linear situationμ no longer has this continuity property.
To overcome this difficulty we argue as follows. We note that ifμ ptL P Y n : π int pLq X BW ‰ ∅uq " 0 (4.3) then Corollary 2.2 can still be applied to show thatμ is a continuity point for Ψ˚. Thus, we can assume from now on that (4.3) fails. Since suppμ " HL 1 , this implies that the Haar measure m H of H satisfies m H pth P H : π int phL 1 q X BW ‰ ∅uq ą 0.
(4.4) Since L 1 is countable, there must be some v P L 1 such that m H pth P H : π int phvq P BW uq ą 0. (4.5) By Lemma 4.2, there is a unique element v 1 P R n which is fixed by H (namely v 1 " g 1 p0q), and for any v ‰ v 1 , the orbit of v under the action of H is an open dense subset of R n . In particular, if v ‰ v 1 then the map h Þ Ñ hv sends m H to an absolutely continuous measure on R n , and for such v (4.5) cannot hold by (Reg). Thus, we must have v " v 1 . In this case hv " v and π int phvq P BW for all h P H. By examining the proof of Proposition 2.1, we see that the map H Ñ C pR d q, h Þ Ñ ΨphL 1 q is still continuous at any point outside a set of zero measure; namely, the set of h for which there is v ‰ v 1 such that π int phvq P BW . Furthermore, the measureμ and the measures η T are all supported on the orbit HL 1 .
Thus, we can apply the argument proving Corollary 2.2, to see that the restriction of Ψ˚to measures supported on the orbit HL 1 is continuous. This is sufficient to conclude that Ψ˚η T Ñ Ψ˚μ as T Ñ 8.
Remark 4.3. Theorem 4.1 remains valid when one considers other topologies (and potentially, Borel structures) on C pR d q, as is done for example in [Vee98, MS19]. Thus, in the terminology of [Vee98], the theorem is valid ifμ is a Siegel measure giving full measure to cut-and-project sets. Indeed, the only properties of the topology on C pR d q used in the proof are the validity of Corollary 2.2 (in the affine case) and Proposition 2.1, and the arguments deriving Corollary 2.2 (in the linear case). These topological ingredients are easily seen to hold for the vague topology used in [Vee98] and [MS19]. For example, for the analogue of Proposition 2.1, see [MS19, Lemma 5.14].
5. Some consequences of the classification
With Theorem 3.1 in hand it is easy to obtain explicit descriptions of RMS measures in low dimensions. Recall that we refer to the unique ASL n pRq-invariant probability measure on Y n and the unique SL n pRq-invariant probability measure on X n as the Haar-Siegel measures.
Corollary 5.1. With the notation above, suppose that dim V phys ą dim V int . Then the only affine RMS measure is the one for whichμ is the Haar-Siegel measure on Y n , and the only linear RMS measure is the one for whichμ is the Haar-Siegel measure on X n .
This reproves a result stated without proof in [MS14, Prop. 2.1].
Proof. In our classification result, there is k P td, . . . , nu and D " degpK{Qq such that n " Dk in the SL k -case and n " 2Dk in the Sp 2k -case. Since dim V phys ą dim V int and d ď k, we have k ě d ą n´d ě n´k, (5.1) and hence we obtain k ą pD´1qk in the SL k -case and k ą p2D´1qk in the Sp 2k -case. This is only possible if D " 1 and we are in the SL k -case. That is, the only possible case is H 1 " SL n pRq, and this gives the required result.
We extend Corollary 5.1 to the case of equality: Corollary 5.2. With the above notation, suppose that µ is not one of the Haar-Siegel measures mentioned in Corollary 5.1, and suppose dim V phys " dim V int . Then either d " 2 and H 1 " Sp 4 pRq, or d ě 2 and there is a real quadratic field K such that H 1 is (the group of real points of ) Res K{Q pSL d q.
Proof. If the strict inequality in (5.1) becomes non-strict, it is also possible that H 1 " Res K{Q pSL d q and K is a real quadratic field, or K " Q, d " 2 and H 1 " Sp 4 pRq.
As shown by Pleasants [Ple03], an example of a cut-and-project set associated with a real quadratic field as in Corollary 5.2 is the vertex set of an Ammann-Beenker tiling, where in this case the associated field is K " Qp√2q. Similarly, as discussed in [MS14, §2.2], the Penrose tiling vertex set can be described as a finite union of cut-and-project sets associated with the real quadratic field Qp√5q.
We record the following trivial but useful fact.
Proposition 5.3. For any affine RMS measure µ, one can assume the window W contains the origin in its interior.
Proof. Let W be the window in the construction of the RMS measure µ. By (Reg), let x 0 P V int be a point in the interior of W . By assertion (i) of Theorem 3.1, the measureμ is invariant under translations by the full group R n of translations, and in particular by the translation by x 0 . So we can replace any L P Y n by L´x 0 without affecting the measureμ. But clearly for x 0 P V int we have ΛpL, W q " ΛpL´x 0 , W´x 0 q.
So the measure µ can be obtained fromμ by using the window W´x 0 , which contains the origin in its interior.
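The identity ΛpL, W q " ΛpL´x 0 , W´x 0 q used above can be checked on a toy one-dimensional cut-and-project scheme. All parameters in the sketch below are hypothetical choices of ours: the physical and internal spaces are the two coordinate axes, the lattice is tpm`n√2, m´n√2qu, and the window is an interval:

```python
import numpy as np

r2 = np.sqrt(2.0)

def cut_and_project(shift_int, w_lo, w_hi, N=30):
    # grid points (m + n*sqrt(2), m - n*sqrt(2)); keep the physical coordinate
    # whenever the internal coordinate (shifted by shift_int) lies in [w_lo, w_hi)
    pts = []
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            phys = m + n * r2
            internal = m - n * r2 + shift_int
            if w_lo <= internal < w_hi:
                pts.append(round(phys, 9))
    return sorted(pts)

t = 0.3
# translating the grid by x0 = (0, -t) and the window by -t gives the same point set
assert cut_and_project(0.0, 0.0, 1.0) == cut_and_project(-t, -t, 1.0 - t)
```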
Recall that we have an inclusion ι : SL n pRq Ñ ASL n pRq, ιpgq " pg, 0 n q, i.e., ιpSL n pRqq is the stabilizer of the origin in the affine action of ASL n pRq on R n . This induces an inclusionῑ : X n Ñ Y n , and these maps form right inverses to the maps appearing in (3.3): π˝ι " Id SLnpRq , π˝ῑ " Id Xn .
In the linear case, we can use these maps to understand the measuresμ on Y n appearing in Theorem 3.1 in terms of measures on X n . Namely we have: Proposition 5.4. Let F " SL d pRq, embedded in ASL n pRq via (2.3), and letμ be a measure on Y n projecting to a linear RMS measure on C pR d q; i.e.,μ is F -invariant and ergodic, and not invariant under ASL d pRq. Let H, L 1 be as in Theorem 3.1. Let F def " πpF q. Then one of the following holds: (i) We have suppμ ĂῑpX n q and π| suppμ is a homeomorphism which mapsμ to an F -invariant ergodic measure on X n . In this case H is contained in G def " ιpSL n pRqq, i.e., H " ι˝πpHq. (ii) We haveμpῑpX n qq " 0, and there are D 1 , D 2 P N such that π| suppμ is a closed map of degree D 1 , and for every L P suppμ there is a lattice L 1 P X n , depending only on πpLq, such that L 1 contains πpLq with index rL 1 : πpLqs " D 2 , and such that L is a translate of πpLq by an element of L 1 .
Proof. The set of latticesῑpX n q Ă Y n is clearly F -invariant, so by ergodicity is either null or conull for the measureμ. If it is conull then ιpX n q is a closed subset of full measure, i.e., suppμ ĂῑpX n q. Sincē ι is a right inverse for π we have that π| suppμ is a homeomorphism. Furthermore, since we have a containment of orbits HL 1 " suppμ ĂῑpX n q " GZ n " GL 1 , and the groups H, G are connected analytic submanifolds of G, we have a containment of groups H Ă G. This proves (i). Now supposeμ pῑpX n qq " 0, and let H, L 1 be as in the statement of Theorem 3.1, so that suppμ " HL 1 . Let T n def " π´1pπpL 1 qq be the orbit of L 1 under translations. Since we are in the linear case, H is transverse to the group of translations R n which moves along the fibers of π, and since HL 1 does not accumulate on itself and T n is compact, the intersection Ω def " T n X HL 1 is a finite set. Then by (3.4), for any L " hL 1 P suppμ we have hΩ " π´1pπpLqq X HL 1 , and thus the map π| suppμ has fibers of a constant cardinality D 1 def " |Ω|. Now denote Γ 1 def " th P H : hL 1 " L 1 u, Γ 2 def " th P H : hΩ " Ωu.
By equivariance we have Γ 1 Ă Γ 2 and the index of the inclusion is D 1 since Γ 2 acts transitively on Ω. The bijection R n {πpL 1 q Ñ T n , x mod πpL 1 q Þ Ñ x`L 1 endows T n with the structure of a real torus, whose identity element corresponds to L 1 . In these coordinates Γ 2 acts by affine maps of T n but Γ 1 acts by toral automorphisms, since it preserves L 1 . Thus, Ω is a finite invariant set for the action of an irreducible lattice in a group acting L 1 -irreducibly on R n , and thus by [GS04] consists of torsion points in T n . That is, there is q P N so that they belong to the image of 1 q¨L 1 in T n . By equivariance the same statement holds, with the same q, for hL 1 in place of L 1 . Thus, the second assertion holds if we let L 1 " 1 q¨L , D 2 " q n .

Example 5.5. It is possible that in case (ii) we have suppμ XῑpX n q ‰ ∅. For example, take n " 3, d " 2, let f be the translation f pxq def " x`1 2 e 3 , where e 3 is the unit vector in the third axis. Let H be the conjugate of SL 3 pRq by f and let L 1 " f pZ 3 q. Then F Ă H and HL 1 is a closed homogeneous orbit. Since L 1 RῑpX 3 q, the corresponding homogeneous measure does not satisfy (i). But one can check that the lattice span Z pe 1 , 2e 2 , 1 2 e 3 q belongs to HL 1 , that is, HL 1 XῑpX 3 q ‰ ∅.
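The elementary part of the verification at the end of Example 5.5 can be done numerically: the basis pe 1 , 2e 2 , 1 2 e 3 q has determinant one, and the translation vector 1 2 e 3 lies in the lattice it spans, so the corresponding grid gZ 3 ` 1 2 e 3 is an honest lattice (the remaining membership in HL 1 follows as in the example). A minimal sketch:

```python
import numpy as np

# columns: the basis (e1, 2 e2, e3 / 2) of L' = span_Z(e1, 2 e2, e3 / 2)
g = np.diag([1.0, 2.0, 0.5])
assert np.isclose(np.linalg.det(g), 1.0)   # g is in SL_3(R), so L' is unimodular

# the translation vector e3/2 has integer coordinates in this basis, so it lies
# in L'; hence the grid g Z^3 + e3/2 coincides with the lattice L' itself
coords = np.linalg.solve(g, np.array([0.0, 0.0, 0.5]))
assert np.allclose(coords, np.round(coords))
```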
6. Integrability of the Siegel-Veech transform
In this section we prove Theorem 1.2. Let µ be an RMS measure and letμ, H 1 , L 1 " g 1 Z n be as in Theorem 3.1. Recall that the function f defined in (1.3) is defined on supp µ. Also let π : ASL n pRq Ñ SL n pRq, π : Y n Ñ X n , H 1 " πpH 1 q be as in §3.2. Let Γ 1 def " H 1 X ASL n pZq, Γ 1 def " H 1 X SL n pZq be the Z-points of H 1 and H 1 . We will use the results of §3.2 to lift f to a function on X 1 , and show that it is dominated by the pullback of a function on X 1 . For the arithmetic homogeneous space X 1 we will develop the analogue of the Siegel summation formula and its properties. Specifically, we will describe a Siegel set S Ă H 1 , which is an easily described subset projecting onto X 1 , and estimate the rate of decay of the Haar measure of the subset of S covering the 'thin part' of X 1 .

6.1. Reduction theory for some arithmetic homogeneous spaces. We begin our discussion of Siegel sets. For more details on the terminology and statements given below, see [Bor19].
Let H be a semisimple Q-algebraic group, let P be a minimal Qparabolic subgroup, and let H " H R . Then P " P R has a decomposition P " MAN (almost direct product), where: ‚ A is the group of R-points of a maximal Q-split torus A of P; ‚ N is the unipotent radical of P ; ‚ and M is the connected component of the identity in the group of R-points of M, a maximal Q-anisotropic Q-subgroup of the centralizer of A in P. Furthermore, H " KP for a maximal compact subgroup K of H.
As in §2.4, we think of H as concretely embedded in SL n 0 pRq for some n 0 P N, where we take this embedding to be defined over Q for the standard Q-structure on SL n 0 pRq. Let a and n denote respectively the Lie algebras of A and N, let Φ Ă a˚denote the Q-roots of H and choose an order on Φ for which n is generated by the positive root-spaces.
Every element of H can be written in the form h " kman pk P K, m P M, a P A, n P Nq, (6.1) and one can express the Haar volume element dh of H in these coordinates in the form dh " dk dm dn ρ 0 paqda, (6.2) where dk, dm, dn, da denote respectively the volume elements corresponding to the Haar measures on the (unimodular) groups K, M, N, A, and ρ 0 paq " |det pAdpaq| n q| " expp2ρpXqq, (6.3) where a " exppXq and ρ is the character on a given by ρ " 1 2 ř αPΦ` c α α, for Φ` the positive roots in Φ, and c α " dim h α . We note that this formula for Haar measure is well-defined despite the fact that the decomposition (6.1) is not unique. Let ∆ Ă Φ` be a basis of simple Q-roots. For fixed t P R, let A t def " texppXq : X P a, @χ P ∆, χpXq ď tu (6.4) and for a compact neighborhood of the identity ω Ă MN, let S t,ω def " KA t ω. These sets are referred to as Siegel sets, and by a fundamental result, a finite union of translates of Siegel sets contains a fundamental domain for the action of an arithmetic group; that is, there is a finite subset F 0 Ă H Q and there are t, ω such that S t,ω F 0 projects onto H{Γ H , where Γ H " H Z ; equivalently H " S t,ω F 0 Γ H . The sets S t,ω F 0 do not represent Γ H -cosets uniquely, in fact the map S t,ω F 0 Ñ H{Γ H is far from being injective. Nevertheless the formulas (6.1) and (6.3) make it possible to make explicit computations with the restriction of Haar measure to S t,ω F 0 , and in particular to show that Siegel sets have finite Haar measure. An important observation is that the set Ť aPAt aωa´1 is bounded, because of the definition of M and N and because of the compactness of ω. This means that a Siegel set is contained in a set of the form ω 1 A t , where ω 1 is a bounded subset of H.
6.2. The integrability exponent of an auxiliary function on X n . We will specialize the discussion in §6.1 to the specific choices of H{Γ H that arise in our application. Let H be as above, let S t,ω be a Siegel set and let F 0 Ă H Q be a finite subset for which S t,ω F 0 Γ H " H. Given functions ϕ 1 , ϕ 2 defined on H, we will write ϕ 1 ! ϕ 2 if there is a constant c such that for all x P S t,ω F 0 we have ϕ 1 pxq ď cϕ 2 pxq. The constant c is called the implicit constant. We will also write ϕ 1 -ϕ 2 if ϕ 1 ! ϕ 2 and ϕ 2 ! ϕ 1 . In general these relations on functions depend on the choice of Siegel set (i.e., the choice of t) and the choice of the finite set F 0 , but in the case we will be interested in, when ϕ 1 , ϕ 2 are actually lifts of functions defined on H{Γ H , this notion does not depend on choices.
We now define an auxiliary function, and compute its integrability exponent. Given a nonzero discrete subgroup L 1 Ă R n (not necessarily of rank n), we denote by covolpL 1 q the volume of a fundamental domain for L 1 in span R pL 1 q (with respect to Lebesgue measure on span R pL 1 q, normalized using the standard inner product on R n ). For g P SL n pRq and L " gZ n P X n , define αpLq def " sup tcovolpL 1 q´1 : L 1 a nonzero discrete subgroup of Lu andαpgq def " αpgZ n q. (6.5) Recall that X 1 " H 1 {Γ 1 is embedded in X n as the closed orbit X 1 " H 1 Z n , and so we can consider the restrictions of α andα to X 1 and to H 1 .
Proof. Let λ i " λ i pLq, i " 1, . . . , n be the successive minima of a lattice L, and let i 0 " i 0 pLq be the index for which λ i 0 pLq ď 1 ă λ i 0`1 pLq. Then it is easy to see using Minkowski's second theorem (see e.g. [Cas97, §VIII.2]) that (as functions on X n ), As a consequence, for any C Ă SL n pRq bounded, we have @u P C, αpuLq -αpLq (with the implicit constant depending on C). Let T denote the diagonal subgroup of SL n pRq, let T " TR and let t be the Lie algebra of T . In what follows we will replace T by its conjugate over SL n pQq, where the conjugate will be conveniently chosen with respect to H 1 and its subgroups. The reader should note that the statements to follow about T are not affected by such conjugations in SL n pQq.
It is easy to check that for the lattice Z n and for a " exppdiagpX 1 , . . . , X n qq P T , we have λ i paZ n q " e X jpiq where i Þ Ñ jpiq is a permutation giving X jp1q ď X jp2q 﨨¨ď X jpnq , and hencê αpaq " αpaZ n q -exp˜´ÿ (6.8) Furthermore, for an element f 0 P SL n pQq we have that λ i paf 0 Z n qe X jpiq , where implicit constants depend on f 0 , and thusαpaq -αpaf 0 q. Recall the notation D " degpK{Qq from Theorem 3.1. We first prove the proposition under the assumption D " 1. That is, we have K " Q, H 1 " SL k pRq and n " k in case G -SL k , and n " 2k, H 1 " Sp 2k pRq in case G -Sp 2k . Now consider a Siegel set for H " H 1 , and suppose A t is the corresponding subset of the maximal Q-split torus of H 1 . Since T is a maximal Q-split torus of SL n pRq, by [Bor91,Thm. 15.14], applying a conjugation in SL n pQq we can assume that A Ă T and the order on the roots Φ is consistent with the standard order on the group of characters on t; that is, A t Ă T t 1 for some t 1 , as can be observed by an elementary computation (see [Bor19,Ex. 11.15] for a description of A in the symplectic case). In particular, for a " exppdiagpX j qq P A t we have exppX j q ! exppX j`1 q for j " 1, . . . , n´1. Then from (6.8), for a P A t and f 0 P F 0 , where F 0 is a finite subset of pH 1 q Q , we haveα Since a Siegel set S t,ω is contained in a set of the form ω 1 A t , where ω 1 is a compact subset of H, this implies that We will first show the following: (i) For any j, and any X P a for which exppXq P A t , we have p2ρ´r 0 β j q pXq ! 1. (ii) The number r 0 is the largest number for which the conclusion of (i) remains valid.
For ℓ " 1, . . . , n´1 let χ ℓ denote the simple roots on t, that is, In order to show (i), since the χ ℓ are bounded above on A t , it suffices to show that if we write 2ρ " ř a ℓ χ ℓ and β j " ř b pjq ℓ χ ℓ , then r 0 b pjq ℓ ď a ℓ . In order to show (ii) it suffices to check that there are some j, ℓ for which equality holds, i.e., r 0 b pjq ℓ " a ℓ . This can be checked using the tables of [Bou02, pp. 265-270, Plates I & III] (note that the restrictions of the β j to A are the fundamental weights in both cases). Namely, for G " SL k we have ℓpk´jq if ℓ ă j jpk´ℓq if ℓ ě j and we have the desired inequality, with equality when ℓ " j. If G " Sp 2k we have and again the inequality holds, with equality when ℓ " j " k. Now to see that α P L p`µ˘, since a Siegel set is contained in ω 1 A t with ω 1 bounded, and by (6.2), it suffices to prove that for f 0 P F 0 we have ş Atα p paf 0 qρ 0 paqda ă 8. Using the preceding discussion, if we let a t denote the cone in a with A t " exppa t q (where A t is as in (6.4)), and use that da is the pushforward under the exponential map of dX, we have ż where the integral is finite as the integrand is the exponential of a linear functional which is strictly decreasing along the cone a t . The same computation and (ii) show that we have a corresponding lower bound ş Atα r 0 paf 0 qρ 0 paqda " ş at exp pτ pXqq dX, where τ is a linear functional which is constant along a face of a t . We have shown (6.6) for D " 1. Now suppose D ą 1. Our strategy will be to show that we can repeat the computations used for the case D " 1, with the only difference being that in some of the formulas, the characters ρ and β j are multiplied by a factor of D. Write G 1 def " σ 1 G R , let V be as in the statement of Theorem 3.1, a K-subspace of R n . Let if G -Sp 2k , (6.12) so that dim V " t. Let A 1 denote a maximal K-split torus in G, and let a 1 denote its Lie algebra. 
Then, with respect to a suitable basis of V K , we can write elements of a 1 as matrices diagpX 1 , . . . , X t q, where ř X i " 0 when G -SL k and X i`k "´X i when G -Sp 2k . Let B def " Res K{Q pA 1 q, and let A denote a maximal Q-split torus in H 1 . The dimension of A 1 is the number of independent one-parameter multiplicative K-subgroups (morphisms KˆÑ A 1 ), and, applying restriction of scalars, each such one-parameter group gives rise to a oneparameter Q-subgroup QˆÑ B. This implies that B contains a Q-split torus of dimension equal to dim A 1 . Since the Q-rank of H is the same as the K-rank of G, see [BT65, 6.21 (i)], the dimensions of these groups coincide. Since all maximal Q-split tori in H are conjugate over H Q , we can assume that A Ă B, and by conjugating SL n pRq by an element of SL n pQq, we can also assume that A Ă T and the order on the roots Φ is consistent with the order on the roots of t. We claim that after these conjugations, the elements of A " AR are of the form diag¨X jp1q , . . . , X jp1q loooooooomoooooooon D times , . . . , X jptq , . . . , X jptq looooooomooooooon D times‚ , (6.13) where diagpX 1 , . . . , X t q ranges over the elements of a 1 in the abovechosen basis, and i Þ Ñ jpiq is a permutation guaranteeing exp`X jp1q˘!¨¨! exp`X jptq˘. We first assume the validity of (6.13), and conclude the proof of the case D ą 1. We will use (6.13) to compare characters on A 1 with characters on A. First, comparing the character ρ appearing in (6.3) for the two groups H 1 , G 1 , we see that each real field embedding σ i , i ď r contributes one dimension to the dimension of a root space, and each pair σ i ,σ i , i ą r of conjugate non-real embedding contributes two dimensions. Alternatively: in G 1 the root spaces are one dimensional and defined over K, since G 1 is K-split. The root spaces in H 1 are obtained from the root spaces in G 1 by applying the restriction of scalars operation to each one individually. 
This implies that the character ρ for H 1 is obtained from the corresponding character for G 1 by a multiplication by D. Similarly, it is clear from (6.13) that the characters β j appearing in (6.10) for H 1 are obtained from the same characters β j for G 1 , multiplied by D. Thus, the computations guaranteeing (6.6) for D " 1, imply the same property for general D.
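Returning to the D = 1 computation, the inequality between the coefficients a ℓ of 2ρ and the coefficients b pjq ℓ attached to β j in the SL k case, with equality exactly at ℓ " j, is a purely combinatorial statement; it is checked here for small k (the normalizing factor relating b pjq ℓ to the fundamental weights is taken as given):

```python
# coefficients of 2*rho and of (k times) the fundamental weight beta_j for SL_k,
# written in the basis of simple roots chi_l; check b <= a, equality iff l = j
for k in range(3, 9):
    a = {l: l * (k - l) for l in range(1, k)}
    for j in range(1, k):
        b = {l: (l * (k - j) if l <= j else j * (k - l)) for l in range(1, k)}
        assert all(b[l] <= a[l] for l in range(1, k))
        assert [l for l in range(1, k) if b[l] == a[l]] == [j]
```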
It remains to prove (6.13). Recall that B " Res K{Q pA 1 q, which we wish to describe explicitly using the discussion in §2.4. For y P K t we define a 1 p yq def " diagpy 1 , . . . , y t q P A 1 pKq; that is, these are matrices acting on V which are diagonal with respect to a K-basis of V , and the y i satisfy y 1`¨¨¨`yt " 0 for G -SL k and y i "´y 2k´i`1 for G -Sp 2k . Each y P K has a representative which is a matrix in Mat DˆD pQq. If we take y P Q then the corresponding representative matrix is the scalar matrix y¨Id D . The elements of B can be considered as tˆt matrices, whose entries are elements of Mat DˆD . In particular, for y P Q t , we get matrices a 2 p yq P Mat nˆn pQq, which are simultaneously diagonalizable, with each y i appearing as an eigenvalue D times. That is, up to permuting the coordinates, the matrices a 2 p yq are as in (6.13), with X i P Q. The map a 1 p yq Þ Ñ a 2 p yq is a polynomially defined group homomorphism. Letting A 2 denote the Zariski closure of ta 2 p yq : y P Q t , a 1 p yq P A 1 u, we see that A 2 is a torus in B whose group of real points pA 2 q R satisfies the description (6.13), and with dim A 2 " dim A 1 " dim A. Also, A 2 is Q-split since the maps a 2 p yq Þ Ñ y i are Q-characters. Thus, A 2 is a maximal Q-split torus of H, and by the uniqueness of the maximal Q-split torus in the torus B (see [Bor19, Prop. 10.6]), we must have A " A 2 . (See also the related discussion in [PR94, Example, p. 54], giving an explicit description of a maximal Q-anisotropic torus in B as a product of norm-tori.)

6.3. An upper bound for the Siegel transform. We will now state and prove a result implying Theorem 1.2. For a function F on R n , a measureμ on Y n , and L P Y n , in analogy with (1.3) we denote p F pLq " ř xPL t0u F pxq ifμ is linear, and p F pLq " ř xPL F pxq ifμ is affine. (6.14)

Theorem 6.2. Letμ be the H-homogeneous measure on Y n as in Theorem 3.1, and let q " qμ be as in (1.5). Then for any F P C c pR n q and any p ă q we have p F P L p pμq.
Moreover, there are $F \in C_c(\mathbb{R}^n)$ for which $\widehat{F} \notin L^q(\tilde\mu)$.
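Definition (6.14) is easy to experiment with numerically. The following sketch (illustrative code, not from the paper; the truncation parameter `coeff_range` is an ad hoc assumption) evaluates the linear-case sum for a lattice given by a basis matrix, for an $F$ supported in a small ball:

```python
import itertools
import math

def siegel_transform(basis, F, coeff_range=5):
    """Truncated Siegel transform: sum F over the nonzero points of the
    lattice spanned by the columns of `basis` (linear case of (6.14)).
    Only coefficient vectors with entries in [-coeff_range, coeff_range]
    are enumerated, so F must be supported in a correspondingly small ball."""
    n = len(basis)
    total = 0.0
    for coeffs in itertools.product(range(-coeff_range, coeff_range + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # the linear case omits the origin
        # lattice point x = basis @ coeffs
        x = [sum(basis[i][j] * coeffs[j] for j in range(n)) for i in range(n)]
        total += F(x)
    return total

# F = indicator of the Euclidean ball of radius 1.5; L = Z^2.
F = lambda x: 1.0 if math.hypot(*x) <= 1.5 else 0.0
print(siegel_transform([[1, 0], [0, 1]], F))  # counts (+-1,0), (0,+-1), (+-1,+-1)
```

For the standard lattice $\mathbb{Z}^2$ the call counts the eight nonzero integer points of norm at most $1.5$, illustrating that $\widehat{F}(L)$ is a finite sum whenever $F$ has compact support and $L$ is discrete.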
We will prove Theorem 6.2 separately in the linear and affine cases. In the linear case, we will first show, using Proposition 5.4, that the Siegel-Veech transform (6.14) can be bounded in terms of a Siegel transform of a function on $X_n$. The latter can be bounded in terms of the function $\alpha$ considered in §6.2.
Proof of Theorem 6.2, linear case. Suppose that $\tilde\mu$ satisfies (i) of Proposition 5.4, i.e., $\tilde\mu$ is supported on $\iota(X_n)$. Then we can assume that the cut-and-project scheme involves lattices in $X_n$, rather than grids. Moreover $\bar H = \iota \circ \pi(H)$, $\bar g_1 = g_1$, $\bar H_1 = \iota \circ \pi(H_1)$, and the function $\widehat{F}$ is a Siegel-Veech transform of a Riemann integrable function on $\mathbb{R}^n$, for a homogeneous subspace of $X_n$. It is known that the function $\alpha$ defined in (6.5) describes the growth rate of the Siegel transforms of functions on $X_n$. Namely (see [EMM98, Lemma 3.1] or [KSW17, Lemma 5.1]), for any Riemann integrable function $F$ on $\mathbb{R}^n$ and any $L \in X_n$, $\widehat{F}(L) \ll \alpha(L)$. Furthermore, if $F$ is the indicator of a ball around the origin then $\widehat{F}(L) \gg \alpha(L)$. Thus, the conclusion of Theorem 6.2 in this case follows from Proposition 6.1. Now assume that case (ii) of Proposition 5.4 holds. We cannot use Proposition 6.1 directly, since $\widehat{F}$ is a function on $Y_n$. To remedy this, we define for each $L \in H L_1$ the lattice $L' = L'(\pi(L))$ appearing in assertion (ii) of Proposition 5.4. Then the bounds given in Proposition 5.4 imply that $\widehat{F}(L) \ll \widehat{F}'(\pi(L))$, with a reverse inequality $\widehat{F}'(\pi(L)) \ll \widehat{F}(L)$ for positive $F$. Since $\widehat{F}'$ is the Siegel-Veech transform of a function on $\mathbb{R}^n$ with respect to a measure on $X_n$, we can apply Proposition 6.1 to conclude the proof in this case as well.
For the affine case, we will need the following additional interpretation of the function $\alpha$ defined in (6.5).
Proposition 6.3. Let $L \in X_n$, and let $\mathbb{T}^n_L = \mathbb{T}^n = \pi^{-1}(L) \cong \mathbb{R}^n / L$ be the quotient torus, equipped with its invariant measure element $dL$. Then for any ball $B \subset \mathbb{R}^n$ and any $p > 1$ we have (6.15), where the implicit constants depend on the dimension $n$, on $p$, and on the radius of $B$.
Proof. Let $\lambda_1, \ldots, \lambda_n$ be the Minkowski successive minima of $L$. Using Korkine-Zolotarev reduction, let $v_1, \ldots, v_n$ be a basis for $L$ satisfying $\|v_i\| \asymp \lambda_i$ (where implicit constants are allowed to depend on the dimension $n$), and let $u_i \overset{\mathrm{def}}{=} \frac{v_i}{\|v_i\|}$. For a vector $s$ of positive numbers $s_1, \ldots, s_n$, let $P_s$ denote the corresponding parallelepiped spanned by the vectors $s_i u_i$. Setting $v_0 = (\|v_1\|, \ldots, \|v_n\|)$, we have that $P_{v_0}$ is a fundamental parallelepiped for $L$, and we can identify $\mathbb{T}^n$ with this parallelepiped via a bijection which sends the Lebesgue measure on $P_{v_0}$ to the Haar measure $d\mathrm{vol}$ on $\mathbb{T}^n$. Now set $P_r \overset{\mathrm{def}}{=} P_{\bar r}$ where $\bar r = (r, \ldots, r)$. We can translate $B$ so that it is centered at the origin without affecting the integral in (6.15), and since there is a lower bound on the angles between the $v_i$, there are $r_1 \asymp r_2$ such that $P_{r_1} \subset B \subset P_{r_2}$. Thus, we can replace $B$ with $P_R$. Writing each vector $y \in \mathbb{R}^n$ in the form $y = \sum_i c_i u_i$, and reducing each $c_i$ modulo $\|v_i\| \cdot \mathbb{Z}$, it is easy to verify for which $x \in P_{v_0}$ we have $P_R \cap L_x = \varnothing$, and to estimate the contribution of the remaining $x$ directly; this yields (6.15).

Proof of Theorem 6.2, affine case. By decomposing $F$ into its positive and negative parts, we see that it suffices to prove $\widehat{F} \in L^p(\tilde\mu)$ when $F$ is the indicator of a ball in $\mathbb{R}^n$. By Theorem 3.1 we have that in the affine case, the translation group $\mathbb{R}^n$ is contained in $H_1$, which implies that we can decompose the measure $\tilde\mu$ as an integral of translated measures. Now the statement follows from Propositions 6.1 and 6.3. The case of equality $p = q_{\tilde\mu}$ follows similarly, taking for $F$ the indicator of a ball in $\mathbb{R}^n$.
Proof of Theorem 1.2. Let $f \in C_c(\mathbb{R}^d)$ and let $\hat f$ be as in (1.3). Let $\mu$ be an RMS measure on $\mathscr{C}(\mathbb{R}^d)$ associated with a cut-and-project scheme involving grids in $Y_n$, a decomposition $\mathbb{R}^n = V_{\mathrm{phys}} \oplus V_{\mathrm{int}}$, and a window $W \subset V_{\mathrm{int}}$. Let $1_W$ be the indicator function of $W$ and let $\tilde\mu$ be an $H$-homogeneous measure, supported on the orbit $H L_1 \subset Y_n$, such that $\mu = \Psi_* \tilde\mu$ (where we have replaced $\mu$ by its image under a rescaling map to simplify notation). Define
$$F : \mathbb{R}^n \to \mathbb{R}, \qquad F(x) = 1_W(\pi_{\mathrm{int}}(x)) \cdot f(\pi_{\mathrm{phys}}(x)), \qquad (6.16)$$
and define $\widehat{F}$ via (6.14). Then it is clear from the definition of $\Psi$ and (1.3) that $\hat f(\Psi(L)) = \widehat{F}(L)$ provided $L$ satisfies (I), and, in the linear case, provided all nonzero vectors of $L$ project to nonzero vectors in $V_{\mathrm{phys}}$; the last assumption is equivalent to requiring that $L$ does not belong to the set $\mathcal{N}$ of grids containing a nonzero vector of $V_{\mathrm{int}}$. The condition that $L$ satisfies (I) is valid for $\tilde\mu$-a.e. $L$ by definition of an RMS measure. We claim further that in the linear case $\tilde\mu(\mathcal{N}) = 0$. Indeed, since $\tilde\mu$ is induced by the Haar measure of $H$, otherwise we would have some fixed $v \in L_1 \smallsetminus \{0\}$ such that $H_{\mathcal{N}, v} \overset{\mathrm{def}}{=} \{h \in H : hv \in V_{\mathrm{int}}\}$ has positive Haar measure. Recall that for analytic varieties $V_1, V_2$, with $V_1$ connected, if $V_1 \cap V_2$ has positive measure with respect to the smooth measure on $V_1$, then $V_1 \subset V_2$. Since $H_{\mathcal{N}, v}$ is an analytic subvariety of $H$, if it has positive measure with respect to the Haar measure on $H$, it must coincide with $H$. This contradicts Lemma 4.2. This contradiction shows that $\tilde\mu$-almost surely we have $\hat f \circ \Psi = \widehat{F}$. Since $\mu = \Psi_* \tilde\mu$, the first assertion, that $\hat f \in L^p(\mu)$ for $p < q_\mu$, now follows from the first assertion of Theorem 6.2.
For the second assertion, let $f$ be a nonnegative continuous function whose support contains a ball around the origin. Since we have assumed that $W$ contains a ball around the origin in $V_{\mathrm{int}}$, the support of the function $F$ also contains a ball around the origin in $\mathbb{R}^n$, so $\hat f$ is bounded below by the Siegel-Veech transform of the indicator of a ball in $\mathbb{R}^n$, and we have that such functions do not belong to $L^{q_\mu}(\tilde\mu)$.
7. Integral formulas for the Siegel-Veech transform
In this section we will prove Theorem 1.3. We begin with its special case $p = 1$, i.e., with a derivation of (1.4). This will illustrate the method of Weil [Wei82] which we will use. Note that (1.4) was first proved by Marklof and Strömbergsson in [MS14], following an argument of Veech [Vee98]. Their argument does not rely on an integrability bound such as our Theorem 1.2, and instead uses the result of Shah [Sha96], Theorem 2.5.

7.1. A derivation of a 'Siegel summation formula'. Given $f \in C_c(\mathbb{R}^d)$, define $F$ via (6.16), and define $\widehat{F}(L)$ via (6.14). We can bound $F$ pointwise from above by a compactly supported continuous function on $\mathbb{R}^n$, and hence, by Theorem 6.2, $\widehat{F} \in L^1(\tilde\mu)$. Therefore $f \mapsto \int \widehat{F} \, d\tilde\mu$ is a positive linear functional on $C_c(\mathbb{R}^d)$. By the Riesz representation theorem, there is some Radon Borel measure $\nu$ on $\mathbb{R}^d$ representing this functional, with
$$\nu = \begin{cases} c_1 \mathrm{vol}, & \tilde\mu \text{ is affine} \\ c_1 \mathrm{vol} + c_2 \delta_0, & \tilde\mu \text{ is linear.} \end{cases} \qquad (7.1)$$
As we have seen in the proof of Theorem 1.2, $\widehat{F} = \hat f \circ \Psi$ holds $\tilde\mu$-a.e. Since $\mu = \Psi_* \tilde\mu$, this implies that $\int \widehat{F} \, d\tilde\mu = \int \hat f \, d\mu$. In combination with (7.1), this establishes (1.4) in the affine case, and gives $\int \hat f \, d\mu = c_1 \int_{\mathbb{R}^d} f \, d\mathrm{vol} + c_2 f(0)$ in the linear case. It remains to show that $c_2 = 0$. Let $B_r = B(0, r)$ be the ball in $\mathbb{R}^d$ centered at the origin, let $f \in C_c(\mathbb{R}^d)$ satisfy $1_{B_1} \le f \le 1_{B_2}$, and let $f_r(x) = f\big(\tfrac{x}{r}\big)$. Thus, as $r \to 0$, the functions $f_r$ have smaller and smaller support around the origin. By (1.3) and discreteness of $\Lambda$ we have that $\hat f_r(\Lambda) \to_{r \to 0} 0$ for any $\Lambda$. The functions $f_r$ vanish outside the ball $B_{2r}$, and for $r \le 1$, the functions $\hat f_r$ are dominated by $\hat f_1$. Therefore, by dominated convergence, $c_2 = 0$.

7.2. A formula following Siegel-Weil-Rogers. In this section we state and prove a generalization of Theorem 1.3. Let the notation be as in 3.1, so that $\tilde\mu$ is an $H$-homogeneous measure on $Y_n$. Let $p \in \mathbb{N}$ and let $\mathbb{R}^{np} = \mathbb{R}^n \oplus \cdots \oplus \mathbb{R}^n$ ($p$ copies). For $f \in C_c(\mathbb{R}^{np})$ and $L \in Y_n$, define the transform $\hat f_p(L)$ by summing $f$ over $p$-tuples of points of $L$, in analogy with (6.14). Let $J \subset \mathrm{ASL}(np, \mathbb{R})$ be a real algebraic group and let $\theta$ be a locally finite Borel measure on $\mathbb{R}^{np}$.
We say that θ is J-algebraic if J preserves θ and has an orbit of full θ-measure (in this case θ can be described in terms of the Haar measure of J, see [Rag72, statement and proof of Lemma 1.4]).
Theorem 7.1. Let $p \in \mathbb{N}$ and assume that $p < q_{\tilde\mu}$, where $q_{\tilde\mu}$ is as in (1.5). Then there is a countable collection $\{\tilde\tau_e : e \in E\}$ of $H$-algebraic Borel measures on $\mathbb{R}^{np}$ such that $\tilde\tau \overset{\mathrm{def}}{=} \sum \tilde\tau_e$ is locally finite, and for every $f \in L^1(\tilde\tau)$ we have
$$\int_{Y_n} \hat f_p \, d\tilde\mu = \int_{\mathbb{R}^{np}} f \, d\tilde\tau.$$
As we will see in the proof, in the affine (resp. linear) case, the indexing set $E$ is naturally identified with the set of $\Gamma_{H_1}$-orbits in the set of $p$-tuples of (nonzero) vectors in $\mathbb{Z}^n$.
We will need a by-now standard result of Weil, which is a generalization of the Siegel summation formula and is proved via an argument similar to the one used in §7.1. Let $G_1 \subset G_2$ be unimodular locally compact groups, let $\Gamma_2 \subset G_2$ be a lattice in $G_2$, and let $m_{G_2/\Gamma_2}$ denote the unique $G_2$-invariant Borel probability measure on $G_2/\Gamma_2$. Since $G_1, G_2$ are unimodular, there is a unique (up to scaling) locally finite $G_2$-invariant measure on $G_2/G_1$, which we denote by $m_{G_2/G_1}$ (see e.g. [Rag72, Chap. I]). Define $\Gamma_1 \overset{\mathrm{def}}{=} \Gamma_2 \cap G_1$, and for any $\gamma \in \Gamma_2$, denote its coset $\gamma \Gamma_1 \in \Gamma_2/\Gamma_1$ by $[\gamma]$. With this notation, Weil showed the following.

Proposition 7.2 ([Wei46]). Assume that $\Gamma_1$ is a lattice in $G_1$. Then we can rescale $m_{G_2/G_1}$ so that the following holds. For any $F \in L^1(G_2/G_1, m_{G_2/G_1})$,
$$\int_{G_2/\Gamma_2} \sum_{[\gamma] \in \Gamma_2/\Gamma_1} F(g \gamma G_1) \, dm_{G_2/\Gamma_2} = \int_{G_2/G_1} F \, dm_{G_2/G_1}.$$

Proof of Theorem 7.1. Consider the map which sends $f \in C_c(\mathbb{R}^{np})$ to $\int \hat f_p \, d\tilde\mu$. This is well-defined by Theorem 6.2, and defines a positive linear functional on $C_c(\mathbb{R}^{np})$. Thus, by the Riesz representation theorem, there is a locally finite measure $\tilde\tau$ on $\mathbb{R}^{np}$ such that
$$\forall f \in C_c(\mathbb{R}^{np}), \qquad \int_{Y_n} \hat f_p \, d\tilde\mu = \int_{\mathbb{R}^{np}} f \, d\tilde\tau.$$
For each $e \in E$, choose a representative $p$-tuple $x_e = (x_1, \ldots, x_p) \in e$ and let $G_{1,e} \overset{\mathrm{def}}{=} \{h \in H_1 : h x_i = x_i,\ i = 1, \ldots, p\}$.
We will apply Proposition 7.2 with G 2 " H 1 , Γ 2 " Γ H 1 , G 1 " G 1,e , Γ 1 " Γ 2 X G 1 , and with F ph 1 G 1 q def " f pg 1 h 1 x e q. Comparing (7.5) and (7.7) we see that these choices imply that r F ph 1 Γ 2 q " p f p e phL 1 q, for h " g 1 h 1 g´1 1 P H. We will see below that Γ 1 is a lattice in G 1 . Assuming this, we apply Proposition 7.2 to obtain ż This shows thatτ e is the pushforward of m G 2 {G 1 under the map In particular, since H " g 1 H 1 g´1 1 ,τ e is H-algebraic. It remains to show that Γ 1 is a lattice in G 1 . To see this, note that G 2 is a real algebraic group defined over Q, and G 1 is the stabilizer in G 2 of a finite collection of vectors in Z n . Thus, G 1 is also defined over Q. By the theorem of Borel and Harish-Chandra (see [Bor19,§13]), if G 1 has no nontrivial characters then Γ 1 " G 1 X ASL n pZq is a lattice in G 1 . Moreover, a real algebraic group generated by unipotents has no characters. Thus, to conclude the proof of the claim, it suffices to show that G 1 is generated by unipotents. We verify this by dividing into the various cases arising in Theorem 3.1.
We first reduce to the case that G 1 is a subgroup of SL n pRq. In the linear case we simply identify G 2 with its isomorphic image πpG 2 q, where π : ASL n pRq Ñ SL n pRq is the projection in (3.3), and thus we can assume G 1 Ă SL n pRq. In the affine case, since the property of being generated by unipotents is invariant under conjugations in ASL n pRq, we may conjugate by a translation to assume that one of the vectors in x e is the zero vector, so that G 1 Ă SL n pRq. Thus, in both cases we may assume that G 2 " H 1 is the group of real points of Res K{Q pGq, and G 1 is the stabilizer in G 2 of the finite collection x 1 , . . . , x p , where these are vectors in the standard representation on R n .
Suppose first that $G = \mathrm{SL}_k$. Then, in the notation of (2.5), we have that $G_2 = \sigma_1 G_{\mathbb{R}} \times \cdots \times \sigma_{r+s} G_{\mathbb{R}}$, where for $i = 1, \ldots, r$ (respectively, for $i = r+1, \ldots, r+s$) we have that $\sigma_i G_{\mathbb{R}}$ is isomorphic to $\mathrm{SL}_k(\mathbb{R})$ (respectively, to $\mathrm{SL}_k(\mathbb{C})$ as a real algebraic group). Furthermore, as in §2.4, there is a decomposition $\mathbb{R}^n = V_1 \oplus \cdots \oplus V_{r+s}$, where $V_i \cong \mathbb{R}^k$ (resp., $V_i \cong \mathbb{R}^{2k}$) for $i = 1, \ldots, r$ (resp., for $i = r+1, \ldots, r+s$), and such that the action of $G_2$ on $\mathbb{R}^n$ is the product of the standard action of each $\sigma_i G_{\mathbb{R}}$ on $V_i$. Let $P_i : \mathbb{R}^n \to V_i$ be the projection with respect to this direct sum decomposition. Then the stabilizer in $G_2$ of $x_1, \ldots, x_p$ is the direct product of the stabilizers, in $\sigma_i G_{\mathbb{R}}$, of $P_i(x_1), \ldots, P_i(x_p)$. So it suffices to show that each of these stabilizers is generated by unipotents. In other words, we are reduced to the well-known fact that for $\mathrm{SL}_k(\mathbb{R})$ acting on $\mathbb{R}^k$ in the standard action, and for $\mathrm{SL}_k(\mathbb{C})$ acting on $\mathbb{R}^{2k} \cong \mathbb{C}^k$ in the standard action, the stabilizer of a finite collection of vectors is generated by unipotents. Now suppose that $G = \mathrm{Sp}_{2k}$, and let $F = \mathbb{R}$ or $F = \mathbb{C}$. Then by a similar argument, we are reduced to the statement that for the standard action of $\mathrm{Sp}_{2k}(F)$ on $F^{2k}$, the stabilizer of a finite collection of vectors is generated by unipotents. This can be shown as follows. Let $\omega$ be the symplectic form preserved by $\mathrm{Sp}_{2k}$, let $V = \mathrm{span}(x_1, \ldots, x_p) \subset F^{2k}$, and let $Q \overset{\mathrm{def}}{=} \{g \in \mathrm{Sp}_{2k}(F) : \forall v \in V,\ gv = v\}$.
We need to show that Q is generated by unipotents. We can write V " V 0 ' V 1 , where V 0 " ker pω| V q is Lagrangian, and V 1 is symplectic. Let 2ℓ " dim V 1 , where ℓ ď k. Since any element of Q fixes V 1 pointwise, it leaves V K 1 invariant, and it also fixes pointwise the subspace V 0 Ă V K 1 . Thus, Q is isomorphic to where m def " k´ℓ. This means we can reduce the problem to the case in which V 1 " t0u, i.e., ωpx i , x j q " 0 for all i, j. We can apply a symplectic version of the Gram-Schmidt orthogonalization procedure to assume that x 1 , y 1 , . . . , x p , y p , x p`1 , y p`1 , . . . , x m , y m is a symplectic basis and V 0 " spanpx 1 , . . . , x p q. Let Then V 2 is symplectic and the subgroup of Q leaving V 2 invariant is isomorphic to Sp 2m´2p pFq, hence generated by unipotents. Also, for i " 1, . . . , p, by considering the identity ωpgy i , x j q " ωpgy i , gx j q " ωpy i , x j q pj " 1, . . . , pq one sees that any g P Q must map the y i to vectors in y i`V3 . This implies that Q is generated by symplectic matrices leaving V 2 invariant, and transvections mapping y i to elements of y i`V3 . In particular, Q is generated by unipotents.
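For concreteness, the transvections appearing at the end of this proof can be written explicitly; the following display records the standard form we have in mind (a well-known fact, included here only as an illustration):

```latex
T_{v,\lambda}(x) \;=\; x + \lambda\,\omega(x,v)\,v ,
\qquad v \in F^{2k},\ \lambda \in F .
```

Bilinearity of $\omega$ together with $\omega(v,v) = 0$ gives $\omega(T_{v,\lambda}x, T_{v,\lambda}y) = \omega(x,y)$, so $T_{v,\lambda} \in \mathrm{Sp}_{2k}(F)$; moreover $(T_{v,\lambda} - \mathrm{Id})^2 = 0$, so $T_{v,\lambda}$ is unipotent. Transvections of this form, for suitable choices of $v$, realize the maps sending $y_i$ to elements of $y_i + V_3$ used above.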
Definition 7.3. Given a real algebraic group J Ă ASL n pRq, we will say that a locally finite measure τ on R dp is J-c&p-algebraic if there is a J-algebraic measureτ on R np such that for every f P C c pR dp q we have ż R dp f pπ phys px 1 q, . . . , π phys px p qq @i, π int px i q P W 0 otherwise . (7.10) We will say τ is c&p algebraic if it is J-c&p algebraic for some J.
It is easy to check that for p " 1, the measure τ in Definition 7.3 is the pushforward under π phys of the restriction ofτ to π´1 int pW q. For general p, define projections p π phys : R np Ñ R dp , p π phys px 1 , . . . , x p q def " pπ phys px 1 q, . . . , π phys px p qq , and p π int : R np Ñ R mp , p π int px 1 , . . . , x p q def " pπ int px 1 q, . . . , π int px p qq .
Then the measures τ,τ satisfy τ " p π phys˚pτ | S q , where S def " p π´1 int¨Wˆ¨¨¨ˆW looooooomooooooon p copies‚ . (7.11) Proof of Theorem 1.3. By Theorem 4.1, after a rescaling of R d , there is a homogeneous measureμ on Y n such that µ " Ψ˚μ. Suppose h P H satisfies that π phys | hL 1 is injective, and in the linear case, assume also that hL 1 X V int Ă t0u. Since µ is an RMS measure, and in the linear case, arguing as in the proof of Theorem where F is as in (7.10). Thus, Theorem 1.3 is reduced to Theorem 7.1.
Remark 7.4. The assignment $e \mapsto \tilde\tau_e$ implicit in the proof of Theorem 1.3 is not injective, nor is it finite-to-one. To see this, take $p = 1$ and consider the RMS measure corresponding to the Haar-Siegel measure on $X_n$. Then $H_1 = \mathrm{SL}_n(\mathbb{R})$, $\Gamma_{H_1} = \mathrm{SL}_n(\mathbb{Z})$, and there are countably many $\Gamma_{H_1}$-orbits on $\mathbb{Z}^n$, where two integer vectors belong to the same orbit if and only if the greatest common divisors of their coefficients coincide. On the other hand, as the proof of formula (1.4) shows, there are two c&p-algebraic measures, namely Lebesgue measure on $\mathbb{R}^d$ and the Dirac measure at $0$. The Dirac measure is associated with the orbit of $0 \in \mathbb{Z}^n$, and all the other orbits of nonzero vectors in $\mathbb{Z}^n$ give rise to multiples of Lebesgue measure on $\mathbb{R}^d$. Nevertheless, we will continue using the symbol $E$ both for the collection of $\Gamma_{H_1}$-orbits in $\mathbb{Z}^{np}$ and for the indexing set of the countable collection of measures arising in Theorem 1.3. This should cause at most mild confusion.
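The orbit description in Remark 7.4 can be checked by hand in the simplest case $n = 2$: for a nonzero $v \in \mathbb{Z}^2$ one can explicitly produce $U \in \mathrm{SL}_2(\mathbb{Z})$ with $Uv = (\gcd(v), 0)$, so that two integer vectors lie in the same orbit exactly when their coordinate gcds agree. A sketch (illustrative code, not from the paper), using Bezout coefficients from the extended Euclidean algorithm:

```python
import math

def reduce_to_gcd(a, b):
    """Return (U, g) with U in SL_2(Z) and U @ (a, b) = (g, 0), g = gcd(a, b).
    Uses Bezout coefficients x*a + y*b = g from the extended Euclidean algorithm."""
    g = math.gcd(a, b)
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    # old_x * a + old_y * b = g, so the matrix below has determinant 1
    U = [[old_x, old_y], [-b // g, a // g]]
    return U, g

U, g = reduce_to_gcd(6, 4)  # U maps (6, 4) to (2, 0)
```

The determinant of $U$ is $(x a + y b)/g = 1$, so $U \in \mathrm{SL}_2(\mathbb{Z})$ as claimed.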
8. The Rogers inequality on moments
In this section we will prove Theorem 1.4. We will need more information about the measures τ e appearing in Theorem 1.3, in case p " 2. We begin our discussion with some properties that are valid for all p ď d. Some of the results of §8.1 will be given in a greater level of generality than required for our counting results. They are likely to be of use in understanding higher moments for RMS measures.
8.1. Normalizing the measures. For any k, denote the normalized Lebesgue measure on R k by vol pkq . Some of the c&p-algebraic measures τ on R dp which arise in Theorem 1.3 are the globally supported Lebesgue measures on R dp , i.e., multiples of vol pdpq . Indeed, such a measure arises if in Definition 7.3 we takeτ equal to a multiple of Lebesgue measure on R np . These measures give a main term in the counting problem we will consider in §10. We write τ 1 9 τ 2 if τ 1 , τ 2 are proportional, recall the measures tτ e u defined in the proofs of Theorems 1.3 and 7.1, and set We define constants c µ,p by the condition τ main " c µ,p vol pdpq .
The next result identifies the normalizing constants c µ,p . Recall from Theorem 4.1 that an RMS measure µ is of the form µ " ρ c˚μ where µ is a homogeneous measure on Y n , c is the constant of (4.1), and µa.e. Λ is of the form Λ " ΛpL, W q for a grid L with covolpLq " c n . We denote this almost-sure value of covolpLq by covolpµq. Recall also that the function Λ Þ Ñ DpΛq defined in (1.10) is measurable and invariant, and hence is a.e. constant, and denote its almost-sure value by Dpµq.
Proposition 8.1. For any RMS measure µ " ρ c˚Ψ˚μ satisfying (1.8) (i.e., G " SL k or µ is affine), we have c µ,1 " Dpµq " vol pmq pW q covolpµq , (8.1) and for p P N satisfying p ă q µ and p ď d we have Note that the normalizing constant c µ,1 discussed here is the same as the constant denoted by c 1 in (7.2) and by c in (1.4).
With the identification R ℓp -M ℓ,p pRq in mind, we say that a subspace V Ă R ℓp is an annihilator subspace if it is the common annihilator of a collection of vectors in R p ; that is, there is a collection Ann Ă R p such that V " ZpAnnq def " ! pv 1 , . . . , v p q P R ℓp : @i, v i P R ℓ & @pa 1 , . . . , a p q P Ann, ÿ a i v i " 0 ) .
Note that the meaning of ZpAnnq depends on the choice of the ambient space R ℓ containing the vectors v i ; when confusion may arise we will specify the ambient space explicitly. Suppose ℓ P N and pv 1 , . . . , v p q is a p-tuple in R ℓp . In the linear case, let Annpv 1 , . . . , v p q def " tpa 1 , . . . , a p q P R p : ÿ a i v i " 0u, and in the affine case, let Annpv 1 , . . . , v p q def " tpa 1 , . . . , a p´1 q P R p´1 : ÿ a i pv i´vp q " 0u.
Note that in the linear case, this is the usual relation between the rank of a matrix and the dimension of its kernel. The dimension of $L(v_1, \ldots, v_p)$ is equal to $\ell \cdot \mathrm{rank}(v_1, \ldots, v_p)$.
We recall some notation from §2.4 and from Step 3 of the proof of Lemma 3.2. Let K be a real number field of degree D " r`2s, with σ 1 , . . . , σ r being distinct real embeddings, and σ r`1 , . . . , σ s denoting representatives of conjugate pairs of non-real embeddings. Let G be isomorphic to either SL k pRq or to Sp 2k pRq, and let H " Res K{Q pGq. Let V be a K-vector space of dimension t, where t is as in (6.12), and denote V j " σ j V R , that is, V j -R t if j " 1, . . . , r and V j -C t -R 2t if j " r`1, . . . , s. These vector spaces are chosen so that V is equipped with the standard action of G, and taking into account the isomorphism Let σ j π : R n Ñ V j be the corresponding projections. In the notation (2.5), let π j : H R Ñ σ j G R , so that the action of H R factors through the action of each σ j G R on V j . We can assume without loss of generality (see §2.1) that V 2 '¨¨¨' V r`s Ă V int and π phys " π phys˝σ 1 π.
Lemma 8.2. Suppose µ is an RMS measure of higher rank, and let G be the group appearing in Theorem 1.1. Let p ă q µ , let x e " px 1 , . . . , x p q P e, where e P E is as defined before (7.7), and let v i def " σ 1 πpx i q, i " 1, . . . , p. Assume that Letτ def "τ e be the algebraic measure on R np as in (7.9) and let τ be a c&p-algebraic measure obtained fromτ as in Definition 7.3. Then τ is (up to proportionality) the Lebesgue measure on some annihilator subspace of R dp . This subspace is equal to R dp if and only if v 1 , . . . , v p are independent.
Proof. Let $\tilde\tau$ be as in Definition 7.3. As in the proof of Theorem 7.1, we have that $H(x_1, \ldots, x_p)$ is a dense subset of full measure in $\mathrm{supp}\,\tilde\tau$. We will split the proof according to the various cases arising in Theorem 3.1.
Case 1: µ is linear, G " SL k . In this case, our proof will also show that suppτ is a sum of annihilator subspaces, one in each V j ; in fact, we first establish this statement.
The action of H on R n factors into a product of actions of each σ j G R on V j . That is, H acts on v j i def " σ j πpx i q, i " 1, . . . , p via its mapping to σ j G R , i.e., via the standard action of SL k pRq or SL k pCq on R k or C k . It follows from (1.5) and (1.6) that p ă q µ " k. Therefore for each j, the rank R j of v j i : i " 1, . . . , p ( is less than k. For the standard action, σ j G R is transitive on linearly independent R j -tuples. From this, by choosing a linearly independent subset B j Ă tv j 1 , . . . , v j p u of cardinality R j and expressing any v j i R B j as a linear combination of elements of B j , one sees that if pu 1 , . . . , u p q, pw 1 , . . . , w p q are two p-tuples in V j there is h P σ j G R such that hpw 1 , . . . , w p q " pu 1 , . . . , u p q ðñ Annpw 1 , . . . , w p q " Annpu 1 , . . . , u p q. (8.4) This implies that σ j G R pv j 1 , . . . , v j p q is open and dense in Lpv j 1 , . . . , v j p q, and hence Hpx 1 , . . . , x p q is open and dense in L r`s 1 def " À r`s j"1 Lpv j 1 , . . . , v j p q. We have shown that suppτ " L r`s 1 and thatτ is a multiple of the Lebesgue measure on L r`s 1 . Since π phys " π phys˝σ 1 π, we have p π phys`L r`s 1˘" p π phys pLpv 1 , . . . , v p qq .
To simplify notation, write H 1 def Ann 1 def " Ann pv 1 , . . . , v p q . We have p π phys pLpv 1 , . . . , v p qq " ZpAnn 1 q, (8.5) seen as an annihilator subspace of R dp . Indeed, the inclusion Ă follows from linearity of π phys . For the opposite inclusion, recall that we have an inclusion V phys ãÑ V 1 , and this induces an inclusion ι : R dp ãÑ R np . We clearly have ι pZ pAnn 1 qq Ă Lpv 1 , . . . , v p q, which implies the inclusion Ą in (8.5).
Replacing x i with elements of x i`Vphys does not change the condition px 1 , . . . , x p q P S, where S is as in (7.11). This shows that supp τ " p π phys`L r`s 1˘" p π phys pLpv 1 , . . . , v p qq is an annihilator subspace, and τ is a multiple of Lebesgue measure on this subspace. Moreover, the subspace is proper if and only if Ann 1 ‰ t0u, or equivalently, v 1 , . . . , v p are dependent.
Case 2: µ is linear, G " Sp 2k , d " 2. The action of H splits as a Cartesian product of actions of the groups σ j G R on the spaces V j , for j " 1, . . . , r`s. As in Case 1, we will pay attention to the action on the first summand V 1 , where H acts via H 1 def " σ 1 G R -Sp 2k pRq. We denote by ω the symplectic form on V 1 preserved by H 1 . Let L def " Hpx 1 , . . . , x p q " suppτ , whereτ is the unique (up to scaling) Hinvariant measure with support L, and let L 1 def " L X V 1 " σ 1 πpLq " H 1 pv 1 , . . . , v p q, where v i def " σ 1 πpx i q, i " 1, . . . , p.
Let F -SL 2 pRq be as in (3.1). Then F Ă H 1 , and hence τ is F -invariant. Write V 1 int def " V int X V 1 " V K phys , and abusing notation slightly, let π phys , π int denote the restrictions of these mappings to V 1 , so they are the projections associated with the direct sum decomposition V 1 " V phys 'V 1 int . Define R def " rankpv 1 , . . . , v p q, and define R 1 as the maximal rank of tπ phys phv 1 q, . . . , π phys phv p qu, as h ranges over elements of H 1 . Thus, 0 ď R 1 ď R ď 1.
If R 1 " 0 this means that π phys phv i q " 0 for all h P H and all i, and then τ is the Dirac measure at 0, and there is nothing to prove. Now suppose R 1 " R " 1. Since R " 1, there is some v i such that π phys pv i q ‰ 0, and there are coefficients a j , j ‰ i so that v j " a j v i . This implies that for all h, π phys phv j q " a j π phys phv i q, that is, supp τ Ă p π phys pLq Ă L 1 def " tpu 1 , . . . , u p q P R 2p : @j ‰ i, u j " a j u i u.
Moreover, since F acts transitively on nonzero vectors in V phys , and τ is F -invariant, we actually have equality and τ is a multiple of Lebesgue measure on the annihilator subspace L 1 , and L 1 is a proper subspace of R 2p , unless p " 1.
Case 3: µ is affine. The affine case can be reduced to the linear case. Note that the definition of the annihilator Annpv 1 , . . . , v p q in the affine case is such that it does not change under the diagonal action of the group of translations, and that the group of translations in H is the full group R n , so that x 1 , . . . , x p can be moved so that x p " 0. Moreover, by Proposition 5.3, we can assume that 0 P W . We leave the details to the diligent reader.
The preceding discussion gives a description of the measures τ e with e P E rest .
Corollary 8.3. Under the conditions of Lemma 8.2, any measure τ e , e P E rest , is Lebesgue measure on a proper subspace of R dp .
Proof of Proposition 8.1. Let $B_r$ denote the Euclidean ball of radius $r$ around the origin in $\mathbb{R}^d$, let $1_{B_r}$ be its indicator function, and let $\widehat{1_{B_r}}$ be the function obtained from the summation formula (1.3), so that
$$D(\Lambda) = \lim_{r \to \infty} \frac{\widehat{1_{B_r}}(\Lambda)}{\mathrm{vol}^{(d)}(B_r)}.$$
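The limit defining $D(\Lambda)$ can be watched converge numerically for the standard lattice $\Lambda = \mathbb{Z}^2$, where $D(\Lambda) = 1$; a small sketch (illustration only, not from the paper):

```python
import math

def ball_count(r):
    """Number of points of Z^2 in the closed Euclidean ball of radius r."""
    R = int(r)
    return sum(1 for x in range(-R, R + 1) for y in range(-R, R + 1)
               if x * x + y * y <= r * r)

# the ratio |B_r cap Z^2| / vol(B_r) approximates D(Z^2) = 1 as r grows
for r in (5, 20, 80):
    print(r, ball_count(r) / (math.pi * r * r))
```

The error in this approximation is governed by the Gauss circle problem, so the ratio approaches $1$ at rate $O(1/r)$ or better.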
By Theorem 1.3 we have: c µ,p " 1 r dp ż R dp 1 Q p r dτ main " 1 r dp "ż R dp ż p y 1 Q p r pΛq r dp dµ´1 r dp ÿ ePE rest ż R dp 1 Q p r dτ e . (8.10) Repeating the argument establishing (8.8), we find p y 1 Qr pΛq !´vol pdq pQ r q¯p αpLq p , and thus the integrable function α p dominates the integral in the second line of (8.10), independently of r. Moreover, since they differ by a constant, α p also dominates the series in the second line of (8.10). Using (8.9), the first integral gives c p µ,1 , and thus it remains to show that lim rÑ8 1 r dp ż R dp 1 Q p r dτ e " 0, for every e P E rest . (8.11) From (1.8) and Corollary 8.3 we have that τ e is (up to proportionality) equal to Lebesgue measure on a subspace V 1 Ă R dp , and we have V 1 ‰ R dp since e P E rest . This implies (8.11).
Remark 8.4. One can also work in R np rather than R dp , and define analogous normalization constantscμ ,p by the formulaτ main " cμ ,p vol pnpq . Then one can show thatcμ ,p " 1 for all p ă q µ . We will not need the values of these constants and leave the proofs to the interested reader.
8.2. More details for p " 2. We will need to describe the measure τ rest in the case p " 2.
Proposition 8.5. Let µ be an RMS measure so that (1.8) holds. Let p " 2, and let E rest , τ rest be as in (8.6). Then there is a partition E rest " E rest 1 \ E rest 2 , and constants ta e : e P E rest 2 u, tb e : e P E rest 1 u, tc e : e P E rest u, such that the following hold.
(1) For all f P C c pR 2d q, we have ż f pa e x, xq dvol pdq pxq.
(8.12) (2) c e ą 0 for all e P E rest and ř ePE rest c e ă 8. (3) |a e | ď 1 for all e P E rest 2 and |b e | ď 1 for all e P E rest 1 . Proof. Lemma 8.2 is applicable in view of (1.8); indeed, when G " SL k , we have p " 2 ď d, and when G " Sp 2k and µ is affine, we have rankpv 1 , v 2 q ď 1. Therefore, for each e P E rest , there is an annihilator subspace V e Ł R dp such that τ e is proportional to Lebesgue measure on V e . Repeating the argument of §7.1 we can see that τ e is not the Dirac mass at the origin. In other words V e has positive dimension. Since p " 2, this means we can find α, β, not both zero, such that V e " Zpα, βq. We can rescale so that maxp|α|, |β|q " maxpα, βq " 1 and we define E rest 1 def " te P E rest : β " 1u, E rest 2 def " E rest E rest 1 .
Then if we set b e "´α for e P E rest 1 and a e "´β for e P E rest 2 , then the bounds in (3) hold and we have tpx, b e xq : x P R d u for e P E rest 1 tpa e x, xq : x P R d u for e P E rest 2 .
We now define $c_e$ by the formula: for all $f \in C_c(\mathbb{R}^{2d})$,
$$\int f \, d\tau_e = \begin{cases} c_e \int_{\mathbb{R}^d} f(x, b_e x) \, d\mathrm{vol}^{(d)}(x) & \text{for } e \in E^{\mathrm{rest}}_1 \\ c_e \int_{\mathbb{R}^d} f(a_e x, x) \, d\mathrm{vol}^{(d)}(x) & \text{for } e \in E^{\mathrm{rest}}_2. \end{cases}$$
Then clearly (8.12) holds, and c e ą 0 for all e P E rest . It remains to show ř c e ă 8. Let 1 B be the indicator of a ball in R 2d centered at the origin. Then there is a positive number λ which bounds from below all the numbers "ż R d 1 B pax, xq dvol pdq pxq : |a| ď 1 * ď "ż R d 1 B px, bxq dvol pdq pxq : |b| ď 1 * .
Clearly $\big(\int_{\mathbb{R}^d} f \, d\mathrm{vol}^{(d)}\big)^2 = \int_{\mathbb{R}^{2d}} \varphi \, d\mathrm{vol}^{(2d)}$ for $\varphi(x, y) = f(x) f(y)$, and the required domination follows easily from (1.3) and (1.7). Using (8.13), Theorem 1.3 with $p = 2$, (1.4), and (8.2), the desired bound reduces to an estimate on the contribution of the measures $\tau_e$, $e \in E^{\mathrm{rest}}$, where the implicit constant is allowed to depend on $\mu$; and indeed, this estimate follows from Proposition 8.5.

9. From bounds on correlations to a.e. effective counting

In this section we present two results which we will use for counting. The first is due to Schmidt [Sch60], but we recast it in a slightly more general form (see also [KS19, Thm. 2.9]). To simplify notation, for measurable $S \subset \mathbb{R}^n$ we will write $V_S \overset{\mathrm{def}}{=} \mathrm{vol}^{(n)}(S)$.
Theorem 9.1. Let $n \in \mathbb{N}$ and let $\mu$ be a probability measure on $\mathscr{C}(\mathbb{R}^n)$. Let $\kappa \in [1, 2)$, let $\Phi = \{B_\alpha : \alpha \in \mathbb{R}_+\}$ be an unbounded ordered family of Borel subsets of $\mathbb{R}^n$, and let $\psi : \mathbb{R}_+ \to \mathbb{R}_+$. Suppose the following hypotheses are satisfied:
(a) The measure $\mu$ is supported on discrete sets, and for each $f \in L^1(\mathbb{R}^n, \mathrm{vol})$, a Siegel-Veech transform as in (1.3) satisfies that $\hat f \in L^2(\mu)$. Furthermore, there are positive $a, b$ such that for any function $f : \mathbb{R}^n \to [0, 1]$, $f \in L^1(\mathbb{R}^n, \mathrm{vol})$, we have
$$\int \hat f \, d\mu = a \int_{\mathbb{R}^n} f \, d\mathrm{vol} \qquad (9.1)$$
and
$$\mathrm{Var}_\mu(\hat f) \overset{\mathrm{def}}{=} \int \left| \hat f - \int \hat f \, d\mu \right|^2 d\mu \le b \left( \int_{\mathbb{R}^n} f \, d\mathrm{vol} \right)^\kappa. \qquad (9.2)$$
Note that we allow defining $\hat f$ as in either one of the linear or affine cases of (1.3), as long as the conditions in (a) are satisfied. For definiteness we will use the affine case, namely $\hat f = \sum_{v \in \Lambda} f(v)$, so that $\widehat{1_S}(\Lambda) = |S \cap \Lambda|$ for any subset $S \subset \mathbb{R}^n$ with indicator function $1_S$. In the linear case we may have $\widehat{1_S}(\Lambda) = |S \cap \Lambda| - 1$ or $\widehat{1_S}(\Lambda) = |S \cap \Lambda|$ (depending on whether or not $S$ contains $0$), and the reader will have no difficulty adjusting the proof in this case.
Before giving the proof of Theorem 9.1 we will state the following more general result.
Theorem 9.2. Let d, m, n P N with n " d`m, let µ be a probability measure on C pR n q, let λ P r0, 1q, κ P r1, 2q, let ψ : R`Ñ R`, let Φ " tB α : α P R`u be an unbounded ordered family of Borel subsets of R d , and let tW α : α P R`u be a collection of subsets of R m . Suppose that (a) and (b) of Theorem 9.1 are satisfied, and in addition: (c) For any N P N there is α such that vol pdq pB α q " N. (d) Each W α can be partitioned as a disjoint union W α " Ů Lα ℓ"1 C α pℓq, where L α -´vol pdq pB α q¯λ, and where w α def " vol pmq pC α pℓqq is the same for ℓ " 1, . . . , L α , and is of order -´vol pdq pB α q¯´λ.
(9.5) Theorems 9.1 and 9.2 both follow from ideas developed by Schmidt in [Sch60]. We begin with Theorem 9.1, for which we need the following Lemmas.
By the definition of an unbounded ordered family, we can assume that for each $V > 0$ there is $\Omega \in \Phi$ such that $\mathrm{vol}(\Omega) = V$. For each $N \in \mathbb{N}$, let $S_N \in \Phi$ with $\mathrm{vol}(S_N) = N$ and let $\rho_N \overset{\mathrm{def}}{=} 1_{S_N}$ denote its indicator function. Given two integers $N_1 < N_2$, let ${}_{N_1}\rho_{N_2} \overset{\mathrm{def}}{=} \rho_{N_2} - \rho_{N_1}$. Since the $S_N$ are nested, we have ${}_{N_1}\rho_{N_2} = 1_{S_{N_2} \smallsetminus S_{N_1}}$.

Lemma 9.3 (cf. [Sch60], Lemma 2). Let $T \in \mathbb{N}$ and let $K_T$ be the set of all pairs of integers $N_1, N_2$ satisfying $0 \le N_1 < N_2 \le 2^T$, $N_1 = u 2^t$, $N_2 = (u+1) 2^t$, for integers $u$ and $t \ge 0$. Then there exists $c > 0$ such that
$$\sum_{(N_1, N_2) \in K_T} \mathrm{Var}_\mu\big({}_{N_1}\widehat{\rho}_{N_2}\big) \le c (T+1) 2^{\kappa T}. \qquad (9.6)$$

Proof. Indeed, (9.2) yields $\mathrm{Var}_\mu\big({}_{N_1}\widehat{\rho}_{N_2}\big) \le b (N_2 - N_1)^\kappa$. Each value of $N_2 - N_1 = 2^t$ for $0 \le t \le T$ occurs $2^{T-t}$ times, hence
$$\sum_{(N_1, N_2) \in K_T} (N_2 - N_1)^\kappa = \sum_{0 \le t \le T} 2^{T + (\kappa - 1) t} \le (T+1) 2^{\kappa T}.$$
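The counting step in the proof of Lemma 9.3, that each gap $N_2 - N_1 = 2^t$ occurs exactly $2^{T-t}$ times in $K_T$, can be verified numerically; a sketch (illustration only, not from the paper):

```python
from collections import Counter

def K(T):
    """All dyadic pairs (N1, N2) = (u*2^t, (u+1)*2^t) with 0 <= N1 < N2 <= 2**T."""
    return [(u * 2 ** t, (u + 1) * 2 ** t)
            for t in range(T + 1) for u in range(2 ** (T - t))]

T = 10
gaps = Counter(n2 - n1 for n1, n2 in K(T))
# each gap 2^t occurs exactly 2^(T-t) times ...
assert all(gaps[2 ** t] == 2 ** (T - t) for t in range(T + 1))
# ... so for kappa = 1 the sum in the proof equals (T+1) * 2^T exactly
assert sum(n2 - n1 for n1, n2 in K(T)) == (T + 1) * 2 ** T
```

For general $\kappa \in [1, 2)$ the same count gives the geometric sum $\sum_t 2^{T + (\kappa - 1)t}$, each of whose $T + 1$ terms is at most $2^{\kappa T}$.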
Lemma 9.4 (cf. [Sch60]). For each $T \in \mathbb{N}$ there is a measurable set $\mathrm{Bad}_T \subset \mathrm{supp}\,\mu$ with
$$\mu(\mathrm{Bad}_T) \le c\, \psi(T \log 2 - 1)^{-1}, \tag{9.7}$$
such that for every $N \le 2^T$ and every $\Lambda \in \mathrm{supp}\,\mu \smallsetminus \mathrm{Bad}_T$,
$$\big(\widehat{\rho_N}(\Lambda) - aN\big)^2 \le T (T+1) 2^{\kappa T} \psi(T \log 2 - 1). \tag{9.8}$$

Proof. Let $\mathrm{Bad}_T$ be the set of $\Lambda \in \mathrm{supp}\,\mu$ for which it is not true that
$$\sum_{(N_1, N_2) \in K_T} \big(\widehat{{}_{N_1}\rho_{N_2}}(\Lambda) - a(N_2 - N_1)\big)^2 \le (T+1) 2^{\kappa T} \psi(T \log 2 - 1). \tag{9.9}$$
Then the bound (9.7) follows from Lemma 9.3 by Markov's inequality. Assume $N \le 2^T$ and $\Lambda \in \mathrm{supp}\,\mu \smallsetminus \mathrm{Bad}_T$. The interval $[0, N)$ can be expressed as a union of intervals of the type $[N_1, N_2)$, where $(N_1, N_2) \in I_N \subset K_T$ and $|I_N| \le T$. Therefore $\widehat{\rho_N}(\Lambda) - aN = \sum \big(\widehat{{}_{N_1}\rho_{N_2}}(\Lambda) - a(N_2 - N_1)\big)$, where the sum is over $(N_1, N_2) \in I_N$. Applying the Cauchy–Schwarz inequality to the square of this sum together with the bound from (9.9), we obtain (9.8).
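The dyadic decomposition of $[0, N)$ used above comes from the binary expansion of $N$; a small illustrative sketch:

```python
# Greedy sketch of the decomposition in the proof above: write [0, N),
# N <= 2^T, as a union of at most T dyadic intervals [u*2^t, (u+1)*2^t),
# one interval per set bit in the binary expansion of N.
def dyadic_decomposition(N):
    intervals, start = [], 0
    for t in reversed(range(N.bit_length())):
        if N & (1 << t):
            intervals.append((start, start + (1 << t)))
            start += 1 << t
    return intervals

N = 13  # 1101 in binary
ivs = dyadic_decomposition(N)
assert ivs == [(0, 8), (8, 12), (12, 13)]
# Each interval has power-of-2 length and aligned left endpoint, so it is
# of the form [u*2^t, (u+1)*2^t), i.e. a member of K_T.
assert all((b - a) & (b - a - 1) == 0 and a % (b - a) == 0 for a, b in ivs)
assert len(ivs) == bin(N).count("1") <= N.bit_length()
```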
Proof of Theorem 9.1. Let $\mathrm{Bad}_T$ be the sets from Lemma 9.4. Since $\psi^{-1}$ is integrable and monotone, we find by Borel–Cantelli and (9.7) that for $\mu$-a.e. $\Lambda$ there is $T_\Lambda$ such that for any $T \ge T_\Lambda$, $\Lambda \notin \mathrm{Bad}_T$. Assume now $N \ge N_\Lambda = 2^{T_\Lambda}$ and let $T$ be the unique integer for which $2^{T-1} \le N < 2^T$. By Lemma 9.4,
$$\big(\widehat{\rho_N}(\Lambda) - aN\big)^2 \le T (T+1) 2^{\kappa T} \psi(T \log 2 - 1) = O\big(N^{\kappa} (\log N)^2 \psi(\log N)\big). \tag{9.10}$$
Given arbitrary $S \in \Phi$, let $N$ be such that $N \le V_S < N + 1$, and let $S_N, S_{N+1} \in \Phi$ with $S_N \subset S \subset S_{N+1}$ and $\mathrm{vol}(S_N) = N$, $\mathrm{vol}(S_{N+1}) = N + 1$. Then
$$\#(S_N \cap \Lambda) - a(N+1) \le \#(S \cap \Lambda) - a V_S \le \#(S_{N+1} \cap \Lambda) - aN. \tag{9.11}$$
From (9.10), the LHS of (9.11) is $O\big(N^{\kappa/2} (\log N)\, \psi(\log N)^{1/2}\big)$; a similar upper bound for $a V_S - \#(S \cap \Lambda)$ is proved analogously.
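The summability behind the Borel–Cantelli step can be made explicit. A minimal sketch, assuming (consistent with the hypotheses above) that $\psi^{-1}$ is non-increasing and integrable, with $T_0$ large enough that the lower integration limit is in the domain of $\psi$:

```latex
\sum_{T \ge T_0} \mu(\mathrm{Bad}_T)
  \;\le\; c \sum_{T \ge T_0} \psi(T \log 2 - 1)^{-1}
  \;\le\; \frac{c}{\log 2} \int_{(T_0 - 1)\log 2 - 1}^{\infty} \frac{dx}{\psi(x)}
  \;<\; \infty,
```

since monotonicity gives $\psi(T \log 2 - 1)^{-1} \le \frac{1}{\log 2} \int_{(T-1)\log 2 - 1}^{T \log 2 - 1} \psi(x)^{-1}\, dx$ for each $T$; Borel–Cantelli then yields that $\mu$-a.e. $\Lambda$ lies in only finitely many $\mathrm{Bad}_T$.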
We turn to the proof of Theorem 9.2. Note that the collection $\Phi$ is not ordered; nevertheless, one can apply similar arguments to each $\ell$ separately, before applying Borel–Cantelli. We turn to the details.
Proof of Theorem 9.2. Using assumption (c), for each $N$ there is $\alpha = \alpha(N)$ so that $\mathrm{vol}^{(d)}(B_\alpha) = N$. It follows that $\mathrm{vol}^{(n)}(B_\alpha \times W_\alpha) = N L_\alpha w_\alpha \asymp N$. We let $\rho_N^{\ell}$ be the characteristic function of $B_\alpha \times C_\alpha(\ell)$, which is of volume $N w_\alpha \asymp N^{1 - \lambda}$. We will take ${}_{N_1}\rho_{N_2}^{\ell}$ to be the characteristic function of $\big(B_{\alpha(N_2)} \smallsetminus B_{\alpha(N_1)}\big) \times C_{\alpha(N)}(\ell)$. Note that the dependence of the function ${}_{N_1}\rho_{N_2}^{\ell}$ on $N$ is suppressed from the notation.
Then applying (9.12) we get $\mu(\mathrm{Bad}_T) \le c_1 \psi(T \log 2 - 1)^{-1}$, so that by Borel–Cantelli, a.e. $\Lambda$ belongs to at most finitely many sets $\mathrm{Bad}_T$. Also, for $\Lambda \notin \mathrm{Bad}_T$, we have
$$|\#(S \cap \Lambda) - a V_S|^2 \le L_\alpha^2\, T (T+1) 2^{\kappa_1 T} \psi(T \log 2 - 1),$$
which replaces (9.8), and we proceed as before.
Counting patches à la Schmidt
In this section we prove Theorem 1.6. We recall some notation and terminology from the introduction and the statement of the theorem. For a cut-and-project set $\Lambda \subset \mathbb{R}^d$, $x \in \mathbb{R}^d$ and $R > 0$,
$$P_{\Lambda, R}(x) = B(0, R) \cap (\Lambda - x)$$
is called the $R$-patch of $\Lambda$ at $x$, and
$$D(\Lambda, P_0) = \lim_{T \to \infty} \frac{\#\{x \in \Lambda \cap B(0, T) : P_{\Lambda, R}(x) = P_0\}}{\mathrm{vol}(B(0, T))}$$
is called the frequency of $P_0$. Suppose $\Lambda$ arises from a cut-and-project construction with associated dimensions $n = d + m$ and window $W \subset \mathbb{R}^m$. In addition, it is shown in [KW21, §2] that $W^{\Delta}$ is the intersection of finitely many translations of $W$ and its complement. Since $\partial W^{\Delta} \subset F + \partial W$ for some finite $F \subset \mathbb{R}^m$, we deduce that the upper box dimension of $\partial W^{\Delta}$ is bounded from above by that of $\partial W$.
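The patch and frequency definitions can be illustrated on a toy one-dimensional periodic set (standing in for a cut-and-project set; the set and parameters below are made up):

```python
# Toy illustration of R-patches and their frequencies in dimension 1.
# patch(points, x, R) computes B(0, R) intersect (Lambda - x).
def patch(points, x, R):
    return tuple(sorted(p - x for p in points if abs(p - x) <= R))

T, R = 300, 2
Lam = [n for n in range(-T, T + 1) if n % 3 != 1]    # points = 0, 2 mod 3
inner = [x for x in Lam if abs(x) <= T - R]          # avoid boundary effects
counts = {}
for x in inner:
    P = patch(Lam, x, R)
    counts[P] = counts.get(P, 0) + 1
# Empirical frequency: count divided by the length of the sampling window.
freqs = {P: c / (2 * (T - R)) for P, c in counts.items()}

# Two patch types, one per residue class of the centre, each of frequency
# about 1/3 (the points have density 2/3, split evenly between the classes).
assert len(freqs) == 2
assert all(abs(f - 1 / 3) < 0.01 for f in freqs.values())
```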
Let $\lambda \in (\lambda_0, 1)$, and let $\eta > 0$ be small enough so that
$$\max\Big( \frac{1 + \lambda_0}{2} + \eta,\ 1 - \frac{\lambda_0 (\delta - \eta)}{m} \Big) < \frac{1 + \lambda}{2}. \tag{10.5}$$
Such $\eta$ exists in light of (10.2). Given $\alpha$, we let $K_\alpha \in \mathbb{N}$ so that $\mathrm{vol}(B_\alpha)^{\lambda_0} \asymp K_\alpha^m$. Define (10.7). We bound separately the two summands on the RHS of (10.7). For the first summand we use the case (9.5) of Theorem 9.2, with $W_\alpha = A_\alpha^{(i)}$ and $C_\alpha(\ell) = Q_{K_\alpha}(\ell)$. Note that assumption (d) is satisfied by our choice of $K_\alpha$, with implicit constants depending on $P_0$. We obtain, for $\mu$-a.e. $L$, a bound for $\Lambda$ in which $c_1$, as well as the constants appearing in the following inequalities, depends only on $\Phi$ and on $L$. For the second summand, recall that $\overline{\dim}_B(\partial W^{\Delta}) \le m - \delta$. This implies that the number of $\ell \in \mathbb{Z}^n$ with $Q_{K_\alpha}(\ell) \cap \partial W^{\Delta} \ne \emptyset$ is $\ll K_\alpha^{m - \delta + \eta}$, which via (10.6) gives a corresponding estimate. Plugging these two estimates into (10.7), and using (10.5) and the fact that $\log(\mathrm{vol}(B_\alpha))$ | 2020-12-25T02:15:28.880Z | 2020-12-24T00:00:00.000 | {
"year": 2020,
"sha1": "5f3048b7cad8b7ff4e8893823010eb933dc10468",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fbfc96dc7a4a3b1d99db837dd0e5ca43fd7dc545",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
59034729 | pes2o/s2orc | v3-fos-license | The Influence of Price Structures on Experience Quality and Behavior Intention in Hospitality Industry
Tourism is one of the largest industries in the world and a significant contributor to the world's economy. According to the UNWTO, the export income generated by international tourism ranks fourth after fuels, chemicals, and automotive products. In 2011, there were over 983 million international tourist arrivals worldwide, representing a growth of 4.6% compared to 940 million in 2010. Pricing structures and experience quality are important determinants of buying behaviour, in the sense that these principles are likely to remain important whether shoppers purchase online, through online travel sites, or through traditional agents. To narrow the scope of the research, the author selects Malaysia as a base to examine how price structures affect experience quality and customers' behavioural intentions when selecting a hotel. A theoretical framework is formulated to reveal the influence of price structures on experience quality and behavioural intention, incorporating the hedonic pricing model. This study makes contributions from both a theoretical and a practical perspective. First, the relationships between price structures, experience quality, and behavioural intentions are examined. Second, little research on price structures and experience quality has been conducted in the hospitality context. This paper aims to provide a basis for future researchers, and especially hotel managers, to gain an in-depth understanding of price structures and experience quality in the hospitality industry. It also helps hotel managers implement their pricing strategies, since pricing is an issue of paramount importance for practitioners in the hospitality industry and is the only element of the accommodation marketing mix that impacts directly on revenues.
Introduction
Everyone gets tired of working and, given the chance, would prefer to spend their holidays elsewhere. Spending on tourism is therefore rising, and many researchers have noted that the industry is closely tied to the economic cycle. There is thus no doubt that it is one of the largest industries in the world and a significant contributor to the world's economy. According to the UNWTO, fuel, chemical, and automotive products are the main sources of export income, with tourism ranking fourth (QF Finance, 2011). This is supported by an online source reporting that tourism has become a popular global leisure activity: according to Wikipedia (2012), there were more than 983 million international tourist arrivals worldwide, a growth of 4.6% compared to 940 million in 2010. As quoted by Sir David Michels, the "hotel industry has a high degree of unreliability" and "the hotel business in general is not exactly fast moving, either in its product, customer base, owner base or in fact, in any way whatsoever" (Hotel Year Book, 2011). According to a spokesperson from the World Tourism Organization (WTO), comparing 2006 and 2007, there was an increase of 6.6%, or 559,020 international tourists, amounting to 903 million. This generated significant earnings of US$856 billion that year, a rise of 5.6 per cent in real terms over 2006. Receipts from international passenger transport (for example, visitor exports) were estimated at US$165 billion, bringing total international tourism receipts to over US$1 trillion, corresponding to almost US$3 billion a day (WTO), while the tour and online package industries together produced more than $18 billion in gross travel sales in 2008 (PhoCusWright, 2009). A survey by Talking Travel Tech in 2012 estimated that there are about 202,842 hotels globally, offering 17.5 million guest rooms (tnooz.com, 2012). With the growth of this industry, cybermediaries also benefit indirectly, as travel packages sold over the Internet are likely to increase; indeed, it was predicted that tour packages purchased over the Internet would be one of the best values in 2010, and that "travellers who bundle air and hotel will save the most" (USA Today, 2009).
With the global growth of the hospitality industry, pricing structures and experience quality play an important role in determining buying behaviour, in the sense that these factors matter whether customers purchase online or through another medium such as online travel sites or traditional agents. To narrow the scope of the research, the author selects Malaysia as a base to examine how price structures affect experience quality and customers' behavioural intentions when selecting a hotel to stay in, particularly now that there are so many hotels to choose from. A theoretical framework is formulated to reveal the influence of price structures on experience quality and behavioural intention. This is done by: (1) investigating the relationship between price sensitivity and experience quality in Malaysia's hotel industry; (2) determining the relationship between price discount and experience quality in Malaysia's hotel industry; (3) examining the moderating effect of price transparency on the relationships of price sensitivity and price discount with experience quality in the hospitality industry; and (4) determining the relationship between experience quality and behavioural intention.
This study will contribute to academics and practitioners from both a theoretical and a practical perspective. First, the relationships between price structures, experience quality, and behavioural intentions are examined. Second, research on price structures and experience quality is conducted in the hospitality context. The findings help hoteliers and hotel managers determine the influence of price structures on experience quality and behavioural intentions. The findings also clarify the relationships among hospitality image, value, satisfaction, and behavioural intentions, so that hoteliers in Malaysia may gain an in-depth understanding of different customers and of how to meet or even exceed their needs and wants.
Literature Review
There is no significant existing study on the influence of price structures on experience quality and behavioural intention. Although many researchers have examined aspects of price structures, experience quality, and behavioural intention, none of these studies has focused on this model; this motivates the author to propose a framework illustrating the effect of price structures. Bojanic (1996) argued that price and quality are the two fundamental elements used to form unique strategies for gaining a competitive edge; accordingly, researchers such as Cooper et al. (2008) note that pricing decisions are among the toughest decisions in the marketing mix. Recent research by Canina and Carvell (2005) indicated that higher occupancy levels at discounted room rates do not necessarily lead to increased hotel financial performance in the long run. A further study by Enz et al. (2009) on average prices in the hotel industry indicates that hotels that did not offer discounts achieved better profit levels than those that did. The purpose of discounting room rates is to meet company objectives so that financial performance improves; however, this is best described as a short-term pricing goal serving particular strategies. In hotel pricing, rate transparency has been defined as the ability of customers to see the rate for each night of their stay (Carroll & Siguaw, 2003; Rohlfs & Kimes, 2007) and to compare rates across hotels they have pre-selected (Miao & Mattila, 2007). The findings also indicated that consumers prefer itemized pricing; they are willing to pay a higher price if rate increases are transparent to them. Furthermore, Rohlfs and Kimes (2007) found that when hoteliers showed customers the price for each night of the stay, the price was more likely to seem reasonable and be accepted, even though the total price was the same. Researchers have also found that experience quality plays an important role in the hospitality industry: its measurement is subjective, and evaluation thus tends to be holistic and gestalt rather than attribute-based, with the focus of evaluation on the self (internal) rather than on the service environment (external) (Chen & Chen, 2010). Experience quality can therefore be conceptualized as tourists' affective responses to their desired social-psychological benefits (Chen & Chen, 2010). Miao and Mattila (2007) found that their participants were willing to pay a higher price when exposed to actual online pricing that listed hotels in ascending rate order (Hotwire.com and Expedia.com), making it easy to make price comparisons. Several empirical studies have shown that experience quality has a positive relationship with customer behavioural intention. Behavioural intention includes word of mouth, repurchase visits, and loyalty (Ryu & Han, 2010; Zeithaml, 1996; Swanson & Davis, 2003). Behavioural intentions can be described in terms of word of mouth, repurchase intentions, complaining behaviour, and price sensitivity, while low service quality leads to unfavourable behavioural intentions and affects sales (Burton et al., 2003). From another point of view, behavioural intentions can be observed from a customer's decision to remain with, or become isolated from, the service-providing company. The better the customer's experience, the more willing the customer is to reuse the services.
Hypothesis
The present study is considered exploratory in nature. To the best of our knowledge, this is the first investigation to focus on price-structure responses to unexpected experience quality and on behavioural intention to purchase; however, some of our hypotheses are based on previous research.
H1. Price sensitivity has a positive relationship with experience quality.
Conceptual and Operational Definition of Price Sensitivity
Conceptual Definition: Some experts indicate that, among the variables involved in the estimation of individual choice decisions, price is one of the most prominent (Decrop & Snelders, 2004).
Operational Definition: In the selection of tourism activities, it is important to note that tourists gain pleasure from the activities they participate in at all destinations, but they must compare the pleasure obtained with the cost they pay, that is, their money. The price therefore remains a decision element once they arrive at their designated destination (Masiero & Nicolau, 2011). Accordingly, when price sensitivity is high, customers are less likely to tolerate price increases; conversely, when price sensitivity is low, lenders can afford to price for large gains in margin and grow overall profitability (http://www.businessdictionary.com).
H1a. Price transparency moderates the relationship between price sensitivity and experience quality.
H1b. Price transparency moderates the relationship between price discount and experience quality.
Conceptual and Operational Definition of Price Transparency
Conceptual Definition: Studies have found that price perceptions result from comparisons between the internal reference price and the observed retail price, and that such perceptions significantly influence the consumer's decisions on product-category choice, brand choice, and purchase quantity (Kalwani et al., 1990; Rajendran & Tellis, 1994).
Operational Definition: In hotel pricing, rate transparency has been defined as the ability of customers to see the rate for each night of their stay (Carroll & Siguaw, 2003; Rohlfs & Kimes, 2007) and to compare the rates of the hotels they have pre-selected.

H2. Price discount has a positive relationship with experience quality.
Conceptual and Operational Definition of Price Discount
Conceptual Definition: Hanks, Cross, and Noland (2002) argue that discounting room nights is a wise strategy for moving perishable products in service industries, while Finch, Becherer, and Casavant (1998) strongly agree that discounting room nights helps sell the perishable service and eventually increases hotel financial performance.
Operational Definition:
As indicated by Robertico Croes and Kelly J. Semrad (2012), rebate systems on room rates are used to meet hoteliers' objectives and to increase hotel financial performance in the short run. Furthermore, they help bring the local market back to equilibrium when a state of disequilibrium is observed. On the other hand, there is a risk of negative marginal profit. Chatwin (2000) and Vinod (2004) both argue that constant price adjustments in the lodging industry may draw criticism from competitors, especially in the same destination; on the contrary, competitors may adopt the same price-adjustment tactics to protect their own revenues.
Conceptual and Operational Definition of Experience Quality
Conceptual Definition: In the tourism context, experience quality refers to the psychological outcome resulting from customer participation in tourism activities, shaped not only by the attributes provided by a supplier but also by the attributes the visitor brings to the opportunity. Experience quality can therefore be conceptualized as tourists' affective responses to their desired social-psychological benefits.
Operational Definition:
The service experience can be understood in several ways; one example is the personal reactions and feelings customers themselves experience when consuming or using a service. Otto and Ritchie (2000) described the service experience as an important influence on consumers' evaluation of, and satisfaction with, a given service offering.
Conceptual and Operational Definition of Behaviour Intention
Conceptual Definition: Shirai (2009) mentions that consumers are likely to feel happy if they are satisfied with an offered price, or angry if they think it is too high. Price response is thus considered to be composed of both cognitive and emotional responses.
Operational Definition:
Previous research has not fully captured customers' potential behaviours, which are likely to be triggered by the service quality offered. In many cases, positive word of mouth, willingness to recommend to friends and relatives, and repeat purchase are used to measure behavioural intentions (Theodorakis & Alexandris, 2008; Ozdemir & Hewett, 2010). According to Burton et al. (2003), customers' experience of their "packages" is closely related to behavioural intention; therefore, the more positive the experience customers feel, the more likely they are to purchase again.
The Hedonic Price Model
Since pricing structures are introduced into the framework, the author incorporates the hedonic pricing model so that it links pricing structures with experience quality and behavioural intention. Hedonic techniques were developed long before other conceptual frameworks and were applied in price indices, according to Triplett (1986). Meanwhile, Bartik (1987) claimed that the first formal contributions to hedonic price theory were those made by Court before World War II. Although there is other evidence, Colwell and Dilmore (1999) mention that Haas was the first to publish the term "hedonic", in work more than 15 years prior to Court. The term "hedonics" traces back to the Greek word hedonikos, meaning pleasure. In modern economic terms, it refers to the utility or satisfaction derived from the consumption of goods and services.
An issue with the hedonic model is the choice of functional form. A few fundamental functional forms, such as the linear, semi-log, and log-log forms, can be applied to the hedonic price model. According to Blomquist and Worley (1981), an incorrect choice of functional form may yield inconsistent estimates, a point supported by Goodman (1978). Hedonic pricing theory, although it has been known for many years, provides little guidance on how to choose the proper functional form (Butler, 1982; Halvorsen & Pollakowski, 1981). Butler (1982) added that all hedonic price models are to some extent approximations that rely on a small number of key variables; he suggested that only attributes that are costly to produce and that yield utility should be considered in the regression equation. Mok et al. (1995) concluded that biases due to missing variables are relatively small, and thus negligible for the predictive and explanatory power of the equation.
A practical response to missing variables, which may bias the estimated coefficients, is to ensure that the data set used is homogeneous; where there is homogeneity, the use of the hedonic price approach is justified. Finally, to resolve the issue of the stability of the coefficients, panel data (pooled time-series cross-sectional data) are required, so that modelling with different variable coefficients can be adopted.
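Of the functional forms mentioned above, the log-log form has the convenient property that each coefficient reads directly as a price elasticity. A minimal sketch on hypothetical data (the room sizes, prices, and the 0.6 elasticity are all made up for illustration, not taken from the study):

```python
# Illustrative log-log hedonic fit: regress log price on log room size;
# the slope is the price elasticity with respect to size.
import math, random

random.seed(1)
# Synthetic hotel rooms: price = 40 * size^0.6 * noise (assumed, not real data).
rooms = [(s, 40 * s ** 0.6 * math.exp(random.gauss(0, 0.05)))
         for s in [random.uniform(20, 60) for _ in range(200)]]

xs = [math.log(s) for s, _ in rooms]
ys = [math.log(p) for _, p in rooms]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

assert abs(slope - 0.6) < 0.1   # recovers the assumed elasticity
```

With real survey data, hotel attributes (size, location, star rating, and so on) would enter as additional regressors, and the linear and semi-log forms would differ only in which of price and attributes are log-transformed.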
Measurement
A questionnaire was designed for this study to measure hotel customers' perceptions of price structures, experience quality, and behavioural intentions. The questionnaire consists of two parts: the first covers the demographic characteristics of tourists, and the second contains the experience quality and behavioural intention scales. The questionnaire was designed in English only, to measure perceived experience quality and price structures precisely.
Questionnaires
The survey questions were developed from a series of focus groups and informal interviews with customers from all walks of life. A preliminary draft of the questionnaire was then pre-tested on a small sample of respondents with respect to their price-ending choices. The author's specific screening questions are as follows: 1) Have you stayed in any hotel (regardless of star rating) in the past 12 months?
2) Are you required to travel often within Malaysia?
3) Is the price of each room stay a main concern for you?
Yes No
Data Collection
In order to obtain the maximum data for our sample frame, we chose car wash centres as our survey location because they provide: (i) a large pool of customers from different categories; (ii) customers with time for the survey while waiting for their cars to be washed; (iii) customers who are more likely to answer the questionnaire; (iv) customers who are likely to have stayed in a hotel in the past 12 months; and (v) convenient sampling, as high-traffic car washes attract customers from many different backgrounds, such as sales-related (required to travel) and government-related occupations.
Data will be collected randomly during office hours (to reach white-collar employees and employers) and during weekends (when most surveys can be collected). The author's aim is to recruit a minimum of 20 Malaysians for the pre-test and to adjust the questionnaire afterwards so that it is more suitable for interviews. Only those who had stayed in a hotel in Malaysia in the past 12 months were eligible to participate, while the experience was still fresh in their minds. To make a precise contribution to the literature within limited resources, it was decided to use a convenience sample of the public and to customize the task so it would be relevant and meaningful to this population. Calder et al. (1982) suggested that a convenience sample is used to examine theoretical relationships rather than to reveal population parameters, with no intention of generalizing to all travel purchasers.
Plan for Data Analysis
In order to assess the collected data, the author will use multiple regression analysis to test the hypotheses. A correlation analysis will first be conducted, and the multicollinearity between each pair of variables must correspond to a correlation of 0.8 or below, to ensure reliable estimates. Each of the independent variables will then be regressed against the dependent variables, crossing experience quality with the attributes of price structures and behavioural intentions.
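The multicollinearity screen described above can be sketched as a pairwise Pearson correlation check. The variable names, survey scores, and the resulting flags below are hypothetical; only the 0.8 cutoff comes from the text:

```python
# Sketch of the planned multicollinearity screen: compute pairwise Pearson
# correlations among predictors and flag any pair with |r| >= 0.8.
import math

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Likert-style scores for three predictors (made-up numbers).
data = {
    "price_sensitivity":  [3, 4, 2, 5, 4, 3, 1, 5],
    "price_discount":     [2, 4, 3, 5, 5, 2, 1, 4],
    "price_transparency": [5, 1, 4, 2, 1, 4, 5, 2],
}
names = list(data)
flagged = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
           if abs(pearson(data[a], data[b])) >= 0.8]
# Flagged pairs would be reconsidered (dropped or combined) before running
# the multiple regression.
```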
Discussion
It is important for researchers and practitioners to link service quality and customer behavioural intentions in this study, because the results will add to the evidence for the value of service quality research. Furthermore, the relationships between service quality dimensions and behavioural intentions are not yet clear, owing to the different service quality models used and the different contexts in which they are applied (Theodorakis & Alexandris, 2008, p. 166). Some research has found that service quality affects behavioural intentions, as supported by Cronin and Taylor (1992), who found a positive relationship between service quality and purchase intentions. In addition, previous research has demonstrated that service quality is associated with behavioural intentions: for example, in the study of Theodorakis and Alexandris (2008), a survey of 242 spectators identified the tangibles, responsiveness, and reliability dimensions as moderate predictors of the variance in word of mouth. The results show that customers who perceive service quality as high are more likely to demonstrate positive behavioural intentions.
Limitation and Further Research
Although the findings provide relevant and interesting insights into the effect of price structures on experience quality and behavioural intention, it is important to recognize the limitations of this study. First, the data were randomly collected in selected areas only; as a consequence, they may not represent the attitudes and behaviour of all Malaysians. The results, therefore, should not be interpreted as proof of a causal relationship, but rather as lending support. Second, the time frame was too short to collect data from other cities, and data collected elsewhere might differ. Further research can therefore expand on this study by taking samples from different cities with different environments.
Figure 1. Conceptual framework. | 2018-12-17T20:19:16.070Z | 2015-11-30T00:00:00.000 | {
"year": 2015,
"sha1": "02e64ae2ad8e4e803ecbeb3f040ea8e5d466c61e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5539/ijms.v7n6p137",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "02e64ae2ad8e4e803ecbeb3f040ea8e5d466c61e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |